It’s the network, stupid: a population’s sexual network connectivity determines its STI prevalence

There is little consensus as to why sexually transmitted infections (STIs), including HIV and bacterial vaginosis (BV), are more prevalent in some populations than others. Using a broad definition of sexual network connectivity that includes both structural and conductivity-related factors, we argue that the available evidence suggests that the high prevalence of traditional STIs, HIV and BV in certain populations can be parsimoniously explained by these populations having more connected sexual networks. Positive feedback, whereby BV and various STIs enhance the spread of other STIs, then further accentuates the spread of BV, HIV and other STIs. We review evidence that supports this hypothesis and end by suggesting study designs that could further evaluate it, as well as implications of the hypothesis for the prevention and management of STIs.

Introduction

There is little consensus as to why the prevalence of bacterial vaginosis (BV), HIV and other sexually transmitted infections (STIs) varies so dramatically around the world. A range of explanations has been put forward, including variation in circumcision prevalence 1 , STI treatment efficacy 2 , poverty 2-4 , socioeconomic inequality 5 , gender inequality 6 , migration intensity 7 , hormonal contraception 8 , the vaginal microbiome 9 , host genetic susceptibility 10 and sexual behavior 11,12 . We do not dispute that each of these can play a role in differential STI spread. Rather, we argue that differential connectivity of sexual networks emerges as a parsimonious dominant explanation for the global variation in STI prevalence, taking a central position in the causal pathway that links all of the above-mentioned risk factors for STI infection (Figure 1).

Outline and origins of the network connectivity theory

STIs are transmitted along sexual networks and, as a result, the structural characteristics of these networks determine the speed and extent of STI spread 13-15 . These structural characteristics include summary measures of the number of partners per unit time, coital frequency, the prevalence of concurrent partnering (having two or more partners at the same time), the size of core groups (and their connections with non-core populations), type of sex, size of the sexual network, the length of gaps between partnerships and the degree/type of homophily 13,15-20 (reviewed in 21). These structural factors determine the forward reachable path of a network, which is defined as the cumulative set of individuals in a population that can be infected with an STI from an initial seed via a path of temporally ordered partnerships 22 . Two particularly important determinants of the forward reachable path are the prevalence of concurrency and the number of partners per unit time 22 . STI transmission can also be enhanced through a sexual network by factors that increase the conductivity, or probability of STI transmission per sex act. These factors include a low prevalence of circumcision, pre-exposure prophylaxis (PrEP) and condom usage (Figure 1 and Figure 2). Enhanced screening and early, effective treatment of STIs could reduce the spread of STIs by reducing the duration of infectivity. Because numerous STIs enhance the transmission/acquisition of other STIs 23 , effective STI control could then also reduce the conductivity of a network.
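The forward reachable path defined above lends itself to a compact computation. The sketch below is a minimal illustration (with a hypothetical partnership list, not data from the cited studies) that accumulates everyone reachable from an initial seed along temporally ordered partnerships.

```python
# Minimal sketch of the forward reachable path: the cumulative set of people
# reachable from a seed via a path of temporally ordered partnerships.
# Partnerships are hypothetical and recorded as (person_a, person_b, start, end).
def forward_reachable_path(partnerships, seed, seed_time=0):
    earliest = {seed: seed_time}        # earliest possible infection time per person
    changed = True
    while changed:                      # relax edges until reachability stabilizes
        changed = False
        for a, b, start, end in partnerships:
            for src, dst in ((a, b), (b, a)):
                if src in earliest and earliest[src] <= end:
                    t = max(start, earliest[src])   # transmission needs temporal overlap
                    if dst not in earliest or t < earliest[dst]:
                        earliest[dst] = t
                        changed = True
    return set(earliest)

# Toy example: B's two partnerships overlap in time (concurrency), so infection
# seeded in A can pass on to C and D; if the B-C partnership had ended before
# A-B began, C and D would be unreachable despite identical partner numbers.
partnerships = [("A", "B", 0, 2), ("B", "C", 1, 3), ("C", "D", 2, 4)]
print(forward_reachable_path(partnerships, "A"))   # {'A', 'B', 'C', 'D'}
```

The toy example makes concrete why concurrency, and not just partner number, shapes the reachable path: the same three partnerships with non-overlapping, mis-ordered timing would trap the infection in the first dyad.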
We use a broad definition of network connectivity in this paper that includes both these structural and conductivity-related factors. The origins of this network connectivity theory lie in the STI modelling field. Modelling studies from the 1970s established the importance of the rate of partner change and of mixing between core and non-core groups to STI spread 24,25 . Seminal modelling papers by Morris et al. 13 and Watts et al. 15 in the 1990s built on these findings by revealing that the prevalence of sexual partner concurrency may be a particularly important determinant of network connectivity. Their analyses found that relatively small increases in concurrency could lead to dramatic increases in network connectivity and, as a result, HIV spread 13 . The main mechanisms whereby concurrency promotes STI spread are illustrated in Figure 1. A number of empirical studies have subsequently established that markers of network connectivity such as concurrency and rate of partner change are correlated with the prevalence of all major STIs (Table 1). In this paper, we review some of the cross-sectional and longitudinal evidence that two components of network connectivity (concurrency and rate of partner change) are associated with STI prevalence. We then summarize evidence that network connectivity influences the prevalence of BV and end by noting the potential for positive feedback loops between various STIs, underpinned by network connectivity.

Figure 1. Schematic comparison of STI spread in low (left) and high (right) sexual network connectivity populations soon after sexual debut. In both populations, STI acquisition commences when 'A' has sex with an older man and acquires BV-associated bacteria (yellow) and HSV-2 (black border around each node). In the high connectivity population 'A' also acquires T. vaginalis (TV; red) from this relationship. The major determinant of the difference in network connectivity is that more relationships run concurrently in the high connectivity population. This facilitates STI spread by: i) creating a larger reachable path for STIs 13 ; ii) removing the benefits of partner sequencing seen in serial monogamy (for details see 21); iii) reducing the time between STI transmissions, since infections are not trapped in dyads 21 ; and iv) bypassing the rapid-clearance-in-males buffer 31,32 . This is the buffer that reduces STI spread in serially monogamous networks where the gap between partnerships (time points 1 and 2) is longer than the duration of colonization of TV and BV-associated bacteria in men. This gap protects the women at time point 2 in the serial monogamy/low connectivity population (represented by the partner of B at time point 2), but not in the high connectivity population, from BV and TV acquisition. Various STIs, including HSV-2, BV and TV, enhance the susceptibility/infectiousness of other STIs, leading to positive feedback loops; this is conveyed by depicting transmission probabilities as proportional to edge width. The high connectivity network also has a low prevalence of circumcision and condom usage, which further increases STI transmission probabilities in this population. The combination of high network connectivity and low circumcision/condom use leads to a rapid spread of multiple STIs in the high- but not the low-connectivity network (blue nodes, no STIs; squares, men; circles, women).

Figure 2. Conceptual framework for understanding the genesis of differences in STI prevalence between populations. Note that we use directed cyclic graphs as short-hand notation for an infinite acyclic directed graph containing variables indexed by time. Our definition of network connectivity is broad: in addition to considering the sexual links between individuals, it takes into account the "conductivity" and timing of these links.

Markers of network connectivity are correlated with the prevalence of STIs: cross-sectional evidence

a. Ethnic group comparative analyses

USA: In the United States the prevalence of BV, HIV and most STIs in non-Hispanic blacks in the 1990s was considerably higher than in non-Hispanic whites (Figure 3). Historical data are limited, but the available data demonstrate that these divergences in prevalence extend back to the 1930s for syphilis 26,27 and the 1970s for HSV-2 28 . Morris et al. used five large national behavioural surveys to investigate which possible risk factors could underpin these differences in HIV prevalence, and found that the prevalence of concurrency was on average 3.5 and 2.1 times higher in non-Hispanic black men and women, respectively. In their modelling analysis, they found that these differences in concurrency prevalence between the groups translated into 2.6-fold differences in HIV prevalence. They did not, however, model the enhanced transmission probability associated with acute HIV, which subsequent analyses have shown to have a synergistic effect with concurrency on HIV transmission 29 . Subsequent studies have demonstrated that concurrency plays an important role in the spread of the other STIs, and thus the differential concurrency prevalence they found could represent a parsimonious explanation for the differences in the range of STI prevalences shown in Figure 3.

Southern Africa: HIV prevalence varies 40-fold between ethnic groups in South Africa 30 . Analyses from 5 nationally representative behavioural surveys revealed that the most plausible risk factors that could explain this were the 5- to 17-fold higher prevalence of male concurrency and the higher number of partners per year in the highest compared to the lowest HIV prevalence ethnic group 30 . A modelling study likewise demonstrated that the combination of concurrency and rate of partner change was responsible for approximately 75% of the HIV infections in the 1990s, when antenatal HIV prevalence increased from 0.7% to 24.5% 62 . In a similar vein, modelling studies from Zimbabwe found that both the observed high prevalence of concurrency and the increased transmission probability associated with acute HIV were needed to replicate Zimbabwe's explosive HIV epidemic curve 29 .

Elsewhere: The prevalence of concurrency and/or number of partners has also been found to be associated with variations in HIV prevalence by ethnic group in Ethiopia 43 , Honduras 63 , Kenya 64 , Uganda 65 and the United Kingdom 66-68 . Although no published study has assessed how generalizable these findings are globally, one study attempted to do this within sub-Saharan African countries. This study used demographic and health surveys to systematically assess the behavioural correlates of HIV prevalence by region (as a proxy for ethnic group) in 47 surveys from 27 African countries where HIV prevalence varied by at least two-fold between regions. It found that the lifetime number of partners reported by men and women was positively correlated with HIV prevalence in 23 and 18 out of 36 surveys, respectively.
Likewise, reporting sex with a non-marital, non-cohabiting partner by men and women was positively correlated with HIV prevalence in 38 and 39 out of 47 surveys, respectively 69 .

The relationship between ethnicity, race and STI prevalence

It is of paramount importance to emphasize that our hypothesis makes no reference to race. The hypothesis proposes that there are differences in sexual behavior between different groups of people which translate into differences in network connectivity and, as a result, differential STI prevalence. These groups can be defined by sexual orientation, ethnicity, social class, caste or whatever categories meaningfully segregate sexual networks. These categories are social constructs and thus vary considerably across time and place. It is our considered opinion that investigators who conduct investigations into STI epidemiology using these categories should do so with sensitivity to the concerns as to how these categories are and have been used and abused. Other authors have hypothesized that biological differences between racial groups play an important role in STI epidemiology. A recent form of this argument is that 'black populations' are innately more likely to have bacterial-vaginosis-type vaginal microbiomes, which in turn facilitate the transmission of various STIs in these populations 9 . We and others have argued that the evidence does not support this and other race-based explanations of differential STI spread 70,71 . As an example, we noted that 'black populations' with evidence of low sexual network connectivity have a very low prevalence of BV and, conversely, 'white populations' with high connectivity have a high BV prevalence 70,71 .

b. Country-level comparisons

In the country-level analysis we focus on studies that investigate the correlates of country-level peak HIV prevalence. Peak HIV prevalence, which represents the maximal HIV prevalence that countries attained prior to the widespread availability of antiretroviral therapy, is a useful composite measure of the factors that enabled the rapid spread of HIV 72 . Only two risk factors have been consistently found to be associated with peak HIV prevalence: circumcision and the prevalence of concurrency.

i. Circumcision: There is a strong negative association between circumcision and peak HIV prevalence within sub-Saharan Africa, but not globally 11,77 . This is unsurprising, since sub-Saharan Africa has the highest prevalence of HIV and the second highest prevalence of circumcision in the world 77,79 . The vast majority of the world's population lives in countries with both low HIV and low circumcision prevalence 11 . Various lines of evidence suggest that something else is driving the spread of HIV in sub-Saharan Africa and that circumcision is then moderating this risk 77,80 .

ii. Concurrency prevalence: The prevalence of male concurrency has been found to be associated with peak HIV prevalence in a cross-country study 48 . Other studies have, however, failed to find this association 81,82 , but serious methodological questions have been raised pertaining to these studies, including the fact that one of them compared 5-year cumulative concurrency rates from European countries with the point prevalence of concurrency in African countries 48,81,83 . A further problem related to these cross-national comparisons is that national populations are frequently composed of multiple subpopulations that may have large differences in HIV prevalence.
In 29 sub-Saharan countries with available data, for example, HIV prevalence was found to vary by a median of 3.7-fold (IQR 2.9-7.9) between regions within countries 69 . As argued above, more fine-grained studies investigating the correlates of HIV prevalence by ethnic group or region within these and other countries have found a range of markers of network connectivity (such as partner number and concurrency) and other risk factors to be associated with HIV prevalence 12 .

Markers of network connectivity are correlated with the incidence of STIs: longitudinal evidence

In this section, we consider two (of many possible) examples where large changes in STI incidence were preceded by corresponding changes in network connectivity.

a) Men who have sex with men (MSM) in the United States

Although this evidence is indirect and susceptible to confounding, it is at least suggestive that the two large increases and one precipitous decline in syphilis incidence were determined to some extent by corresponding changes in network connectivity. Of note, this epidemic trajectory of syphilis in MSM in the United States was similar to that of a range of other STIs, such as LGV and gonorrhoea, in this same population and in MSM in other high-income countries 84 . In a range of European countries where MSM were similarly affected by the AIDS epidemic, the incidence of STIs such as syphilis, LGV and gonorrhoea declined to very low rates in the post-AIDS period before increasing markedly from the late 1990s onwards, corresponding to increases in partner number and declines in condom usage 84 .

b) Southern and Eastern Africa

A number of studies from general populations in Southern and Eastern Africa have concluded that reductions in partner number and concurrency played an important role in the impressive declines in HIV incidence in Uganda, Zimbabwe and other countries in the region 95-100 . Delayed sexual debut, increased condom usage, enhanced antiretroviral therapy coverage and AIDS mortality (via reduced network connectivity) also played an important role in this regard 96,100,101 .

Clustering of STIs includes incurable STIs, and network connectivity is the most parsimonious way to explain this clustering

We have already noted the striking clustering of STIs within certain ethnic groups and sexual orientations in a number of countries. Strong evidence of clustering of STIs has also been found at WHO world-regional 102 and country levels. At a country level, peak HIV prevalence has been found to be associated with the prevalence of a range of STIs before/early in the HIV epidemics: syphilis 103 , gonorrhoea 104 , HSV-2 103 , trichomoniasis 104 and BV 104 . This clustering of STIs is important for two reasons. Firstly, it suggests that one or more common risk factors could underpin variations in all these STIs. Secondly, the treatable but incurable STI HSV-2 is correlated with both peak HIV prevalence 103 and antenatal syphilis prevalence from the pre-HIV period 66 . This is relevant because differential STI treatment efficacy can explain differences in the prevalence of treatable STIs such as syphilis, but not of HSV-2. Differential network connectivity, which can explain the differential spread of all STIs, is thus a more parsimonious way to explain the clustering of STIs. Previous modelling studies have found that relatively small increases in parameters of network connectivity can lead to nonlinear increases in HIV/STI spread 13 .
If this applies to BV as well, then more connected sexual networks would be expected to facilitate the rapid spread of BV and various STIs soon after sexual debut. These infections would then increase susceptibility to, and transmission of, other STIs, adding a further means by which enhanced network connectivity could increase STI spread. Network connectivity would thus indirectly enhance the probability of transmission per sex act for different STIs (Figure 1).

Limitations

It should be emphasized that this paper presents a narrative, non-systematic review of the evidence for network connectivity as a parsimonious explanation of variations in genital microbiomes and STI prevalence. As such, our sampling of evidence is likely biased: we acknowledge that we have picked evidence that is supportive of our hypothesis. Our definition of network connectivity could also be criticized as being impractical because it includes such a breadth of structural and conductivity variables. Consequently, in our conceptual framework of network connectivity, different combinations of these variables could yield the same STI prevalence. Considerable further work is necessary to construct formulae for the determinants of network connectivity and then establish how these relate to empirical estimates of STI prevalence around the world. A global study that uses a standardized methodology (Table 2) to map the variations in STI prevalence and associated risk factors by ethnic group/region within all relevant countries could provide valuable further information. So too, longitudinal studies that follow high and low STI prevalence populations from the time of sexual debut would be useful. These should accurately map the timing and correlates of STI spread, including alterations of vaginal and penile microbiomes, and allow more precise quantitation of which risk factors are most important for STI spread. Such studies should enable the construction of more accurate models of STI spread that can be used to predict STI prevalence for specific populations under various counterfactual scenarios, such as reductions in the prevalence of concurrent partnering. Our theory includes mention of the wide array of upstream socioeconomic and political factors that have been shown to influence the spread of STIs 123 . We argue that the pathways through which these factors facilitate STI transmission are to a large extent mediated via alterations in network connectivity 124 . We have not, however, gone into detail in reviewing the evidence on which this view is based 123,125,126 . Furthermore, our focus on the more downstream factors responsible for STI transmission should not detract from efforts to target the upstream determinants of enhanced STI transmission.

Implications of network connectivity: Know Your Network, Determine Your Prevalence

If confirmed by further experimental data, the network connectivity approach would generate new opportunities for STI prevention interventions. Whilst individual-level biomedical STI control interventions have delivered considerable successes, they do not address the root cause of high STI prevalence and are therefore unlikely to accomplish radical prevention 126,127 . HIV pre-exposure prophylaxis and treatment-as-prevention, for example, may reduce HIV transmission but will not reduce the transmission of other STIs.
If differential network connectivity is a fundamental determinant of STI and BV prevalence, then this could be communicated to affected populations as an opportunity to effect radical prevention. Along these lines, a 'Know your Network' intervention has been successfully piloted in Kenya 128 . During a community meeting, the community's sexual network was computed by fitting a dynamic network model to data from individual sexual diaries, and a graphical representation of the network was fed back to the community. Participants reported the intervention to be transformative, but formal trials are required to assess its efficacy on STI incidence 129 . Similar processes elsewhere in Africa 100,130 , which resulted in dramatic declines in side-partners and HIV incidence, could be viewed as providing both guidance and evidence for this approach.

Data availability

No data are associated with this article.

Grant information

The author(s) declared that no grants were involved in funding this work.

Reviewer report

This is a well referenced, very interesting paper which attempts, and in large part succeeds, in critically examining, and in many cases rejecting, common "risk factors/determinants" of high STI (sexually transmitted infection) rates. The authors have traced a long history in mathematical modelling of primarily HIV and summarised the most common and consistently found explanations of "peak HIV" rates at an ecological level. I think this article is thought provoking and even if one is not entirely convinced by it, it adds much deeper and more critical thought to assumptions of mathematical modellers and epidemiologists about the immediate causes of HIV than I have ever seen. I am glad it is published. I think the paper could be improved in general by addressing two major areas. First, some mention of the empirical study of sexual networks should be made. There are examples from all over the world which the authors may use to strengthen their ultimate hypothesis: that sexual interactions between sexual network members, some of whom are infected with an STI, are the primary and necessary conditions for an STI to transmit. This will help define for their readers very clearly the fact that because ecological findings focus on whole systems, whether national or regional, these studies are more indicative of large networks and should not be regarded as "weaker" evidence, as they are in epidemiology. The authors can also argue that the ultimate risk factor on an individual level for acquiring gonorrhoea, for example, is having unprotected sex with another person with gonorrhoea; that is it and that is all. Other factors are moderators of that plain risk, such as use of condoms, cumulative frequency of intercourse, etc. Concurrency or number of sex partners are indirect indicators of the risk itself; because transmission of an STI is an interaction which occurs between at minimum two people, the other "factors" are reduced to characteristics or behaviors of individuals within the population, which again are proxies, though more proximal than age, sex, ethnic background or income. The second thing I would be very clear on is the part played by ethnic group 1,2 . This is again just a proxy for sexual interactions and cultural norms, and may not be a consistent nor accurate marker. However, it is very easy in the discussion of HIV in Africa to "racialize" the epidemic and/or behavior.
For example, the effect of the labour policies under apartheid South Africa, where black labourers were forced into a pattern of migrant labor for a year at a time from their rural homes to large urban or mining compounds, did much to accelerate sexual interactions, which in turn exacerbated HIV spread. Likewise, sexual violence in some parts of Africa is extremely high and, while it is not mentioned here specifically, certainly does contribute to transmission. However, this also is linked to harsh colonial conditions, and the authors could make it very clear that this should not be interpreted as being an intrinsically ethnic or cultural characteristic. The successful Ugandan educational intervention is a great demonstration of that. One of the best network explanations of different STI rates in people of different ethnic backgrounds can be found in , pages 689-697.

More minor comments follow:
Page 2 - I love Figure 1.
Page 3, 2nd paragraph - better give credit where credit is due.
Page 3, 3rd paragraph - so concurrency may be a marker of connectivity, or a determinant, but one may equally pose that if one has many sex partners the only way to fit them all in is to have concurrent ones, and that may have an effect also on frequency of intercourse with each partner. Lots of food for thought!
Page 4 - wonderful summary! I would love to see a systematic review which includes an exhaustive list with negative findings. Again, one could look for only proximal causes of transmission rather than secondary or structural determinants.
Page 5 - you state that "Peak high prevalence HIV is based generally of high quality longitudinal data". Please justify and provide references.
Page 5 - the statements on poverty and socioeconomic status and gender inequality, which are not consistently associated with HIV, are great; this is because these determinants are only distally linked to transmission.
Page 5, paragraph 8 - to describe numbers of partners more accurately I would use the median, range or IQR, as the mean implies these distributions are Gaussian, but they are not. The devil is literally in "de tail". Also one may have to look at the penile-anal penetration risk of transmission, which is not exactly comparable to penile-vaginal sex. But I agree in principle.
Page 7, paragraph 1 - I would be more accurate here; HSV is certainly treatable, but not as easily diagnosed or treated as the bacterial STIs.

Author response

Reply (two major areas): In addition to small additions to the text along the lines suggested, we have added the two references suggested and two new paragraphs, which appear in the main text above: 'The relationship between ethnicity, race and STI prevalence' (Page 6, L28) and the paragraph on upstream socioeconomic and political factors (Page 10, L22).

Reply (Page 3, 2nd paragraph): This reference has been added.

Reply (Page 3, 3rd paragraph): Indeed, increasing concurrency tends to lead to an increase in the number of partners per unit time. Interestingly, a number of modelling studies have shown that, in certain scenarios, increasing concurrency whilst keeping the total number of partnerships per unit time unchanged can still result in increases in markers of connectivity such as the forward reachable path. In the datasets we have reviewed, however, the two tend to covary at population level, and thus in our estimation a reasonable case can be made not to consider these two variables in isolation.

Reply (Page 4): Hopefully someone reading this will be tempted to do this review.

Reply (Page 5, peak HIV prevalence): A justification for this assertion has been provided and backed up with three new references (Page 7, L9-14).
The CompTox Chemistry Dashboard: a community data resource for environmental chemistry

Despite an abundance of online databases providing access to chemical data, there is increasing demand for high-quality, structure-curated, open data to meet the various needs of the environmental sciences and computational toxicology communities. The U.S. Environmental Protection Agency’s (EPA) web-based CompTox Chemistry Dashboard is addressing these needs by integrating diverse types of relevant domain data through a cheminformatics layer, built upon a database of curated substances linked to chemical structures. These data include physicochemical, environmental fate and transport, exposure, usage, in vivo toxicity, and in vitro bioassay data, surfaced through an integration hub with link-outs to additional EPA data and public domain online resources. Batch searching allows for direct chemical identifier (ID) mapping and downloading of multiple data streams in several different formats. This facilitates fast access to available structure, property, toxicity, and bioassay data for collections of chemicals (hundreds to thousands at a time). Advanced search capabilities are available to support, for example, non-targeted analysis and identification of chemicals using mass spectrometry. The contents of the chemistry database, presently containing ~760,000 substances, are available as public domain data for download. The chemistry content underpinning the Dashboard has been aggregated over the past 15 years by both manual and auto-curation techniques within EPA’s DSSTox project. DSSTox chemical content is subject to strict quality controls to enforce consistency among chemical substance-structure identifiers, as well as list curation review to ensure accurate linkages of DSSTox substances to chemical lists and associated data. The Dashboard, publicly launched in April 2016, has expanded considerably in content and user traffic over the past year. It is continuously evolving with the growth of DSSTox into high-interest or data-rich domains of interest to EPA, such as chemicals on the Toxic Substances Control Act listing, while providing the user community with a flexible and dynamic web-based platform for integration, processing, visualization and delivery of data and resources. The Dashboard provides support for a broad array of research and regulatory programs across the worldwide community of toxicologists and environmental scientists. Electronic supplementary material: The online version of this article (10.1186/s13321-017-0247-6) contains supplementary material, which is available to authorized users.

2.6. Date of model development and/or publication: 2016
2.7. Reference(s) to main scientific papers and/or software package:
[1] An automated curation procedure for addressing chemical errors and inconsistencies in public https://cfpub.epa.gov/si/si_public_record_Report.cfm?dirEntryId=311655
[7] The importance of data curation on QSAR Modeling: PHYSPROP open data as a case study.
3.3. Comment on endpoint: The logarithm of the ratio of a contaminant concentration in biota to its concentration in the surrounding medium (water).
3.7. Endpoint data quality and variability:
4. Defining the algorithm - OECD Principle 2
4.2. Explicit algorithm: Distance-weighted k-nearest neighbors (kNN). This is a refinement of the classical kNN classification algorithm in which the contribution of each of the k neighbors is weighted according to its distance to the query point, giving greater weight to closer neighbors. The distance used is the Euclidean distance. kNN is an unambiguous algorithm that fulfills the transparency requirements of OECD Principle 2 with an optimal compromise between model complexity and performance.
5.2. Method used to assess the applicability domain: The applicability domain of the model is assessed at two independent levels using two different distance-based methods. First, a global applicability domain is determined by means of the leverage approach, which checks whether the query structure falls within the multidimensional chemical space of the whole training set. The leverage of a query chemical is proportional to its Mahalanobis distance from the centroid of the training set. The leverages of a given dataset are obtained from the diagonal values of the hat matrix. This approach is associated with a threshold leverage of 3*p/n, where p is the number of model variables and n is the number of training compounds. A query chemical with a leverage higher than the threshold is considered outside the AD and can be associated with unreliable predictions. The leverage approach has specific limitations, in particular with respect to gaps within the descriptor space of the model or at the boundaries of the training set. To obviate such limitations, a second tier of applicability domain assessment was added. This comprises a local approach that investigates only the vicinity of the query chemical. The local approach provides a continuous index ranging from 0 to 1, unlike the first approach, which only provides Boolean (yes/no) answers. This local AD-index reflects the similarity of the query chemical to its 5 nearest neighbors in the p-dimensional space of the model. The higher this index, the more reliable the prediction is likely to be.
5.3. Software name and version for applicability domain assessment: Implemented in OPERA v1.02, an implementation of a local similarity index and the leverage approach based on the work of
5.4. Limits of applicability: The two AD methods described in Section 5.2 are complementary and can be interpreted in the following way:
- If a chemical is considered outside the global AD with a low local AD-index, the prediction can be unreliable.
- If a chemical is considered outside the global AD but the local AD-index is average or relatively high, the query chemical is on the boundaries of the training set but has quite similar neighbors; the prediction can be trusted.
- If a chemical is considered inside the global AD but the local AD-index is average or relatively low, the query chemical fell in a "gap" of the chemical space of the model but is still within the boundaries of the training set and surrounded by training chemicals; the prediction should be considered with caution.
- If a chemical is considered inside the global AD with a high local AD-index, the prediction should be considered reliable.
6.6. Pre-processing of data before modelling: No preprocessing of the values. RMSE = 0.55. A plot of the experimental versus predicted values for the training set is provided in supporting information Section 9.3.
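As an illustration of the two techniques named in Sections 4.2 and 5.2, the sketch below implements a distance-weighted kNN prediction and the global leverage check against the 3*p/n threshold. It is a minimal reading of the text with hypothetical toy data, not the OPERA implementation.

```python
# Minimal sketch (hypothetical data; not the OPERA source code) of the
# distance-weighted kNN prediction (Section 4.2) and the leverage-based
# global applicability-domain check (Section 5.2).
import numpy as np

def weighted_knn_predict(X_train, y_train, x_query, k=5, eps=1e-12):
    """Distance-weighted kNN: neighbors contribute proportionally to 1/distance."""
    d = np.linalg.norm(X_train - x_query, axis=1)   # Euclidean distances
    idx = np.argsort(d)[:k]                         # k nearest neighbors
    w = 1.0 / (d[idx] + eps)                        # closer => larger weight
    return float(np.sum(w * y_train[idx]) / np.sum(w))

def leverage_ad(X_train, x_query):
    """Global AD: leverage of the query vs. the 3*p/n threshold (hat matrix)."""
    XtX_inv = np.linalg.pinv(X_train.T @ X_train)
    h = float(x_query @ XtX_inv @ x_query)          # leverage of the query point
    n, p = X_train.shape
    return h, 3.0 * p / n                           # (leverage, threshold)

# Hypothetical descriptor matrix (n compounds x p descriptors) and endpoint.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(50, 4)), rng.normal(size=50)
q = rng.normal(size=4)
pred = weighted_knn_predict(X, y, q)
h, h_star = leverage_ad(X, q)
print(f"prediction = {pred:.3f}, inside global AD: {h <= h_star}")
```

The local AD-index of Section 5.2 would be computed analogously from the similarity of the query to its 5 nearest training neighbors, yielding a continuous 0-1 score rather than the Boolean leverage verdict.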
7.5. Other information about the external validation set: The validation set consists of 161 chemicals. The values range from ~-0.3 to ~5.
7.6. Experimental design of test set: The structures were randomly selected to represent 25% of the available data, keeping a similar normal distribution of LogBCF values in both training and test sets, using the Venetian blinds method. A plot of the distribution of LogBCF values is provided in the supporting information Section 9.3.
8.1. Mechanistic basis of the model: The model descriptors were selected statistically, but they can also be mechanistically interpreted.
8.2. A priori or a posteriori mechanistic interpretation: A posteriori mechanistic interpretation.
8.3. Other information about the mechanistic interpretation: For more details and full references, see Section 4.3 and Section 9.2.
9.1. Comments: This QSAR model for BCF prediction is part of the NCCT_Models Suite, a free and open-source standalone application for the prediction of physicochemical properties and environmental fate of chemicals. This application is available in the supporting information Section 9.3 of this report and in the paper cited as ref 2 in Section 2.7. The detailed results of this suite of models applied to more than 700k DSSTox chemicals are available on the iCSS chemistry dashboard.
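Section 7.6's "Venetian blinds" split can be illustrated compactly: after ordering samples by the endpoint value, every fourth sample is assigned to the test set, which preserves the response distribution in both subsets. The sketch below uses hypothetical LogBCF values; the sample size is an assumption chosen only so that 25% yields 161 test chemicals, matching Section 7.5.

```python
# Minimal sketch of a "Venetian blinds" train/test split (hypothetical data).
import numpy as np

def venetian_blinds_split(y, n_splits=4):
    """Return (train_idx, test_idx); every n_splits-th sample (by sorted y) is test."""
    order = np.argsort(y)            # sort by endpoint value (e.g., LogBCF)
    test = order[::n_splits]         # every 4th sample -> ~25% test set
    train = np.setdiff1d(order, test)
    return train, test

# Toy LogBCF values; 644 samples so that the 25% test set has 161 chemicals.
y = np.random.default_rng(1).normal(loc=2.0, scale=1.0, size=644)
train_idx, test_idx = venetian_blinds_split(y)
print(len(train_idx), len(test_idx))   # 483 train, 161 test
```

Because the test samples are interleaved across the sorted response, both subsets span the full endpoint range, which is the property the QMRF invokes when it says the split keeps "a similar normal distribution" in training and test sets.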
Polymer Hydrogels for Wastewater Treatment

The pollution of water resources has become a worldwide problem because of the indiscriminate disposal of both organic and inorganic pollutants. It remains difficult to manage or control the purification of wastewater before it flows into water reservoirs. The growing interest in the development and application of novel hydrogels in wastewater treatment is due to their particular chemical characteristics, such as hydrophilicity, sensitivity, and functionality. Hydrogels exhibit superior performance in the adsorptive removal of a wide variety of aqueous pollutants, including heavy metals, nutrients, and toxic dyes. In this chapter, we focus on the behavior and importance of the hydrogels used so far for the removal of both organic and inorganic pollutants from wastewater. With this contribution, we elaborate on why these hydrogels are superior to other materials used for the same purpose. Particular attention is given to the removal of heavy metal ions from wastewater using different hydrogel systems.

Introduction

In recent times, the fast growth of industry has caused serious problems in the natural environment. The effluents of many industries, including paint production, metal plating, food processing, pharmaceutical manufacturing, and battery production, contain heavy metal ions, dyes, and organic materials and are discharged directly into water bodies, causing water pollution. Above permissible limits, these pollutants have serious effects on human beings and other terrestrial and aquatic animals, as they penetrate and accumulate in the body through food chains [1]. For the remediation and purification of waste-contaminated water, a number of different strategies have been used, including chemical precipitation [2], ion exchange [3], biological methods [4], membrane separation [5], reverse osmosis [6], coagulation and flocculation [7], catalysis [8-11], photodegradation [12], and adsorption [13,14]. Among these strategies, adsorption is considered a cheap, quick, and environmentally friendly process for wastewater treatment. Generally, the adsorption process is broadly categorized into chemisorption and physisorption. Chemisorption, also called chemical adsorption, involves the formation of a chemical bond between adsorbate and adsorbent and therefore behaves as an irreversible process. Physisorption, or physical adsorption, takes place through physical interactions such as hydrogen bonding, van der Waals forces, and hydrophobic interactions between adsorbate and adsorbent, and acts in a reversible manner. Physical conditions such as pH, ionic strength, adsorbate and adsorbent dosage, contact time, and temperature are the most important factors affecting the adsorption capability of hydrogels. The optimization of these factors is crucial and should be considered first when designing an adsorption process at large scale [15,16]. A number of efforts have been made in the field of hydrogels for wastewater treatment. In most studies, researchers focus on the removal capability of hydrogels toward toxic organic dyes and toxic inorganic heavy metal ions; nowadays, special interest is also given to emerging pollutants. Prominent emerging pollutants include pharmaceuticals, drugs, insecticides, pesticides, and other toxic chemical substances [33]. Even at very low concentrations in wastewater, these pollutants are highly dangerous to humans and aquatic animals [34].
Graphene oxide (GO)-based hydrogels for wastewater treatment

Polymer hydrogels are used for purification purposes, but their weak mechanical strength restricts their use to specific conditions. Therefore, to widen the application window of hydrogels, GO or other inorganic species have been introduced to fabricate composite hydrogels with enhanced mechanical strength. Besides providing mechanical strength, GO sheets show excellent adsorption capacity for the elimination of toxic organic dyes from aqueous environments. Recently, Guo et al. [35] carried out the facile synthesis of GO/polyethylenimine (GO/PEI) hydrogels by incorporating GO into a polyethylenimine network, producing a green adsorbent with enhanced removal capacity toward organic dyes. The GO/PEI hydrogels are held together by hydrogen bonding and electrostatic interactions between the amine groups of PEI and the GO sheets. The removal performance was studied for both methylene blue (MB) and rhodamine B (RhB). The as-prepared hydrogels showed complete removal of these dyes within 4 h, following pseudo-second-order kinetics. The superior dye removal ability of the hydrogels is strongly attributed to the GO sheets, while the PEI is responsible for facilitating the gelation of the GO sheets. A further attraction of these hydrogels is that they can be recovered from the aqueous environment and reused without any trouble, suggesting the potential importance of these materials for wastewater treatment. Figure 1 shows the various steps in the formulation of GO/PEI hydrogels. The GO sheets, which carry abundant hydrophilic functional groups such as carboxyl, hydroxyl, and epoxide groups on their surface (Figure 1A), can form hydrogen bonds with the amine groups of PEI under appropriate conditions. Consequently, PEI (Figure 1B) facilitates the gelation of GO sheets in aqueous solution and also exhibits suitable adsorption and adhesion properties. It was found that the dye adsorption capacity of the GO/PEI hydrogels increased with the amount of PEI in the polymer network, owing to the electrostatic attractions between the amine functionalities in the polymer network and the dye molecules. From these results, we can conclude that the adsorption capability of the GO/PEI hydrogels is largely attributable to PEI, while GO increases the mechanical strength. For this reason, PEI is widely applied in composite materials as a robust chelating agent and organic intermediate. Furthermore, the GO/PEI composite hydrogel showed very stable self-assembly behavior through hydrogen bonding and electrostatic interactions, indicating that the PEI is retained in the network rather than released to cause secondary waste, an important property for dye adsorbents in wastewater treatment.

Jute/polyacrylic acid hydrogel systems for wastewater treatment

In practice, materials having high adsorption capacity, rapid removal kinetics, reusability, and low cost are preferred for wastewater treatment. To achieve these properties, a porous jute/polyacrylic acid (Jute/PAA) hydrogel was prepared. The high permeability and ~80 wt% water content of the Jute/PAA polymer network make the inner sites fully available for the adsorption of metal ions. The Jute/PAA gel adsorbs heavy metal ions, particularly Cd2+ and Pb2+, from wastewater with very high adsorption capacities of 401.7 and 542.9 mg/g for Cd2+ and Pb2+, respectively.
Furthermore, adsorption equilibrium was reached within 10 min for an initial ion concentration of 40 mg/L using 1 g/L of hydrogel. Meanwhile, the removal efficiencies reached 81% for Pb2+ and 79.3% for Cd2+. The materials were also tested for other metal ions, such as Cu, Zn, Mn, Cr, and Fe, in smelting wastewater under the same environmental conditions using different amounts of hydrogel, and the results are tabulated in Table 1. The concentrations of Pb, Cd, and Cr were reduced below 0.001 mg/L with the use of 4 g/L of adsorbent. In fixed-bed column experiments, the treated quantity of smelting wastewater reached 2900 bed volumes (BV; 32.8 L) while generating only 50 BV (565 mL) of eluent. This study strongly supports the development of a realistic adsorption system based on hydrogel adsorbents for wastewater treatment. The removal performance of the hydrogels toward heavy metal ions was therefore investigated in real industrial water collected from a smelting plant. The preferential removal of Fe and Cr to low levels is attributed to the high average valence electron energy and the 3d6 4s2 subshell configuration, which provides empty orbitals and strong coordination ability [36]. When the adsorbent dosage was increased to 2 g/L, Pb was preferentially removed among the divalent metal ions, with a residual concentration below 0.001 mg/L, probably because of the higher electronegativity of Pb. On further increasing the Jute/PAA gel dosage to 4 g/L, the Cd ions could be completely adsorbed, and the removal efficiencies of Cu, Zn, and Mn ions reached up to 99.8, 90.5, and 61.6%, respectively. From the obtained results, it is confirmed that the Jute/PAA hydrogel has a strong capability for the removal of heavy metal ions from industrial effluents. Table 1 shows the adsorption data for heavy metal ions at different Jute/PAA hydrogel dosages after treatment for 2 h.

Carboxymethyl cellulose/2-acrylamido-2-methyl propane sulfonic acid hydrogels

A series of functional copolymer hydrogels composed of carboxymethyl cellulose (CMC) and 2-acrylamido-2-methyl propane sulfonic acid (AMPS) has been synthesized using γ-radiation-induced copolymerization and cross-linking, and their swelling ability was investigated under optimized conditions. The capacity of these hydrogels was tested in the recovery of toxic heavy metal ions, i.e., Mn2+, Co2+, Cu2+, and Fe3+, from their aqueous solutions. The hydrogels showed a pronounced metal ion removal ability, which is due to the presence of AMPS in the hydrogel composition; AMPS has a strong chelating potential and forms stable interactions with metal ions. Therefore, by increasing the AMPS concentration in the polymer chains, the chelating potential increases and the hydrogels show enhanced removal performance. The prepared CMC/AMPS hydrogels were stable and could be used for multiple cycles with no reduction in their initial performance. The adsorption process is more active and favorable if the interaction of metal ions with the adsorbent is strong. The effect of contact time on the adsorption of the metal ions Co, Mn, Cu, and Fe by the CMC/AMPS copolymer hydrogel was therefore studied: adsorption was initially rapid and then slowed.
Some researchers have found that adsorption onto the hydrogel network leads to polymer chain shrinkage and a consequent decrease in chelating ability, which makes the diffusion of cations into the bulk of the hydrogel difficult. The fast adsorption in the initial stage occurs at the surface; thereafter, adsorption takes place inside the hydrogel network and slows down because the metal ions must penetrate through the pores of the hydrogel matrix. The sorption process depends on intraparticle diffusion, chelation, and ionic interactions. Table 2 shows the adsorption rate constants obtained for the removal of different metal ions from wastewater. In other studies, environmentally friendly carboxymethyl cellulose (CMC) hydrogel beads were successfully prepared using epichlorohydrin (ECH) as a cross-linking agent, with ether linkages formed between ECH and CMC in a fluid wax suspension. The characteristic bands in the FTIR spectra confirmed the ether linkage. The prepared hydrogel beads were 4 mm in diameter, fully transparent, and approximately spherical. It was further confirmed by X-ray diffraction (XRD) patterns that the adsorption of metal ions onto the oxygen atoms of the carboxyl groups changed the crystallinity of the hydrogels. The adsorption capacity depends on the initial concentration of metal ions and the pH of the metal ion solution, and was found to increase with increases in both. After applying the Freundlich and Langmuir isotherm models to the data obtained from batch adsorption experiments, it was found that the sorption of metal ions by the hydrogel beads follows the Langmuir model. The maximum adsorption values of the hydrogel beads are 6.49, 4.06, and 5.15 mmol/g for Cu2+, Ni2+, and Pb2+, respectively. Similarly, superabsorbent hydrogel beads based on CMC were prepared by a suspension cross-linking method and characterized by FTIR, XRD, and SEM. The crystallinity of these hydrogels was lower than that of pure CMC, as confirmed by XRD analysis. It was assumed that adsorbed metal ions form coordination bonds with the oxygen atoms of the carboxyl groups of the hydrogel beads, which showed good adsorption ability for heavy metal ions. The maximum amounts of adsorbed metal ions obtained from the Langmuir model are 6.49, 4.06, and 5.15 mmol/g for Cu2+, Ni2+, and Pb2+, respectively, at pH 7. These adsorption studies indicate that the hydrogel beads have potential for large-scale application in wastewater treatment [39].

Hydrogels based on natural polysaccharides for wastewater treatment

Hydrogels based on bio-derived polymers such as chitosan (CS), maltodextrin, and gum arabic, with and without magnetite nanoparticles, have been employed as adsorbents for entrapping heavy metal ions from aqueous solutions. The adsorption and removal of heavy metal ions by these hydrogels depend on the diffusion of water molecules inside the hydrogel network, as confirmed by adsorption kinetics using the Fickian equation. The conformation of the macromolecules can be affected by the presence of magnetite nanoparticles, an effect associated with the degree of reticulation within the hydrogel network. The CS-, M-malto-, and M-GA-based hydrogels were applied for the removal of Cd2+ ions from aqueous solutions under the controlled conditions of pH 4.5-5.5, an initial concentration of 20 mg L−1, and a dried hydrogel mass of 100 mg [40].
By applying three adsorption isotherms, i.e., Langmuir, Freundlich, and Redlich-Peterson, it was found that a change in the physicochemical phenomena related to Cd2+ adsorption occurred, and the data fitted the Langmuir and Redlich-Peterson models better than the Freundlich model. A key advantage of these materials is their recovery by a simple magnetic field, whereas other hydrogels mostly require ultracentrifugation or the use of solvents such as HCl, HNO3, etc. The removal of heavy metals from water and industrial effluents has been the goal of a large number of studies. Paulino et al. concluded from their results that hydrogels based on polysaccharides such as CS, M-malto, and M-GA are effective adsorbents for the treatment of wastewater and the removal of heavy metal ions from industrial effluents. It was also shown that the diffusion of Cd2+ through the polymer hydrogel network changed when magnetic nanoparticles were introduced into the network. Based on the Fickian parameters, the hydrogels have diffusion properties with a tendency toward macromolecular relaxation, which is very important for adsorption studies of both organic and inorganic pollutants. Different parameters such as contact time, pH, initial hydrogel dosage, and initial concentration of the Cd2+ solution were studied to examine the potential application of hydrogels with and without magnetic properties for the removal of Cd2+ from water and effluents. The results confirmed that hydrogels without magnetic nanoparticles, based only on CS, M-malto, and M-GA, can be applied more efficiently in wastewater treatment for the removal of Cd2+ than hydrogels with magnetic particles. However, the regeneration of magnetic hydrogels can be achieved more easily by applying a magnetic field, which is an environmentally friendly and green approach [40].

Treatment of polluted water resources using reactive polymeric hydrogel

Experimental work was performed to study the overall performance of a prepared polyvinyl pyrrolidone/acrylic acid (PVP/AAc) copolymer hydrogel in chelating heavy metals from the bulk solution [41]. The results clearly indicate that the PVP/AAc copolymer hydrogel has high binding capacities and favorable adsorption kinetics for the metal ions. The sorption of these metal ions follows the Langmuir adsorption isotherm. The feasibility of using the PVP/AAc hydrogel for the treatment of polluted samples, collected from different water sources in the Helwan area (Egypt), was investigated. The results showed that by using these hydrogels, pure usable water can be obtained from wastewater. In recent years, there has been considerable interest in the chelation of metal ions by insoluble cross-linked polymeric substrates. Such substrates have the advantage over soluble materials of easy separation from the reaction medium, offering operational flexibility, facile regenerability, and higher stability. The removal performance of hydrogels can be disturbed by the presence of other metal salts, such as NaCl, MgCl2, and CaCl2, in polluted water, which can affect the chelating ability of the reactive species. This effect was studied by Shawky et al. for the adsorption of Fe ions by the PVP/AAc hydrogel in the presence of different metal salts, and the results are shown in Figure 2. No effect on the adsorption of Fe ions was observed for NaCl, even at high NaCl concentrations.
Furthermore, Fe adsorption is not suffering by low concentrations of CaCl 2 and MgCl 2 ; however, at higher concentrations, the adsorption decreases, and this could be attributed to the affinity of the reactive polymers toward alkaline earth metals as compared to transition metal ions. The results obtained clearly demonstrate and confirmed the applicability of PVP/AAc hydrogel for wastewater treatment. Water resources in the Helwan area showed that trace metal contents are very high when the analysis of nine water samples was carried out. The hydrogel treatment resulted in a satisfactory removal of polluted heavy metals especially iron, manganese, and aluminum [41]. Conclusion The smart polymer hydrogels prepared from both synthetic and natural polymers can be used successfully with full confidence for wastewater treatment. However, the properties of these hydrogels will be kept according to the required environmental conditions by changing the composition of polymer networks. The performance of few polymer hydrogels was explained and executed in this chapter, which clearly indicates that due to smart behavior, easy synthesis, recycling, low cost, environment friendly, biocompatibility, etc. make these hydrogels as efficient candidate compared to other materials for wastewater treatment. By reading this chapter, the researchers could find new approaches which help them in designing new hydrogel systems for different applications.
3,959.2
2019-08-27T00:00:00.000
[ "Environmental Science", "Materials Science", "Chemistry" ]
Computational evidence of a new allosteric communication pathway between active sites and putative regulatory sites in the alanine racemase of Mycobacterium tuberculosis Alanine racemase, a popular drug target from Mycobacterium tuberculosis, catalyzes the biosynthesis of D-alanine, an essential component in bacterial cell walls. With the help of elastic network models of alanine racemase from Mycobacterium tuberculosis, we show that the mycobacterial enzyme fluctuates between two undiscovered states—a closed and an open state. A previous experimental screen identified several drug-like lead compounds against the mycobacterial alanine racemase, whose inhibitory mechanisms are not known. Docking simulations of the inhibitor leads onto the mycobacterial enzyme conformations obtained from the dynamics of the enzyme provide first clues to a putative regulatory role for two new pockets targeted by the leads. Further, our results implicate the movements of a short helix, behind the communication between the new pockets and the active site, indicating allosteric mechanisms for the inhibition. Based on our findings, we theorize that catalysis is feasible only in the open state. The putative regulatory pockets and the enzyme fluctuations are conserved across several alanine racemase homologs from diverse bacterial species, mostly pathogenic, pointing to a common regulatory mechanism important in drug discovery. Author summary In spite of the discovery of many inhibitors against the TB-causing pathogen Mycobacterium tuberculosis, only a very few have reached the market as effective TB drugs. Most of the marketed TB drugs induce toxic side effects in patients, as they non-specifically target human cells in addition to pathogens. One such TB drug, D-cycloserine, targets pyridoxal phosphate moiety non-specifically regardless of whether it is present in the pathogen or the human host enzymes. D-cycloserine was developed to inactivate alanine racemase in TB causing pathogen. Alanine racemase is a bacterial enzyme essential in cell wall synthesis. Serious side effects caused by TB drugs like D-cycloserine, lead to patients’ non-compliance with treatment regimen, often causing fatal outcomes. Current drug discovery efforts focus on finding specific, non-toxic TB drugs. Through computational studies, we have identified new pockets on the mycobacterial alanine racemase and show that they can bind drug-like compounds. The location of these pockets away from the pyridoxal phosphate-containing active site, make them attractive target sites for novel, specific TB drugs. We demonstrate the presence of these pockets in alanine racemases from several pathogens and expect our findings to accelerate the discovery of non-toxic drugs against TB and other bacterial infections. Author summary 38 In spite of the discovery of many inhibitors against the TB-causing pathogen 39 Mycobacterium tuberculosis, only a very few have reached the market as effective TB drugs. 40 Most of the marketed TB drugs induce toxic side effects in patients, as they non-specifically 41 target human cells in addition to pathogens. One such TB drug, D-cycloserine, targets 42 pyridoxal phosphate moiety non-specifically regardless of whether it is present in the 43 pathogen or the human host enzymes. D-cycloserine was developed to inactivate alanine 44 racemase in TB causing pathogen. 
Alanine racemase is a bacterial enzyme essential in cell 45 55 Tuberculosis is one of the top 10 causes of mortality globally and according 56 to latest available estimates, 10.4 million people developed this disease in 2016, of which 4.9 57 million people were infected with multidrug-resistant TB strains (MDR-TB) [1]. The 58 prevalence of multidrug-resistant TB (MDR-TB) and extensively drug-resistant tuberculosis 59 (XDR-TB) necessitates the inclusion of novel anti-tubercular therapies and strategies in the 60 treatment of TB. Treatment regimen comprising simultaneous use of multiple drugs is the 61 current strategy in practice [2]. Despite the implementation of this strategy, TB mortality 62 rates have not abated. Therefore, efforts to eradicate the TB pandemic have been stepped up 63 globally through research oriented towards finding new drugs against the tubercle bacilli [3]. 64 Alanine racemase (EC 5.1.1.1; Alr), an essential bacterial enzyme [4] is a 65 popular drug target due to the absence of human homologs. The enzyme catalyzes the inter-66 conversion of L-and D-alanine and requires pyridoxal 5'-phosphate (PLP) as a cofactor. PLP 67 is covalently attached to the enzyme through an internal Schiff's base linkage [5]. In the L to 68 D direction, the enzyme catalyzes the formation of D-alanine, an essential component of D-69 alanyl-D-alanine found in the peptidoglycan layer in bacterial cell walls [5]. In some bacteria 70 including Escherichia coli [6], Salmonella typhimurium [7] and Pseudomonas aeruginosa 71 [8], there are two Alr isozymes (Alr1 and Alr2 (aka DadX)), responsible for the anabolic and 72 catabolic functions respectively. 73 The catalytically active form of Alr is a dimer [9], due to the participation of 74 residues from both the monomers towards the formation of a functional active site. A narrow 75 passage from the exterior forms an entryway to the substrate binding cavity in the active site 76 and is lined by conserved residues, some of which have been demonstrated to orient the 77 substrate molecules during their entry into the active site [10,11]. In Alr Mtb , the substrate 78 binding cavity is a small, conical space gated by two tyrosine residues (inner gates), which 79 restrict the entry of substances into the active site [12]. Carboxylates such as acetate, 80 propionate and substrate analogs such as alanine phosphonate co-crystallize in the substrate 81 binding cavities of alanine racemases [13][14][15] and are suggested to regulate catalysis by 82 competitive inhibition, though the exact control mechanisms are not known [16]. 83 Including the structure of Alr Mtb [12], there are around a dozen and a half 84 unique alanine racemase structures in protein databases [13,[17][18][19][20][21][22][23]. Though there has been 85 considerable interest in elucidating the detailed catalytic mechanism of D-to L-alanine 86 racemization in several organisms [5,10,24,25], the regulatory aspects of catalysis suffer 87 from lack of research. In spite of the discovery of a plethora of inhibitors against pathogenic 88 Alr [26-28], only one of them has reached the market as a TB drug. This drug (D-89 cycloserine) is a structural analog of D-alanine and binds to all PLP-containing enzymes non-90 specifically, including those in the host, inducing toxic side-effects [29]. Current drug 91 discovery efforts focus on finding safer, selective, non-substrate inhibitors. Several inhibitors 92 of Alr are non-substrate leads, whose target sites on the enzyme are not known. 
Of these, five 93 were shown to be non-toxic to mammalian cells in a high-throughput screen for anti-94 tubercular small molecule inhibitors [28]. Until now, there have been no studies concerning 95 the binding sites of these five drug-like leads (Fig 1) on the enzyme. Considering the 96 numerous hurdles in culturing M. tb and the urgency in developing novel drugs to contain the 97 superbug strains, we sought to determine the target sites of these leads through computational 98 studies. 99 In recent years, normal mode analysis (NMA) has been widely used in 100 probing large-scale, collective motions of proteins and has been increasingly utilized to 101 characterize the dynamic aspects of enzymes [30][31][32]. Particularly, elastic network model 102 (ENM) based NMA has been useful in studying intrinsic dynamics of slow protein motions 103 over longer timescales [33,34]. Computationally, the generation of elastic network models of 104 diverse protein conformations is less expensive compared to molecular dynamics (MD) 105 simulations [35]. In enzymes, ENM-NMA-predicted global motions represent biologically 106 relevant functional motions and have been shown to include local fluctuations such as loop 107 movements essential in catalysis [36]. We searched ENM-based Alr Mtb conformations for 108 target sites of lead inhibitors through multiple, robust search algorithms by a blind docking 109 strategy (BD). BD remains a common choice in the discovery of novel, allosteric binding 110 sites [37,38]. In conjunction with pocket search tools, BD is capable of identifying new 111 functional pockets on the target protein [39]. This strategy helped us in the successful 112 identification and validation of new pockets in Alr Mtb . Further to the above investigations, a 113 comparative study of the intrinsic dynamics of Alr homologs with the help of a range of 114 computational tools helped us gain new insights into the regulatory aspect of D-alanine 115 synthesis. 116 All-atom normal mode analysis 118 The putative regulatory pockets are conserved across homologs The crystal structure of 119 Alr Mtb is a kidney-shaped dimer, with two active site cavities opening on the convex side 120 (Figs 2A, 2C and 2E) and two pockets located on the concave side (Figs 2A and 2D). 121 Residues found to be missing (Fig 2B) in the crystal structure were from both internal and 122 terminal regions. The internal stretches of missing residues (176-180 of subunit A and 266-123 280 of subunit B), pertained to the same region, i.e., the mouth region of the first active site 124 cavity ( Fig 2E). 125 Fig 2 Structure of alanine racemase from Mycobacterium tuberculosis. A. Molecular surface representation of the structure of alanine racemase (monomers A and B shown in green and cyan colours respectively). Magnified region shows the putative dimer interface groove (DIG) pocket on the dimer interface. B. Unresolved regions in the crystal structure of Alr indicated by different colours in the cartoon representation of the enzyme (missing Nterminus-yellow; missing C-terminus-blue; missing internal stretches-red). C. TIMbarrel of active site 2 showing the cofactor PLP (red sticks) covalently attached to the catalytic residue Lys44 (green sticks). Note that the active site is composed of residues from both monomers (B monomer shown in cyan colour and residues from A monomer are coloured green) D. Surface representation of the enzyme showing the putative regulatory sites (yellow) E. 
Surface representation of the enzyme showing tiny pockets (pink) flanked on either side by the active site cavities (red). (Due to the revision in UniProt sequence information, the residue numbers given in this work should be decremented by 2 in order to compare with the numbering provided in LeMagueres et al., 2005 [12]. For example, the residues, 176-180 in our work refer to residues, 174-178 in LeMagueres et al., 2005 [12]). Alignment of the protein sequences of Alr homologs (Figs 3, S1 and S2) 126 revealed highly similar residues in the newly identified regions (described later): dimer 127 interface groove region (Fig 3B), putative regulatory sites ( Fig 3C) and a short helix (Fig 128 3D). On the other hand, the N-termini of the homologous Alr were of different lengths and 129 were dissimilar in sequence composition (Fig 3A). Despite the presence of terminus in their 130 sequences, 8 of the crystal structures of the homologs were devoid of either the N-terminus 131 (varied between 3-15 residues) or the C-terminus (varied between 1-6 residues) or both. Of 132 the remaining structures, eight were complete and showed disordered coils in their termini. 133 Both PSI-PRED (secondary structure predictor based on position-specific-scoring-matrices of 134 unique fold libraries) and Phyre2 (protein structure modeller based on a combination of ab 135 initio and template-based strategies) generated highly disordered coils in the terminal regions 136 homologs. In some of the homologs, the putative lid regions were shorter than those in M. tb. 177 In the closed conformations of such homologs, the pockets were not completely covered. 178 Other residues of the two pockets exhibit a unique arrangement in the sequence and 212 are placed side by side in an alternating fashion ( Fig 3C). Consequently, the adjacent 213 residues in the structure belong to one of the two pockets and assume opposing states 214 at any given instant during the dynamics. Therefore, the tiny and the RS pockets may 215 be fulfilling opposing roles in regulation. 216 Concordantly, higher deformation energies were seen in the pivot residues (96, 141,146, 261 236 and 263) of the dimer interface pocket region (DIG pocket region), part of which is the far C-237 terminal region (Fig 7). Apart from this hinge-like region, the putative terminal lid regions, 238 the catalytic tyrosine and the short helix region showed higher deformation energy peaks 239 signifying greater local flexibility (Fig 7). 240 homologs, refer to Text S1. Across the homologs, the conserved residues constituting the 274 invariant core of the enzyme were found clustered around the same amplitude (Fig S4-A). 275 Moreover, the alignment positions displaying partial conservation of residues also fluctuated 276 more or less to the same extent (Fig S4-B). subsp. tengcongensis, which is a remote homolog of Alr Mtb (sequence identity=28.6%) shows 324 99.84% similarity in dynamics, as measured by the Bhattacharyya coefficient. Though the 325 differences between the RMSIP scores were more pronounced than those of BC (Table 3), 326 the latter is generally considered to be a better index for assessing the similarity of dynamics, 327 as it incorporates eigenvalues. It is to be noted that RMSIP does not represent the energetic 328 separation between the modes in the sets [43]. 
Sequence and structural similarity measures 329 such as RMSD values scored lesser than dynamics similarity measures such as RMSIP and 330 BC values (Table 3), proving that the conservation of dynamics far exceeds the sequence and 331 structural conservation in alanine racemases. 332 (Table S1)). The aromatic ring side of L2 -04 was 339 often found in pi-stacking interactions between the inner gate residues, Tyr366 and Tyr273' 340 (residue labeled with a prime to indicate that it belongs to the opposite monomer) while its 341 tail formed hydrogen bonds with the cofactor in the substrate binding cavity of Alr Mtb . 342 Substrate binding cavity measures 5.5 X 5.0 X 2.5 Å 3 and accommodates the substrate, L-343 alanine. Many guest substrates, substrate analogues and inhibitors such as acetate, 344 Exploring inhibitor binding sites on Alr propanoate, L-alanine phosphonate, lysine and D-cycloserine have been reported to occupy 345 this cavity in homologs [13-15, 17, 44]. In the crystal structure of a thermo-stable Alr of a novel thermophile, Caldanaerobacter subterraneus subsp. tengcongensis [21], the substrate 347 is found between the catalytic residues, Lys40 and Tyr268 (equivalent to Lys44 and Tyr273 348 in Alr Mtb ) in the substrate binding cavity and forms hydrogen bonds (2.7 Å) with the catalytic 349 tyrosine. We found that the substrate, alanine (Fig S12) Superposition of the open and closed states, both of whose RS pockets were 389 bound with high affinity inhibitor poses (Fig 10A), clearly demonstrated the twisted active 390 site cavity in case of the closed state. Docked poses of the bound inhibitors were observed to 391 interact with charged RS pocket residues, Arg378 or Asp46 or both and such interactions 392 appears to be driving the pull experienced by the short helix, H2 (Y48-G47-D46-A45-K44), 393 seen in the normal mode motions. Such a movement of the short helix (Fig 10B) between the 394 active site cavity and the RS pocket leads to the expansion and contraction of the active site 395 cavities, as observed in the conformations of LF 8 . The catalytic residue, Lys44, linked by a 396 covalent bond with the cofactor PLP on the inside of the TIM-barrels of the active site cavity 397 (Fig 2C), would be dragged along with Lys44 towards the periphery of the active site cavity. 398 As a result, the orientation of the catalytic residues would be lost. Tyr48, which walls the 399 substrate binding cavity on one side through its side chain, forms the other end of the short 400 helix and therefore would also be displaced, leading to the rearrangement of the substrate 401 binding cavity (Fig 10B). Thus, the dynamic interactions between the inhibitor and the 402 enzyme residues, viz., Arg378---Asp46 ( ranks all the three middle residues of the short helix as highly flexible residues in the order, 414 Glycine > Serine > Alanine, Aspartic acid and Asparagine. This result is in agreement with 415 the need for higher conformational flexibility in the short helix residues in order to move 416 between the RS pocket and the active site upon inhibitor binding. Supporting the above 417 results, NMA studies show that deformation energies of the short helix residues are higher 418 than the surrounding structure, indicating higher local flexibility (Fig 7). Generally, in TIM-419 barrel structures, there is a repetition of 8 alternating α helices and β strands. 
But, in case of 420 alanine racemases, the arrangement of the active site TIM-barrel is as follows: α1-β1-α2-α3-421 β2-α4-β3-α5-β4-α6-β5-α7-β6-α8-β7-α9-α10-β8. It appears that the short helix H2 (α2), is an 422 additional insertion (most likely by the splitting of the original second helix into α2 and α3) 423 into the conventional TIM-barrel arrangement, the insertion event evolving probably later, in 424 order to carry out allosteric regulation. 425 (Table 4). In 441 contrast, the active site entrance was twisted and closed in the closed state, rendering the 442 entryway (active site entrance) inaccessible. In such a shut active site cavity, catalysis is not 443 feasible. Therefore, we reason that the open state is catalytically active. 444 Table 4 Hydrogen bonds between L-alanine and alanine racemase residues (1) 550 where K AA represents the sub-matrix of K corresponding to the aligned C-α atoms, K QQ for 551 the gapped regions, and K AQ and K QA are the sub-matrices relating the aligned and gapped 552 sites [65]. The normal modes of the individual structure in the ensemble can then be obtained 553 by solving the eigenvalue problem, 554 where V is the matrix of eigenvectors and λ, the associated eigenvalues. 556 In order to analyze the flexibility profile of the mycobacterial racemase, 557 cross-correlations of residual fluctuations and deformation energy profiles were generated on 558 the filtered normal mode data. Across homologs, the alanine racemase motions along the 559 selected normal modes were compared with the help of similarity measures, viz., RMSIP and 560 Bhattacharyya coefficient. 561 where represents the i th eigenvector, the corresponding eigenvalue, and N, the number 591 of C-α atoms in the protein structure (3N−6 non-trivial modes). As formulated by Fuglebakk 592 et al. [69], the Bhattacharyya coefficient can then be written as, 593 Table S1. Results of docking simulation runs of substrate and inhibitors on the ensemble conformations of alanine racemase from Mycobacterium tuberculosis. Table S2. Properties of pockets of NMA ensemble conformations as calculated on the CASTp server. Flexibility measures to assess Alr Text S1. Multiple sequence alignment of Alr homologs utilized in ensemble NMA (shown with alignment positions). Movie S1. Secondary structure representation of normal mode number 8 in Alr Mtb . Nterminal putative lid-like region is shown in red colour. Helices H3 (yellow) and H4 (violet) undergo displacements.
4,285.4
2018-06-13T00:00:00.000
[ "Medicine", "Chemistry" ]
Functional Calcium-sensing Receptors in Rat Fibroblasts Are Required for Activation of SRC Kinase and Mitogen-activated Protein Kinase in Response to Extracellular Calcium* Changes in the concentration of extracellular calcium can affect the balance between proliferation and differentiation in several cell types, including keratinocytes, breast epithelial cells, and fibroblasts. This report demonstrates that elevation of extracellular calcium stimulates proliferation-associated signaling pathways in rat fibroblasts and implicates calcium-sensing receptors (CaR) as mediators of this response. Rat-1 fibroblasts express CaR mRNA and protein and respond to known agonists of the CaR with increased IP3 production and release of intracellular calcium. Agonists of the CaR can stimulate increased c-SRC kinase activity and increased extracellular signal-regulated kinase 1/mitogen-activated protein kinase activity. Both of the increases in SRC activity and mitogen-activated protein kinase activation are blocked in the presence of a nonfunctional mutant of the CaR, R796W. Proliferation of wild-type Rat-1 cells is sensitive to changes in extracellular calcium, but expression of the nonfunctional CaR mutant or inhibition of the calcium-dependent increase in SRC kinase activity block the proliferative response to calcium. These results provide evidence of a novel signal transduction pathway modulating the response of fibroblasts to extracellular calcium and imply that calcium-sensing receptors may play a role in regulating cell growth in response to extracellular calcium, in addition to their well known function in systemic calcium homeostasis. calcium-sensing receptors (CaR) 1 in mediating this response in keratinocytes is supported by evidence of CaR mRNA expression in human keratinocytes (8) and by the observation that calcium ionophore cannot mimic the effects of elevated extracellular calcium on rasGAP-associated p62 (9). A seven-transmembrane receptor capable of sensing millimolar changes in extracellular calcium has recently been cloned from bovine parathyroid (10) and rat kidney and brain (11,12). Activation of this receptor with Ca 2ϩ , Mg 2ϩ , Ba 2ϩ , or Gd 3ϩ results in the generation of IP 3 and the release of intracellular calcium when assayed in parathyroid cells, Xenopus oocytes, or transfected Chinese hamster ovary cells (10,13,14). Hereditary disruptions of systemic calcium homeostasis have been mapped to mutations in the human CaR gene (15). In particular, a mutation of arginine 796 to tryptophan in the third intracellular loop of the CaR was found to cause neonatal severe hyperparathyroidism when homozygous and hypocalciuric hypercalcemia when heterozygous (15). This CaR-R796W mutant was nonfunctional when assayed for Gd 3ϩ -stimulated intracellular calcium release in Xenopus oocytes (15) and has been characterized as a "dominant negative" mutant when co-expressed with wild-type CaR in HEK293 cells or in human parathyroid cells (16). In this report we show that Rat-1 fibroblasts express endogenous CaR mRNA and protein and that these cells respond to the CaR agonist Gd 3ϩ with an increase in IP 3 production and intracellular calcium release. Proliferation of Rat-1 cells is sensitive to changes in extracellular calcium concentration as shown by a marked increase in thymidine incorporation at 2.0 mM as opposed to 0.3 mM extracellular calcium. Stimulation of Rat-1 cells with the CaR agonist Gd 3ϩ resulted in increased in c-SRC kinase activity and increased ERK1 kinase activity. 
Each of these responses was significantly inhibited in Rat-1 cells expressing the nonfunctional CaR-R796W mutant. Furthermore, inhibition of the calcium-mediated increases in SRC and ERK1 activity prevented the calcium-stimulated increase in proliferation. Cell Culture Rat-1 fibroblasts were grown at 37°C in 5% CO 2 , 95% air in DMEM (1.7 mM Ca 2ϩ , BioWhittaker) supplemented with 10% bovine calf serum (Hyclone) and gentamycin (10 mg/ml). New cultures were started from frozen stocks every 4 -6 weeks. For experiments where calcium concentration was specified, Hams F-12 medium (0.3 mM Ca 2ϩ , BioWhittaker) was used and the calcium concentration was adjusted with calcium * This work was supported by National Institutes of Health Grant CA-60738 (to K. D. R.). The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact. ‡ Present address: Dept. of Clinical Investigation, Triple Army Medical Center, Honolulu, HI 96859. RNA Isolation and Northern Blot Analysis Total RNA was purified as described previously (17), size fractionated by electrophoresis in 1.2% agarose-formaldehyde gels and transferred to nylon membranes (Nytran, Schleicher & Schuell). The hybridization probe was a 3.7-kb XbaI/BamHI fragment from the full-length rat striatal CaR cDNA clone, pCIS:CaR (kindly provided by S. Snyder) (12) labeled with 32 P by random primer extension. Hybridization was conducted at 42°C in a 50% formamide hybridization solution. The blot was then washed 2 times in 2 ϫ SSC at 42°C and 2 times in 2 ϫ SSC at 50°C. Bound radioactivity was detected by PhosphorImager analysis (Molecular Dynamics) following a 16 h exposure of the phosphorimage screen. Antibodies Monoclonal anti-GAP antibodies and protein A/G-agarose were obtained from Santa Cruz Biotechnology (Santa Cruz, CA). Monoclonal anti-SRC antibodies were obtained from Upstate Biotechnology, Inc. (Lake Placid, NY) and monoclonal anti-Fak antibodies were from Transduction Laboratories, Inc. (Lexington, KY). The affinity purified polyclonal anti-CaR antibody was produced in collaboration with Affinity BioReagents (Golden, CO). The monoclonal antiphosphotyrosine antibody (4G10) was a generous gift from Brian Druker (Oregon Health Sciences University). The monoclonal anti-p62 (2C4) antibody was a generous gift from Richard Roth (Stanford University). Purification of Plasma Membrane Proteins Crude plasma membranes were isolated from Rat-1 cells essentially as described by Bai et al. (16). Cells were scraped in 1 ml of homogenization buffer (50 mM Tris, pH 7.5, 250 mM sucrose, 1 mM EDTA, 1 mM EGTA, 10 g/ml aprotinin, 1 mM phenylmethylsulfonyl fluoride) on ice. The cells were homogenized with 15 strokes of a 1.5-ml Dounce homogenizer and the nuclei were removed by centrifugation at 800 ϫ g for 10 min. The supernatant was subjected to centrifugation at 43,000 ϫ g in a TLA 100.3 rotor for 1 h to pellet the plasma membrane fragments. The resulting pellet was resolubilized in homogenization buffer with 1% Triton X-100 added. Immunoblotting Proteins were size fractionated by SDS-polyacrylamide gel electrophoresis and transferred to polyvinylidene difluoride membrane (Immobilon P, Millipore) by electroblotting. 
Membranes were blocked in 3% bovine serum albumin, 0.05% NaN 3 for 1 h at room temperature followed by overnight incubation with primary antibody at 4°C in Tween ϩ Tris-buffered saline (0.05% Tween 20, 20 mM Tris, pH 7.5, 150 mM NaCl). Membranes were washed 3 times in TTBS, incubated with appropriate secondary antibody conjugated to horseradish peroxidase (Santa Cruz Biotechnology) for at least 2 h, and washed extensively in TTBS. Bands were visualized by chemiluminescence (Renaissance, NEN Life Science Products). Films from at least three independent experiments were scanned where indicated, and densitometry analysis was performed using NIH Image. Immunoprecipitations Rat-1 cells were harvested 10 min after agonist addition by lysis in M-TG buffer (1% Triton X-100, 10% glycerol, 20 mM HEPES, pH 8.0, 2 mM Na 3 VO 4 , 150 mM NaCl, 1 mM NaF, 1 mM phenylmethylsulfonyl fluoride, and 1% aprotinin), and lysates were cleared by centrifugation at 6000 ϫ g. Protein concentration was determined using the Bio-Rad protein assay (Bio-Rad) and 500 g of protein incubated with primary antibody for at least 4 h and followed by incubation with protein A/G-agarose (Santa Cruz Biotechnology) for at least 1 h. The immune complex was collected by centrifugation at 10,000 ϫ g for 5 min; the pellet was washed extensively with M-TG buffer and boiled for 3 min in 1 ϫ Laemmli buffer. Measurement of Intracellular Calcium Cells were plated on 18-mm dishes containing a central section of optical glass, and grown overnight in the appropriate culture medium containing serum. Two hours prior to Fura-2 measurement, cells were transferred into Hank's buffered salt solution containing 0.5 mM Mg 2ϩ (Mg-HBSS). Cells were exposed to 1 M Fura-2/AM (Molecular Probes, Eugene, OR) for 30 min at 37°C, then the Fura-2-containing medium was removed and replaced with Mg-HBSS. Cells were incubated at room temperature for 20 -40 min to allow de-esterification of the Fura-2/AM and then subjected to calcium imaging. Intracellular calcium concentrations were determined from the ratio of emissions measured at 510 nm following excitation at 340 and 380 nm. Images were collected in an integrating CCD camera and analyzed with the Double-Wavelength InCa program (software and hardware provided as an integrated system by Intracellular Imaging, Inc., Cincinnati OH). Measurements were collected from 20 -30 individual cells per field. Measurement of Inositol Trisphosphate Production Rat-1 cells grown to confluence in 10-cm dishes were transferred into serum-free DMEM containing 3 Ci/ml myo-[ 3 H]inositol and loaded for 48 h. 16 h prior to stimulation and harvest, the cells were transferred into serum-free myo-[ 3 H]inositol-free Ham's F-12 (0.3 mM Ca 2ϩ ). Cells were stimulated by the addition of 250 M Gd 3ϩ in the presence of 100 mM LiCl to inhibit IP 3 turnover as described previously (18). 20 min after the addition of LiCl with or without Gd 3ϩ , cells were lysed, and the inositol phosphates were extracted and fractionated on Dowex-formate columns as described previously (18). Construction of Wild-type and Mutant CaR Expression Vectors pcDNA3-CaR (CaR)-The full-length 3.7-kb rat kidney CaR clone was excised from pCIS:CaR using XbaI-BamHI (12). This was subcloned into pBluescript II for ease of manipulations. The full-length 3.7-kb CaR from the pBS:CaR construct was excised with XbaI-XhoI and subcloned into the pcDNA3 mammalian expression vector. 
pcDNA3-CaR:R796W (R796W)-The R796W mutation was introduced into a 1.2-kb SphI fragment of the CaR by inverse polymerase chain reaction using the oligomer pair: J5, TTG AAG GCA AAG AAG AAG CAG ATG G; and J3, GTC CTG GAA GTT ACC CGA GAA CTT C. Primer J5 is identical to the published rat CaR sequence. Primer J3 introduces the arginine to tryptophan mutation at amino acid 796 and adds a diagnostic AvaI site. Positive colonies identified by AvaI digests were sequenced and subcloned into the pBS:CaR contruct. Correct orientation was verified by AvaI digests. The full-length mutant CaR-R796W was subcloned into pcDNA3. Transfections Rat-1 cells were transfected with either pcDNA3 alone, pcDNA3-CaR, or pcDNA3-CaR-R796W using Lipofectin as described previously (19). Cells were selected in medium containing 700 g/ml G418 (BioWhittaker), and stable clones were cultured in 300 g/ml G418 to maintain selection. New cultures were started from frozen stocks every 6 -8 weeks. Incorporation of 3 H-thymidine Rat-1 fibroblasts in 12-well plates were grown to 70 -80% confluence in DMEM ϩ 10% calf serum. Cells were then changed into serum-free Ham's F-12 medium (0.3 mM Ca 2ϩ ) for 24 h prior to the addition of either CaCl 2 to 2.0 mM or EGF to 1 ng/ml. [ 3 H]Thymidine (0.1 Ci/ml) was added 18 h later, and cells were harvested after a 4-h thymidine incorporation. Thymidine incorporation was determined by precipitation in 10% trichloroacetic acid, solubilization in 0.2 M NaOH, and liquid scintillation counting. Effect of Extracellular Calcium on Proliferation of Rat-1 Fibroblasts Although the role of extracellular calcium in modulating the proliferation of keratinocytes is well established, less is known about the responsiveness of mesenchymal cell types to extracellular calcium. To determine whether Rat-1 cells resembled human diploid fibroblasts in displaying a positive mitogenic response to increasing extracellular calcium (5), we measured [ 3 H]thymidine incorporation in Rat-1 fibroblasts as a function of extracellular calcium concentration. As shown in Fig. 1, changing the extracellular calcium concentration from 0.3 to 2.0 mM produced an 8.4-fold increase in [ 3 H]thymidine incorporation in Rat-1 cells; the extent of the increase was equivalent to that observed when EGF at 0.5 ng/ml was added in the presence of 0.3 mM extracellular calcium (Fig. 1, white bar). Expression of Functional CaR in Rat-1 Fibroblasts Activation of calcium-sensing receptors similar or identical to those expressed in parathyroid cells, neurons, and intestinal epithelial would provide a potential mechanism for inducing proliferative signals in response to changes in extracellular calcium. To detect the presence of CaR protein, we generated a polyclonal antibody against a synthetic peptide representing amino acids 11 through 27 in the extracellular domain of the rat CaR. Antibodies generated against the same region of the rat CaR were effective in both immunohistochemical and immunoblot studies of CaR expression in the brain (12). The anti-CaR antibody was affinity purified against the synthetic peptide and used in immunoblot experiments with partially purified plasma membrane preparations from rat kidney and Rat-1 fibroblasts (Fig. 2A). The affinity purified antibodies detected an identical pattern of strongly hybridizing bands at 120 -140 kDa in both kidney and fibroblasts. Rat kidney was used as a positive control as both Riccardi et al. (11) and Ruat et al. (12) have shown high levels of CaR expression in rat kidney. 
Preincubation with the synthetic peptide prevented detection of the putative CaR protein as shown in Fig. 2A. The anti-CaR antibody was also effective in immunohistochemical detection of CaR protein in fixed Rat-1 cells (Fig. 2B), and specificity was demonstrated by a loss of staining when excess synthetic CaR peptide was present during the antibody incubation (Fig. 2B). The identity of the CaR immunoreactive protein was confirmed by Northern hybridization analysis and reverse transcription-polymerase chain reaction. Two bands of approximately 5.3 and 3.8 kb were observed following hybridization with a full-length CaR probe; CaR mRNA levels were not influenced by extracellular calcium concentration (data not shown). Sequence analysis of an 800-base pair reverse transcription-polymerase chain reaction product obtained from Rat-1 fibroblasts using primers flanking the conserved transmembrane domain indicated that the Rat-1 CaR was 100% identical to the rat kidney CaR between amino acids 627 and 724 (Ref 11; data not shown). Release of Intracellular Ca 2ϩ in Response to Agonists of the CaR-To determine whether the presence of CaR protein in Rat-1 cells represented the presence of functional receptors, we tested the ability of Rat-1 fibroblasts to release intracellular Ca 2ϩ in response to extracellular Gd 3ϩ as used by Nemeth, Brown, and others (10,11,15,20) to detect functional CaR. When Rat-1 cells were exposed to 1.0 mM Gd 3ϩ in Mg-HBSS, many of the cells within a given field responded with an increase in intracellular calcium of at least 3-fold over basal intracellular Ca 2ϩ levels from a resting level of 100 nM to a peak of 250 -300 nM (Fig. 3A). The magnitude of this response is similar to that observed following treatment with thapsigargin and is sufficient to promote calcium-dependent gene expression (17). In contrast, treatment with 1 M ionomycin produced intracellular calcium peaks of 900 -1000 nM (data not shown). In six independent experiments on Rat-1 fibroblasts, 25-33% of the cells in a given field responded to extracellular Gd 3ϩ with a release of intracellular Ca 2ϩ . Similar results were obtained in response to 2 mM Ca 2ϩ (data not shown). In each of these experiments, no cells responded to addition of Mg-HBSS alone, indicating that the response was specific to the presence of Gd 3ϩ or Ca 2ϩ . Inositol Trisphosphate Production in Response to Activation of the CaR-Activation of the CaR has been demonstrated to result in increased production of IP 3 (10,21). To determine whether the CaR agonist Gd 3ϩ could induce IP 3 production in rat fibroblasts, we measured IP 3 production directly in response to 250 M Gd 3ϩ as described previously (18). Fig. 3B shows that stimulation with Gd 3ϩ produced a significant increase (p Ͻ 0.05, n ϭ 3) in IP 3 production in the Rat-1 cells, documenting a second aspect of established CaR function in these cells. Effect of CaR Activity on c-SRC Kinase Activity Elevation of extracellular calcium levels has been associated with increased c-SRC kinase activity in keratinocytes (6). If the CaR is responsible for modulating c-SRC activity in response to changes in extracellular calcium, then an increase in SRC kinase activity should be observed in the presence of Gd 3ϩ , which binds and activates the CaR without passing through calcium channels. If activation of the CaR is required for Ca 2ϩand Gd 3ϩ -induced activation of SRC, expression of an interfering CaR mutant (such as the R796W mutant associated with severe neonatal hyperparathyroidism; see Refs. 
16 should inhibit the activation of c-SRC. We tested this possibility by measuring SRC kinase activity in the Rat-1 cells stably transfected with either pcDNA3 alone, pcDNA3-CaR, or pcDNA3-CaR-R796W. A 2-to 3-fold increase in immunoreactive CaR protein was detected in the pcDNA3-CaR and pcDNA3-CaR-R796W clonal cell lines chosen for use in these experiments (data not shown). The Rat-1 cells that were stably transfected with pcDNA3-CaR-R796W failed to increase IP 3 production in response to Gd 3ϩ (data not shown). The three stable cell lines were grown in low calcium medium (50 M Ca 2ϩ ) for 4 h, and half the plates were then exposed to 250 M Gd 3ϩ for 5 min prior to harvest. Cell lysates were immunoprecipitated with anti-SRC antibody (Upstate Biotechnology, Inc.) and kinase activity was assessed in vitro. As shown in Fig. 4, Gd 3ϩ treatment produced a 2.5-fold increase in c-SRC autophosphorylation in both vector and CaRtransfected cells, indicating that endogenous levels of CaR are sufficient for activation of c-SRC in response to the dose of Gd 3ϩ tested. The increase in SRC kinase activity was significantly inhibited (p Ͻ 0.05) in cells transfected with pcDNA3-CaR-R796W, indicating that the activation of SRC by Gd 3ϩ could be abrogated by an interfering mutant of the CaR. SRC kinase activity was normalized to c-SRC protein levels as determined by immunoblotting of the immunoprecipitates. Similar results were obtained in three replicate experiments with three different independently derived clones of pcDNA3-, pcDNA3-CaR-, and pcDNA3-R796W-transfected Rat-1 cells (data not shown). The results of these experiments indicate that the endogenous CaR in Rat-1 cells has the potential to activate c-SRC as a consequence of Gd 3ϩ binding and that this ability is disrupted in the presence of the R796W mutant. Effect of CaR Activity on Tyrosine Phosphorylation in Rat-1 Cells Increased c-SRC kinase activity in response to elevated extracellular Ca 2ϩ should be accompanied by an increase in the tyrosine phosphorylation of at least some c-SRC substrates. If the calcium-sensitive increase in c-SRC activity is mediated by the CaR then Gd 3ϩ should be able to mimic the effects of Ca 2ϩ , and expression of the nonfunctional CaR mutant R796W should inhibit calcium-sensitive phosphorylations. To test these hypotheses, we analyzed changes in tyrosine phosphorylation in whole cell lysates obtained from Rat-1 cells transfected with either vector, wild-type CaR, or the R796W CaR mutant. As shown in Fig. 5A, increasing extracellular Ca 2ϩ from 0.3 to 1.8 mM was associated with an increase in the tyrosine phosphorylation of proteins with apparent molecular masses of approximately 125-135, 62-65, and 41 kDa in control and CaR-transfected cells. Stimulation with Gd 3ϩ , rather than Ca 2ϩ , resulted in increased phosphorylation of the same or similar proteins (data not shown). Tyrosine phosphorylation of the 125-135-and 62-65-kDa bands was significantly inhibited in cells expressing the R796W mutant CaR (Fig. 5A). This FIG. 2. CaR protein expression. Panel A, immunoblot detection of CaR protein in rat kidney and fibroblasts. Fresh rat kidney and Rat-1 fibroblasts were processed for plasma membrane proteins as described previously (28). Equal amounts of protein (30 g/lane) were subjected to SDS-polyacrylamide gel electrophoresis, blotted, and incubated with anti-CaR antibody as described under "Experimental Procedures." 
In the two lanes on the right, incubation with anti-CaR antibody occurred in the presence of 50 g/ml blocking peptide. The arrow indicates the position of a strongly immunoreactive band at approximately 120 kDa. Panel B, immunohistochemical detection of CaR protein in Rat-1 fibroblasts. Rat-1 cells fixed in 4% paraformaldehyde were incubated overnight at 4°C with a 1:1000 dilution of anti-CaSR (left panel) or anti-CaSR plus blocking peptide at 50 g/ml (right panel). The second antibody was goat anti-rabbit conjugated to horseradish peroxidase, and 3,3Ј-diaminobenzidine was used as the chromophore. No staining was seen in the presence of secondary antibody alone (data not shown). To identify the proteins showing increased tyrosine phosphorylation, we used a combination of immunoprecipitation and immunoblotting with antibodies specific for candidate proteins. When antiphosphotyrosine immunoprecipitates obtained from Rat-1 cells under low and high calcium conditions were immunoblotted with a monoclonal anti-FAK antibody (F15020, Transduction Laboratories, Inc.), increased FAK immunoreactivity was observed in immunoprecipitates from cells treated with 2 mM Ca 2ϩ , compared with 0.3 mM Ca 2ϩ (Fig. 5B). These data suggest that p125 FAK is a potential substrate of CaRstimulated SRC activation. The protein band of approximately 63-65 kDa could represent any of three proteins that are known to be tyrosine phosphorylated in response to various stimuli. These three proteins are p68Sam, a known SRC substrate (23), p62 dok , a supposed SRC substrate that associates with p120 rasGAP (24), and a protein of approximately 65 kDa that shows increased tyrosine phosphorylation in response to extracellular calcium (25). Immunoprecipitation with antibodies specific for p68Sam and p62 dok , respectively, followed by immunoblotting with antiphosphotyrosine antibodies indicated that neither of these proteins showed an increase in tyrosine phosphorylation in response to extracellular calcium (data not shown). As no antibodies are currently available for study of the calcium-associated p65 protein, we were unable to test the phosphorylation status of this protein directly. Since the calcium-responsive p65 protein was originally identified in Rat-1 cells (25), it is possible that the 63-65-kDa protein showing CaR-dependent increases in tyrosine phosphorylation in Fig. 5A is the same protein identified by Medema et al. (25). CaR Activation and ERK1 Kinase Activity The observation of increased c-SRC kinase activity and increased tyrosine phosphorylation of specific proteins in response to elevated extracellular calcium raised the possibility that downstream proliferation-associated signaling events were also stimulated in a CaR-dependent fashion. To obtain a direct measurement of the effects of the CaR on an important proliferation-associated pathway, we measured changes in mitogen-activated protein kinase activity in response to extracel-lular Ca 2ϩ and Gd 3ϩ using in vitro kinase assays with immunoprecipitated ERK1 and GST-Elk1 as a substrate. As shown in Fig. 6, control Rat-1 cells displayed a 10-to 25-fold increase in ERK1 kinase activity in response to either 1 mM Ca 2ϩ or 100 M Gd 3ϩ . By comparison, EGF treatment produced a 48-to 68-fold increase in ERK1 activity. Rat-1 cells transfected with the R796W mutant CaR showed a nearly complete inhibition of ERK1 kinase activity in response to either Gd 3ϩ or Ca 2ϩ (Fig. 
6B); ERK1 activation in response to EGF was also reduced in these cells but remained significantly higher than control values (p Ͻ 0.001). These results suggest that the extracellular calcium-dependent changes in SRC activity may be associated with activation of mitogen-activated protein kinase signaling pathways and that these pathways are disrupted in the presence of the mutant CaR. The observation of decreased ERK1 kinase activity in response to EGF in the presence of overexpressed mutant R796W-CaR suggests that cross-talk may exist between the CaR and the EGF receptor, as has been previously documented between the endothelin receptor and the EGF receptor (26,27). It is possible that the R795W-CaR may be inhibiting the activity of intermediate proteins involved in signaling from the EGF receptor. Role of c-SRC in Calcium-dependent Activation of ERK1 If the CaR-dependent activation of c-SRC is an essential component of the signal transduction pathway leading to a calcium-dependent increase in proliferation and mitogen-activated protein kinase activity in Rat-1 cells, then inhibition of c-SRC activity should significantly reduce or prevent the extracellular calcium-dependent increase in ERK1 activity and FIG. 4. Activation of c-SRC by Gd 3؉ treatment. Rat-1 cells stably transfected with either pcDNA (vector; ٗ), pcDNA-CaR (CaR; _), or pcDNA-CaR-R796W (CaR-R796W; u) were grown to confluence in DMEM at 1.7 mM Ca 2ϩ and then incubated in low calcium medium (50 M) for 4 h. Experimental plates were exposed to 250 M Gd 3ϩ for 5 min prior to lysis. Lysates were immunoprecipitated with anti-SRC antibody and kinase activity was assessed as autophosphorylation. Relative activity was quantified by PhosphorImager analysis and is expressed as the mean Ϯ S.D. normalized to c-SRC, n ϭ 3; * represents p Ͻ 0.05 compared with low Ca 2ϩ value. FIG. 5. CaR-mediated changes in protein tyrosine phosphorylation. Rat-1 cells stably transfected with either pcDNA, pcDNA-CaR, or pcDNA-CaR-R796W were grown in low calcium medium (0.3 mM) for 4 h (panel A). Calcium was elevated to 1.8 mM where indicated, and cells were harvested as described. Cell lysates normalized to contain 200 g of protein/lane were immunoprecipitated with antiphosphotyrosine antibody 4G10 and immunoblotted with the same antiphosphotyrosine antibodies. Arrows indicate three proteins that show increased tyrosine phosphorylation in response to elevated extracellular calcium in the presence of wild-type CaR. The 41-kDa protein was also phosphorylated in the presence of mutant CaR. Wild-type Rat-1 cells were grown in low calcium medium (0.3 mM) for 4 h (panel B). The high calcium group was exposed to 2 mM extracellular calcium for 15 min before lysis. Cell lysates normalized to contain 200 g of protein were immunoprecipitated with antiphosphotyrosine antibody 4G10 and immunoblotted with anti-FAK antibodies. The arrow indicates the position of p125 FAK . Rat-1 proliferation. This hypothesis was tested by measuring ERK1 kinase activity and thymidine incorporation in Rat-1 cells stimulated with 2.0 mM Ca 2ϩ in the presence or absence of herbimycin, a tyrosine kinase inhibitor with selectivity for c-SRC (28). As shown in Fig. 7, treatment with herbimycin (200 ng/ml) inhibited both the increase in ERK1 kinase activity (open bars) and the increase in thymidine incorporation (widely hatched bars) induced by elevated extracellular calcium. The values observed in the presence of herbimycin were approximately 20% of control values (Fig. 7). 
These data indicate that activation of c-SRC or a related cytoplasmic tyrosine kinase is required for the CaR-mediated activation of proliferative signals involving ERK1 activation. Neither wortmannin nor pertussis toxin could inhibit the calcium-induced activation of ERK1 (data not shown), implying that neither pertussis toxinsensitive G proteins nor phosphatidylinositol-3 kinases play an important role in signaling from the CaR in Rat-1 cells. To determine whether activation of ERK kinases was required for increased proliferation of Rat-1 cells in response to agonists of the CaR, we measured thymidine incorporation in Rat-1 cells as a function of extracellular calcium concentration in the presence or absence of PD98069, a specific inhibitor of the mitogen-activated protein kinase kinase MEK1 (29). PD98069 effectively inhibited the increase in the thymidine incorporation that is normally observed in response to elevated extracellular calcium (Fig. 7, closely hatched bars). In contrast, the ability of EGF to stimulate thymidine incorporation was only partially reduced in the presence of PD98059 (approximately 75% of control values). This result indicates that MEK1dependent activation of ERK kinases is an essential component of the signaling mechanism leading from CaR activation to increased proliferation of Rat-1 cells. DISCUSSION In this report we provide evidence demonstrating expression of functional CaR on fibroblastic cells. The physical presence of CaR on rat fibroblasts was established by immunodetection of CaR protein with specific anti-CaR antibodies and by Northern hybridization analysis. Sequence analysis of reverse transcription-polymerase chain reaction products representing the conserved transmembrane domain indicates that the fibroblast CaR is identical in sequence to the kidney and brain CaR, at least in this subdomain. Production of IP 3 and release of intracellular calcium in response to extracellular Ca 2ϩ or Gd 3ϩ provided pharmacological evidence that the immunoreactive protein represented functional receptors. We also present data bearing on the biological function of the CaR in fibroblastic cells. We show that Rat-1 cells respond to elevated extracellular Ca 2ϩ or Gd 3ϩ with an increase in the activity of proliferation-associated signaling events, including activation of the mitogen-activated protein kinase ERK1. Primary dermal fibroblasts have been shown to increase thymidine incorporation in response to elevated extracellular Ca 2ϩ (5); in this report we demonstrate a similar response in Rat-1 fibroblasts. These results indicate that extracellular Ca 2ϩ can have cell-type specific effects on proliferative pathways, stimulating increased proliferation in fibroblasts (this report and Ref. 5) while inhibiting proliferation in keratinocytes (1,2). Opposing effects of a common agonist on fibroblasts and keratinocytes is not unknown; the ability of transforming growth factor ␤ to stimulate proliferation in fibroblasts while inhibiting proliferation in keratinocytes (30, 31) presents an established example. FIG. 6. CaR-dependent activation of ERK1. Rat-1 cells stably transfected with either the pcDNA vector (wild-type CaR) or pcDNA-CaR-R796W (R796W) were grown to 90% confluence and then made quiescent in serum-free DMEM for 24 h. Cells were changed into low calcium medium (0.3 mM Ca 2ϩ ) for 4 h prior to addition of agonists as indicated. 
Cells were harvested 10 min after agonist addition, and lysates containing 200 g protein were immunoprecipitated with anti-ERK1 (Santa Cruz Biotechnology). In vitro kinase assays using GST-Elk1 as the substrate were performed as described (17), products were resolved by SDS-polyacrylamide gel electrophoresis, and the dried gel was imaged by PhosphorImager. Treatments were as follows: control, no additions; Ca 2ϩ , added Ca 2ϩ to 2 mM final concentration; Gd 3ϩ , 100 M Gd 3ϩ ; EGF, 10 ng/ml. Panel B, normalization of PhosphorImager data to immunoprecipitated ERK1. Results represent mean and standard deviation of two independent experiments. The mechanisms by which changes in extracellular Ca 2ϩ signal a change in the proliferation of keratinocytes have not yet been established. Exposing keratinocytes to elevated extracellular Ca 2ϩ is associated with an increase in the kinase activity of c-SRC and a decrease in the activity of c-YES (6,32); the mechanism producing this response has not been established. An increase in the apparent phosphorylation of an approximately 62-kDa protein co-immunoprecipitated with anti-rasGAP antibodies has also been demonstrated and attributed to activation of a calcium-binding receptor as opposed to Ca 2ϩ influx (7,9). However, these data remain correlative as it has not yet been shown that disrupting these responses to elevated Ca 2ϩ alters the antiproliferative response of keratinocytes to extracellular Ca 2ϩ . We have shown that many of these same signaling events occur in Rat-1 fibroblasts exposed to elevated extracellular Ca 2ϩ or Gd 3ϩ . Specifically, we have shown that these treatments are associated with an increase in the kinase activity of c-SRC and an increase in ERK1 kinase activity. Furthermore, we have used the nonfunctional CaR mutant R796W as a tool to disrupt the function of the endogenous CaR. In addition to its identification as the genetic mutation responsible for at least one form of hereditary severe neonatal hyperparathyroidism (15), CaR-R796W has been shown to function as an interfering mutant of the CaR when co-expressed with wild-type CaR in HEK293 cells and assayed by measurement of intracellular calcium release in response to Gd 3ϩ (16). We have shown that overexpression of CaR-R796W substantially inhibited the changes in SRC activity and ERK1 activation observed in response to agonists of the endogenous CaR. These results strongly imply that activation of the CaR is mechanistically involved in the activation of proliferative signaling pathways by extracellular Ca 2ϩ . Current information about signal transduction events downstream of ligand binding to the CaR is confined to the demonstration of IP 3 production and intracellular calcium release in response to CaR-specific agonists, such as Gd 3ϩ (13,33). In the parathyroid, elevating extracellular calcium to 5 mM produces a pertussis toxin-sensitive decrease in cAMP levels (34), presumably through a CaR mediated activation of G i , although this study preceded the cloning of the CaR. The ability of extracellular calcium to induce chemotaxis in osteoblasts is also inhibited by pertussis toxin, but it is insensitive to wortmannin, a chemical inhibitor of phosphatidylinositol-3 kinase (35). In contrast, activation of the CaR in AtT20 pituitary tumor cells is associated with a small increase in cAMP levels, and this increase is not inhibited by pertussis toxin (36). 
In our studies of CaR function in fibroblasts, we have found that the ability of extracellular calcium to stimulate ERK activity is insensitive to either wortmannin or pertussis toxin. These results suggest that the immediate consequences of CaR activation may show significant cell type variablity, presumably in association with the availability of particular G proteins for coupling to the CaR. Our observation of pertussis toxin-insensitive changes in tyrosine kinase activity following activation of the CaR in Rat-1 fibroblast suggests that the CaR may couple to members of the Gq/11 family in these cells (37). Results presented in this report provide evidence for a signal transduction pathway that connects the CaR to activation of SRC, tyrosine phosphorylation of known SRC substrates such as focal adhesion kinase, activation of ERK1, and ultimately, increased proliferation. In complementary experiments, we have shown that activation of SRC (or a related cytoplasmic tyrosine kinase) is required for the activation of ERK1 in response to elevated extracellular calcium and that functional CaR are required for activation of SRC and ERK1 in response to elevated extracellular calcium. Furthermore, the proliferative response to extracellular calcium can be prevented if any of these three key components (CaR, SRC, or ERK1) is inhibited by mutation or chemical inhibitors. These results support the existence of a proliferative pathway linking the CaR to activation of SRC and ERK1 and provide a potential mechanism for the known ability of extracellular calcium to modulate proliferation in a variety of cell types.
7,733.6
1998-01-09T00:00:00.000
[ "Biology" ]
Water Quality in Hydroelectric Sites The most widely used form of renewable energy is hydropower, which produces electrical power using the gravitational force of falling or flowing water. Compared to fossil-fuel-powered energy plants, hydropower plants are considered a "green" energy source, because they do not produce direct waste and emit almost no greenhouse-gas carbon dioxide. Hydropower is the most important source of renewable electricity generation (86.3%) and is essential for operating the other renewable energy sources, whose generation is random. Introduction The most widely used form of renewable energy is hydropower, which produces electrical power using the gravitational force of falling or flowing water. Compared to fossil-fuel-powered energy plants, hydropower plants are considered a "green" energy source, because they do not produce direct waste and emit almost no greenhouse-gas carbon dioxide. Hydropower is the most important source of renewable electricity generation (86.3%) and is essential for operating the other renewable energy sources, whose generation is random. Finally, a review of the chemical and water-quality evolution in time will be presented for a hydroelectric site in Romania. Fig. 1. Cross section of a hydroelectric power plant During warm seasons, large reservoirs become subject to thermal stratification. Because the upper layers are close to the free water surface, they have a higher level of dissolved oxygen (DO). Conversely, the lower layers have a low level of DO, mainly because of the organic sediments at the bottom of the reservoir. When the DO level drops below 5.0 mg/l, aquatic life is endangered, and large quantities of fish can die if the DO remains at 1÷2 mg/l for a few hours. In hydropower plants, the water that goes to the turbines is taken from the lower layers of the reservoirs, sometimes with low DO content, which can affect the downstream water quality. The DO level of the downstream water also depends on the water head, periodic temperature variations, the intensity and frequency of rain, the hydropower plant design and its operating regimes. Recently, the number of studies concerning water quality in hydropower releases has increased. Many environmental or ecological issues have been reported for different types of hydroelectric schemes. Scientists and engineers are trying to find solutions and mechanisms that will improve water quality, especially the DO level. Generally, the low DO level is caused by organic sediments left on the reservoir bottom from the initial filling. When these organic sediments decompose, they absorb oxygen from the water, producing hydrogen sulphide, carbon dioxide and methane (a greenhouse gas). This pollution alters the local flora and fauna, even causing the total extermination of some aquatic species. A low DO level occurs when the reservoir has a depth greater than 15 m and a volume bigger than 61·10⁶ m³, the power output is more than 10 MW, and the retention time is longer than 10 days. Romania has about 170 hydroelectric sites, a quarter of them having reservoirs larger than 61·10⁶ m³ and deeper than 15 m, so they are susceptible to a low DO level (Bucur et al., 2010). The usual methods used to increase the DO level in the waters downstream of hydropower plants include selective intakes, air diffusers, and hub and draft tube deflectors. These devices are used in hydropower plants with different success rates in the aeration process.
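The susceptibility criteria quoted above (depth greater than 15 m, volume above 61·10⁶ m³, power output above 10 MW, retention time above 10 days) amount to a simple screening rule. The Python sketch below encodes that rule; the reservoir names and figures are invented placeholders, not Romanian site data.

```python
# Screen reservoirs for susceptibility to low dissolved-oxygen (DO) levels,
# using the thresholds quoted in the text (Bucur et al., 2010).
THRESHOLDS = {"depth_m": 15.0, "volume_m3": 61e6, "power_MW": 10.0, "retention_days": 10.0}

def low_do_susceptible(depth_m, volume_m3, power_MW, retention_days):
    """True if all four criteria for low-DO susceptibility are exceeded."""
    return (depth_m > THRESHOLDS["depth_m"]
            and volume_m3 > THRESHOLDS["volume_m3"]
            and power_MW > THRESHOLDS["power_MW"]
            and retention_days > THRESHOLDS["retention_days"])

# Hypothetical example reservoirs (names and figures are placeholders).
reservoirs = {
    "Site A": (25.0, 120e6, 50.0, 30.0),
    "Site B": (12.0, 40e6, 8.0, 5.0),
}
for name, params in reservoirs.items():
    print(name, "susceptible" if low_do_susceptible(*params) else "not susceptible")
```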
Generally, in order to increase the DO by 1 mg/l, an air quantity of about 1% of the water volume is necessary (March et al., 1992). A bibliographical review is presented in this paper and recommendations are made for the implementation of aeration devices. There is no legal requirement for DO level control downstream of hydropower plants, but there are intense concerns regarding this issue. Usually, turbine aeration is performed only to reduce the turbine central vortex, in order to increase efficiency and reduce unsuitable pressure fluctuations and structural vibrations. The aeration made to increase the DO level downstream of a hydropower plant must be more consistent. Injection of a larger air quantity can decrease the turbine efficiency; therefore air injection becomes an important factor in the balance between power output and ecology. Hydroelectric reservoirs Regarding the mean multiannual water flow, the surface water sources in Romania are much larger than the ground water sources. Each type of water source has its own physico-chemical and biological characteristics, varying from region to region, depending on the mineralogical composition of the crossed areas, the contact time, temperature, weather conditions, etc. Water accumulated in reservoirs has physico-chemical qualities significantly different from the water flowing in the river before the dam construction and the hydropower development. Thus, the processes occurring in lakes can have an important impact on water quality. On the one hand, the stagnation of water leads to a natural settling of suspended materials, which gives the water good transparency and makes it less sensitive to weather conditions. On the other hand, the stagnation of water leads to thermal and chemical stratification, which excludes water circulation in the vertical direction. Seasonal stratification of water in hydroelectric reservoirs The thermal structure of lakes varies with climate, the configuration of the lake basin, the water intake surface and the total mineralization of the water. The most common structure is direct stratification, which involves higher temperatures at the water surface and lower temperatures towards the bottom. For this kind of stratification, the decrease of water temperature is not uniform with depth. Temperate regions are characterized by dimictic lake ecosystems. Most lakes in Romania are considered dimictic, meaning they mix twice a year, in spring and fall. In the winter season, reverse stratification sets in, while in the summer period a direct stratification appears. Lake dynamics are characterized by energy and mass exchange processes. The dominant energy flow comes from the kinetic energy of wind and the thermal energy produced by solar radiation. The vertical profile of temperature/density established in a lake results from superposing these two energy contributions (Dumitran and Vuta, 2011). Thermal stratification consists in the existence of a vertical thermal gradient in the water mass. The low thermal conductivity of water also contributes, ensuring that thermal energy is only very slowly transferred to the bottom layers of the water. This transfer is accelerated by vertical turbulent mixing and convective cooling of the water body. In time, the cumulative effect of heat loss and convective cooling can be felt throughout the water column, reducing the lake water temperature and causing a full mixing between the water layers (Pourriot and Meybeck, 1995).
The cumulative effect of heat loss and convective cooling can be felt in the entire water column, thus reducing the lake water temperature and producing a full mixing between the water near the surface and the deeper layers. Turbulent mixing is a process that counteracts stratification; it tends to destabilize the water column and is caused by shear induced by wind action (Stevens and Imberger, 1996). Convective cooling occurs only if the net heat flow from the lake surface is negative. In that case the lake is losing heat to the atmosphere and the water layers near the surface are cooling, becoming denser than the deeper waters. At this point thermal stratification becomes unstable, and the volumes of water near the surface descend to a water layer with the same temperature. Because of friction, this sinking water entrains other volumes of water, producing a new vertical mixing. The movement occurs without any wind energy contribution, and there is a destratification tendency of the superior layer down to the equilibrium depth (Fig. 2). In summer, a typical temperature/density profile for a temperate lake is composed of two layers of small temperature/density gradient (epilimnion, hypolimnion) divided by a layer of high temperature/density gradient (metalimnion). Over the year, the lake water follows a cycle. In spring the ice melts into the lake, the wind picks up and the lake mixes. This is called the spring turnover. Oxygen and nutrients get distributed throughout the water column as the water mixes. Then, as the weather becomes warmer, the surface water warms again and sets up the summer stratification. During the summer the lake has a baroclinic structure, so at the surface a stable warmer layer of water overlies a colder water layer. The water movement due to wind and convection currents produces a mixing process which homogenizes just the epilimnion, while the water temperature in the hypolimnion is kept at around 4 ºC. In the fall the sunlight is not as strong as during summer and the nights become cooler. This change in season allows the epilimnion to cool off. As the water in the epilimnion cools, the density difference between the epilimnion and hypolimnion is no longer as great, and wind can then mix the layers. In addition, when the epilimnion cools it becomes denser and sinks into the hypolimnion, mixing the layers. This mixing allows oxygen and nutrients to be distributed across the whole water column. In winter, when the surface water temperature drops below 4 ºC, circulation ceases again and winter stagnation appears, characterized by an inverse thermal stratification. During this period the water mass is characterized by lower temperature at the surface and higher temperature towards the bottom. Lake stratification entails lower dissolved-oxygen concentrations at the bottom and the emergence of anaerobic oxidation processes. The stratification of lakes has a negative impact on the trophic evolution of these ecosystems. Thus, the organic matter content and nutrient concentration will increase, and sometimes hydrogen sulfide will even appear at the bottom of lakes. Day/night stratification of water in hydroelectric reservoirs In temperate regions the temperature differences between day and night are significant, so the water cooled during the night goes down to a deeper layer. This depth is directly correlated with the reservoir size, so it can vary from 5 m up to 20 m (Read et al., 2011).
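The turnover cycle just described hinges on fresh water being densest near 4 ºC. As a numerical illustration, the sketch below evaluates a standard empirical density formula (Kell's 1975 fit for pure water, which we supply here; it is not taken from this chapter) to show the density contrast that stabilizes summer stratification and disappears on cooling.

```python
def water_density(t_celsius):
    """Density of pure water in kg/m^3 (Kell 1975 empirical fit, 0-150 degC)."""
    t = t_celsius
    num = (999.83952 + 16.945176*t - 7.9870401e-3*t**2
           - 46.170461e-6*t**3 + 105.56302e-9*t**4 - 280.54253e-12*t**5)
    return num / (1 + 16.879850e-3*t)

# Summer stratification: a warm epilimnion over a hypolimnion held near 4 degC.
for label, t in [("epilimnion (22 degC)", 22.0),
                 ("metalimnion (12 degC)", 12.0),
                 ("hypolimnion (4 degC)", 4.0)]:
    print(f"{label}: {water_density(t):8.3f} kg/m^3")

# The density maximum near 4 degC is why autumn cooling of the surface layer
# eventually removes the density gradient and lets the wind mix the column.
print("max density near 4 degC:",
      round(max(water_density(t/10) for t in range(0, 300)), 3), "kg/m^3")
```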
Under these day/night conditions, a thermocline layer appears, characterized by a temperature drop of 10 to 15 ºC. Eutrophication of hydroelectric reservoirs The biological quality of water is essentially affected by eutrophication, a phenomenon favored by building a hydropower plant. Eutrophication has proved to be one of the most widespread and serious anthropogenic disturbances to aquatic ecosystems. The major cause of eutrophication is the increased loading of nutrients, especially phosphorus. Increasing wastewaters, the introduction of phosphorus-containing detergents, the use of fertilisers, and erosion in the watershed are the major reasons for the increased loading of nutrients. The effects of the eutrophication phenomenon reflect negatively on water quality, on the reservoir ecosystem and also on the river ecosystem (Fig. 4). Thus, eutrophication may in some cases even make the water unusable for certain purposes. The effects of eutrophication on the lake ecosystem include:
• organoleptic changes of the water (color, taste, odor and turbidity) caused by the increasing biomass of planktonic algae. The water may have a green color due to a high content of green algae or diatoms, a red color in the presence of blue-green algae species, or even brown. This effect gives the water an unaesthetic aspect and leads to additional costs when the water is used as a source of drinking water;
• premature clogging of the filters and grids of treatment plants supplied directly from the lake, due to the increased phytoplankton biomass;
• biological clogging of the lake, and therefore a reduction of its volume, due to the growth of organic matter content and organic detritus at the lake bottom;
• inability to release and transform the organic matter due to its excessive quantity;
• a pronounced decrease in the dissolved-oxygen content, especially at the bottom of the lake, due to increased organic matter decomposition reactions;
• a pronounced increase in the concentration of carbon dioxide, iron, manganese, ammonia and hydrogen sulfide due to the onset of anaerobic decomposition conditions when dissolved oxygen is depleted;
• corrosion of water storage facilities due to the occurrence of precipitation reactions of iron and manganese. The same effect occurs in water in the presence of some Cyanophyceae species (Oscillatoria), which corrode steel tanks in the presence of light;
• the appearance of toxic substances in the water released by some Cyanophyceae species (Microcystis aeruginosa and Anabaena flos-aquae), causing human gastrointestinal disease;
• the replacement of special fish species by common species due to changes in water quality.
Of all the effects of water eutrophication, the most important consequence is the decrease of oxygen availability. During the day, plants produce oxygen through photosynthesis, using sunlight. In the night, all organisms consume the oxygen dissolved in the water by endogenous respiration. When excessive amounts of biomass exist in the water body, the decomposing organic matter will lead to higher oxygen consumption. Thus the oxygen in the water will be depleted, leading on the one hand to the impossibility of aquatic organisms breathing and on the other hand to the occurrence of anaerobic decomposition. Therefore all the biotic components of the aquatic ecosystem will suffer.
The heterotrophic organisms will be the first affected (fish and shellfish), because of their increased sensitivity to changes taking place in the chemical composition of the water, like the excessive alkalinity that occurs during intense photosynthesis and the lack of dissolved oxygen. Eutrophication leads to changes in the populations of organisms that live in the water. This happens through changes in ecological factors, which become limiting factors for the development of the aquatic organisms. Mostly abiotic ecological factors are involved, such as light, temperature, water movement or the quantity of certain nutrients in the water. Since eutrophication involves a high input of allochthonous nutrients, light is often the limiting factor for algal blooms. Thus, due to changes in the optical properties of the water caused by vegetation covering the surface, a high mortality of zoobenthos, nekton and zooplankton appears. The nutrient demand varies widely from one species to another, both in terms of type and nutrient intake, so that a deterioration of the relationship between nutrients (nitrogen, phosphorus, silicon and iron) determines changes in the qualitative and quantitative composition of the phytoplankton. Of all the nutrients in aquatic ecosystems, phosphorus receives the most attention, because it is essential for all phytoplankton species. Of the 5000 phytoplankton species with high abundance and wide geographical distribution, only 300 produce algal blooms. Among the species that produce large biomass are many Cyanobacteria, which have the capacity to produce toxic substances in the water, with effects on health. Changes in the phytoplankton composition of an aquatic ecosystem will cause major changes in the entire trophic chain. Thus, the composition of the primary consumers (zooplankton and fish) will change (Cooke et al., 2005). Accidental pollution of hydroelectric reservoirs The death of fish is mainly caused by the level of dissolved oxygen. There are also some situations of water pollution with toxic substances, but they will not be detailed in this paper. As an example, on the river Târnava Mare (downstream of Odorheiul Secuiesc, Romania), a historical pollution incident happened in 2002 (Table 1) (Serban, 2005). It caused fish mortality, because of the high temperature (over 20 °C), a water flow lower than the annual average, low water velocity (0.3÷0.6 m/s) and overloading with organic substances from an upstream wastewater treatment plant. All of these brought the DO level below 4 mg/l. If the water pollution is limited, the reconstruction of the original water quality is possible only by eliminating the accidental pollution sources. Management of water quality in hydroelectric reservoirs Because lake stratification has a negative effect on water quality, the depth of reservoirs is a great disadvantage from the water quality point of view. Therefore, very deep lakes are not desired. For deeper reservoirs, one measure to combat stratification is to locate water intakes at different depths. The main effect of stratification in the hypolimnion and sediments is the increased consumption of oxygen and the appearance of anoxic conditions, which impoverish the deep-water fauna. This condition may also lead to a series of chemical and microbial processes like nitrate ammonification, denitrification, desulphurication and methane formation. The release of phosphorus from the sediments is extremely important, as it accelerates eutrophication.
The following actions are required to maintain water quality in the reservoirs used by a hydroelectric development:
- watershed management through riverbed erosion control works, which will reduce the intake of silt into the lake;
- discharge of effluents downstream of the reservoir section;
- reducing water pollution;
- setting up sanitary protection perimeters around the lake and adjacent control of tourist areas;
- prevention of lake stratification and ensuring vertical water circulation through water intakes at various depths and periodic use of the bottom discharge system, at a flow able to ensure riverbed hygiene downstream and hypolimnion renewal;
- discharges into the reservoir body should preferably be submerged and perpendicular to the surface of the lake, for a maximum effect of aeration and movement of the water layers;
- flows discharged from the tailrace must have a minimum impact on the downstream environment. Discontinuous discharge destroys the riverbed, erodes the banks and can even break roads and bridges. Such flows also have a stressful effect on fish.
In this way the water quality can be maintained in reservoirs without negative effects on downstream water quality. Aeration methods inside hydraulic turbines As presented before, the water quality downstream of a hydropower plant depends mainly on the quality of the water in the upstream reservoir. After the water passes through the turbines, a supplementary degasification of the water takes place because of the low pressure in the turbine draft tube, which lowers the DO level. This process happens mostly in Francis turbines at partial-load operating regimes. This is the main reason for developing and installing new aeration methods to increase the DO level of the turbined water. From the hydraulic point of view, an air quantity injected downstream of the turbine runner can affect turbine efficiency. For this reason, it is recommended that the air inflow be at most 1÷3% of the turbine water flow. Existing solutions for aeration of hydraulic turbines Measurement data are available for different technical solutions for water aeration, such as autoventing turbines developed by Voith Hydro and the Tennessee Valley Authority. The aeration can be made central, distributed or peripheral (through the outlet edge of the blade) (Figure 5). Tests were made for each aeration system individually and combined with the others. The air injection is made through new or existing passages (vacuum breaking system and snorkel tubes), using air compressors or natural air admission (preferred for its lower cost). A system with air injection in the turbine and another one with oxygen injection through porous hoses in the penstock were installed at Tims Ford Dam (Harshbarger et al., 1999). The autoventing turbines (central, peripheral or distributed) were implemented for the first time at Norris Dam. These aeration systems can be used individually or combined. The justification for any solution depends on many parameters characterizing each hydroelectric site. For the autoventing turbines all combinations were tested. When a unit operates with all aeration systems, the DO increased by up to 5.5 mg/l. In this case the air absorbed in the turbine is twice the air absorbed by the original runners. Depending on the operational conditions and the aeration system, the energy efficiency decreased by 0÷4%. Fig. 5. Aeration methods for autoventing turbines In other research (March et al., 1992), the DO level in the downstream water was raised up to 6 mg/l, while trying to affect the energy efficiency as little as possible.
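Two numbers quoted in this chapter bracket what turbine aeration can achieve: roughly 1% of air by water volume per 1 mg/l of DO gain (March et al., 1992), and a recommended cap of 1÷3% on the air-to-water flow ratio. A back-of-the-envelope sketch combining them follows; the turbine discharge value is a placeholder and the linear rule of thumb is only a first approximation.

```python
# Rough estimate of DO gain from turbine aeration, using the rule of thumb of
# ~1% air (by water volume) per 1 mg/l of DO increase (March et al., 1992)
# and the recommended cap of 1-3% on the air-to-water flow ratio.
DO_GAIN_PER_PERCENT_AIR = 1.0   # mg/l per 1% air fraction (rule of thumb)

def required_air_flow(water_flow_m3s, target_do_gain_mgL):
    """Air flow (m^3/s) needed for a target DO gain; warn beyond the 3% cap."""
    fraction = target_do_gain_mgL / DO_GAIN_PER_PERCENT_AIR  # percent of water flow
    if fraction > 3.0:
        print(f"warning: {fraction:.1f}% air exceeds the 1-3% recommendation")
    return water_flow_m3s * fraction / 100.0

# Placeholder turbine discharge of 100 m^3/s.
for gain in (1.0, 2.0, 4.0):
    q_air = required_air_flow(100.0, gain)
    print(f"DO gain {gain} mg/l -> air flow {q_air:.1f} m^3/s")
```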
In that work, a few solutions were tested, such as injecting air through the runner, the design of a new deflector, low-pressure-edge blades, a coaxial diffuser, injection in the conical part of the draft tube, or combinations of them. Aeration performance can be evaluated by measuring the DO level upstream and downstream of the turbine, as the uptake ΔDO = DO_d − DO_u, where DO_u and DO_d are the DO concentrations upstream and downstream of the turbine. The effect of the aeration on the hydraulic efficiency of the turbine is Δη = (η_0 − η_a)/η_0, where η_a is the turbine efficiency with the aeration system and η_0 is the turbine efficiency without the aeration system. Two aeration systems were tested at Tims Ford Dam (Harshbarger et al., 1995) in order to achieve a DO level of 6 mg/l, by injecting air in the turbine and oxygen in the penstock through porous rubber pipes. For an upstream DO level of at most 1 mg/l, when both aeration systems were operational the DO level reached 5.2 mg/l, and when only the air system was on the DO level was 4.2 mg/l. The air was injected with high-pressure compressors under the runner cover or in the draft tube. Also, porous line diffusers were installed in the penstock for oxygen injection, in case the desired DO level (6 mg/l) was not reached with the air injection system alone. The cost of this oxygen injection system in 1995 was $300,000. Both systems were used during low-DO periods. In order to evaluate the DO level increase and the turbine efficiency, the air, oxygen and water discharges were varied during the tests. In all cases, the turbine efficiency decreased by at most 1%, so the aeration did not affect it much. However, this technology was rejected because of the ratio between the initial installation cost and the long-term operation and service costs. Another research study, carried out over two years, was made at Bagnell Dam (Sullivan et al., 2006) on two turbines (with runner aeration orifices), an old one and a new one with some modifications. The tests were made for 51 combinations of water discharge, downstream water depth and aeration orifice diameters, and the water discharge through the orifices, the DO level and the water temperature were determined in sections upstream and downstream of the turbine. As a general conclusion, the oxygen transfer efficiency increases with increasing air discharge and downstream water level. For the older turbine, it was noticed that for smaller openings the runner orifices are more efficient than the draft tube orifices, and that when both aeration systems were operational and the water discharge was small, the DO level was over 5 mg/l. Some researchers have made studies concerning DO, temperature and fish growth downstream of hydro plants (Boring, 2005). For the Saluda River (U.S.), a model based on historical data from 1990–2005 was developed. In accordance with the Environmental Protection Agency (EPA), the criteria established for 2006 are: for survival of trout, min. 3 mg/l; for growth protection, 6.5 mg/l as a 30-day average; and for sensitive cold-water invertebrates, min. 4 mg/l. The studies and research continued with mathematical modelling of the flow (Rohland et al., 2010) for the three classical aeration methods in a Francis turbine. Each classical injection method has different characteristics and influences the size and distribution of the bubbles that flow through the draft tube, as well as the operating efficiency. The parameters that influence the efficiency are: the shape and length of the draft tube, the bubble retention time, the quantity of air (or the void fraction), the air admission intake, and the bubble size and distribution.
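Taking the uptake ΔDO = DO_d − DO_u and the efficiency penalty Δη = (η_0 − η_a)/η_0 as reconstructed above, an aeration test reduces to a couple of arithmetic operations. The sketch below evaluates them for two cases patterned on the Tims Ford figures quoted in the text; the efficiency values are placeholders chosen to respect the reported penalty of at most 1%.

```python
# Evaluate an aeration test from upstream/downstream measurements, using the
# uptake and efficiency-penalty definitions reconstructed above. The DO values
# follow the quoted Tims Ford results; the efficiencies are placeholders.
def do_uptake(do_upstream, do_downstream):
    """DO gained across the turbine, mg/l."""
    return do_downstream - do_upstream

def efficiency_penalty(eta_no_aeration, eta_with_aeration):
    """Relative turbine-efficiency loss caused by air injection."""
    return (eta_no_aeration - eta_with_aeration) / eta_no_aeration

tests = {
    "air only":     {"do_u": 1.0, "do_d": 4.2, "eta0": 0.92, "eta_a": 0.912},
    "air + oxygen": {"do_u": 1.0, "do_d": 5.2, "eta0": 0.92, "eta_a": 0.911},
}
for name, t in tests.items():
    print(f"{name}: uptake {do_uptake(t['do_u'], t['do_d']):.1f} mg/l, "
          f"efficiency penalty {100*efficiency_penalty(t['eta0'], t['eta_a']):.1f}%")
```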
Among these, in terms of the quantity of air (the void fraction), central aeration is the most efficient. The calculations for turbine efficiency and the aeration methods were used to optimize the aeration solution at the Bridgewater plant, one of the first power plants designed with aeration in mind. In Romania, environmental impact and water quality are main concerns, but DO-level aspects are not taken into consideration. Even if the hydropower operators are preoccupied with environmental issues, there is no legal requirement for the DO level. Turbine aeration (especially in the central zone) is made only at a few sites, and only for hydraulic purposes (to reduce the central vortex at partial load), in order to reduce pressure fluctuations. Aeration efficiency and main parameters As mentioned before, two main parameters must be considered in the aeration process: the DO transfer achieved through air injection and the total energy consumption needed to realize it. The DO transfer necessary for water quality improvement depends on many physical parameters: the quantity of injected air, the gas-liquid contact surface, the contact time, the temperature, the pressure gradient of the flow, the DO level gradient, and the turbulence level of the flow. The energy consumption necessary to introduce the air into the liquid depends on the following operational parameters: the quantity of injected air, the injection method (natural or induced), and the influence of the air on turbine efficiency (flow changes). In order to obtain a good global balance, the DO transfer must be achieved with a minimum energy consumption. For this purpose it is necessary to generate an optimal gas bubble size, as small as possible (the positive effect is double, because of the increased contact surface and retention time), but with low hydraulic losses to reduce the energy consumption. The main parameters that must be considered to find the best compromise between the improvement of water quality and the modification of turbine efficiency are given in Table 2; on the energy side, they are the energy consumption for air injection (natural injection is more advantageous) and the efficiency losses due to changes in the internal flow. The air injected into the turbine affects its efficiency in two ways: through the flow perturbation caused by the air introduced into the water flow, and through the energy consumption necessary to introduce the air. Considering the above, the injected air flow is limited to 1÷3% of the water inflow. Usually, this air quantity is not enough for good aeration and to attain the minimum reference DO level (5÷6 mg/l) in the downstream waters. The natural admission of atmospheric air is preferred, because it uses the existing turbine depression. Mathematical modelling of turbulent two-phase flow in complex configurations is one of the most difficult parts of gas-liquid flow simulation, and remains unsolved. A more detailed representation of bubble movement and of its interaction with the liquid phase can lead to the development of better turbulence models. In spite of consistent efforts toward a correct description of closure rules for drag forces, lift forces and mass forces, the precise modelling of interfacial forces remains an open issue in this kind of numerical simulation. This section focuses on correlating the aeration quality (oxygen transfer for a constant injected air flow) with the energy efficiency of the transfer (the energy consumption for introducing the air flow).
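The compromise discussed in this section is naturally summarised by a single figure of merit: oxygen transferred per unit of injection energy. The sketch below ranks hypothetical operating points by that ratio; all numbers are invented for illustration, and only the ranking logic reflects the text (finer bubbles transfer more oxygen but cost more pumping power).

```python
# Figure of merit for aeration: oxygen transferred per unit of energy spent.
# Operating points are hypothetical; the point is the ranking logic, i.e. small
# bubbles raise transfer but also raise the pressure loss (pumping power).
operating_points = [
    # (label, oxygen transfer rate kg O2/h, injection power kW)
    ("large bubbles, low head loss", 0.8, 0.9),
    ("medium bubbles", 1.1, 1.1),
    ("fine bubbles, high head loss", 1.4, 2.2),
]

def aeration_efficiency(otr_kg_h, power_kw):
    """kg of O2 transferred per kWh of injection energy."""
    return otr_kg_h / power_kw

best = max(operating_points, key=lambda p: aeration_efficiency(p[1], p[2]))
for label, otr, p in operating_points:
    print(f"{label}: {aeration_efficiency(otr, p):.2f} kg O2/kWh")
print("best compromise:", best[0])
```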
Further on, the influence of the air bubble size on the DO transfer is presented, in relation to the pressure required by the aerator, for different air discharges. Five perforated metallic plates were used, with different orifice sizes (d) and identical geometries (Figure 6). The total perforated surface area is equal in all configurations (12 mm²). The tests were made under the same hydrodynamic conditions, in a tank with 79.2 l of water. For each plate, the DO level (C), the water temperature (t) and the pressure losses were determined (Table 3 shows the evolution of the mass transfer parameters with the orifice diameter for a constant air discharge Qair = 360 l/h). Figure 7 shows the influence of the orifice diameters on the standard aeration efficiency provided by the metallic plates for different air flow rates. Also, as the orifice diameter increases, the air flow rate must be increased as well, otherwise the plates generate bubbles from only half of their surface. A bigger air flow rate leads to increased pressure losses on the aeration device and diminishes the standard aeration efficiency. Finally, the efficiency of the oxygen transfer rate relative to the power needed to inject the air is obtained. Water chemical parameters in reservoirs of low-head hydropower plants It is important to study the water chemical parameters in large reservoirs over time. Big hydroelectric projects change the environment, creating retention lakes; the interaction between the water and the equipment materials causes different stages of corrosion. At the same time, the equipment and its operation can affect the water quality (oil leaks, water degasification through the turbines, etc.). A special particularity is that all five HPPs operate in both turbine and pump regimes, which means that important volumes of water are transported downstream and upstream at the same site. This could affect the water quality in a negative way, by preserving the effects of accidental pollution for a longer time. Because water quality and environment protection are main concerns, the chemical parameters of the water in the Olt River are periodically analyzed. For example, a few parameters recorded after ten years of operation and again after twenty years show that the long-term interaction between the water and the equipment of the HPPs did not affect the quality of the water. Generally, all parameters remained at the same values, or even decreased, except hydrogen sulphide, which nevertheless remains within the admissible values. During the analyzed period, there were no pollution accidents, neither at the hydropower plants on the Olt River nor on its tributaries. It is also important to determine the effect of the water on the equipment (corrosion). For this purpose, determinations of the pH indicator, the chlorine content and the manganese content were made at ten different measurement points along the Lower Olt cascade. The results showed, for the pH indicator, that five values are between 5.5÷6.5 and five are between 6.5÷7.0, all being within 6.5÷8.5, the reference limits for natural waters. The chlorine content is between 20 mg Cl/l and 106.5 mg Cl/l, and the total manganese contents are up to 0.004 mg Mn/l, but mostly zero. The total manganese content being so low, the possibility of a strong oxidizer appearing on the steel surface can be excluded (Bucur et al., 2010). These results show that after twenty years of continuous operation, the water quality in this complex hydroelectric site was preserved from the physical-chemical point of view. The dissolved oxygen quantity has a level that allows the preservation of aquatic life.
Conclusion Hydraulic energy is an essential green energy source and is important for the integration of the other energy sources into the energy system. However, since an essential life source, water, is used for energy generation, accompanying measures must be implemented at hydropower plant sites in order to conserve the green character of the power generation. The quality of the water at a hydroelectric site depends both on natural factors, like temperature variations, precipitation intensity and frequency, and thermal stratification of the reservoirs, and on the operational parameters and components of the hydroelectric site. The behaviour of the water in the lake has to be studied and considered in operation. If this behaviour is not taken into consideration, complete eutrophication of the lake can occur, with severe consequences for the ecological system. Another essential topic is related to the released water. The parameters required to preserve aquatic life have to be ensured; the main parameter is the DO, of which 5 mg/l is needed. Aeration devices can be implemented to improve the DO content. The implementation has to consider the compromise between the positive effect of air injection and the inconvenience of energy losses due to air injection (the energy needed for the injection and the energy losses due to the flow perturbation caused by the air injection). Aeration devices can be implemented in new facilities or during refurbishment. Studies are currently under way to improve the efficiency of aeration. The environmental impact of a hydroelectric power plant should be minimal, and the water parameters should be as close as possible to the natural watercourse values. The necessity of supervising the water quality is a reality that should be a main concern for all hydropower operators.
7,233.8
2012-05-16T00:00:00.000
[ "Engineering", "Environmental Science" ]
Models for Impurity Incorporation during Vapor-Phase Epitaxy Impurity incorporation during vapor-phase epitaxy on stepped surfaces was modeled by classifying rate-limiting processes into i) surface diffusion, ii) step kinetics, and iii) segregation. Examples were shown for i) desorption-limited Al incorporation during chemical vapor deposition (CVD) of (0001) SiC, ii) preferential desorption of C atoms from kinks during CVD of Al-doped (000-1) SiC, and iii) segregation-limited C incorporation during metalorganic vapor-phase epitaxy of (0001), (000-1), and (10-10) GaN. Introduction Impurity incorporation during vapor-phase epitaxy has been modeled via, for example, site competition [1,2] and surface vacancies [3,4]. The latter, however, cannot explain the variation in impurity doping around facets [5]. Moreover, in the cases of homoepitaxial growth of SiC and GaN, misoriented substrates are often used for polytype [6] and doping-uniformity [7] control, respectively. Accordingly, we modeled impurity incorporation during step-flow growth by taking Al-doped SiC and C-doped GaN as examples. We believe the models should be beneficial for determining allowable off-angle variations for desired doping-level uniformities in advanced devices. Although Al was chosen due to the availability of a thermodynamic model [8], N doping of SiC could be treated similarly under the assumption of the N segregation coefficient being unity [9]. Proposed Models Impurity incorporation during vapor-phase epitaxy on stepped surfaces was modeled by classifying the rate-limiting processes into i) surface diffusion [10], ii) step kinetics [11], and iii) segregation [12] (Table I). i) Desorption limits impurity incorporation at step-edges when the surface diffusion length λ is less than half of the average inter-step distance, λo. This should be the case for the incorporation of Al, whose λ was estimated to be less than 2 nm at 1550 °C [10], into stepped 4H-SiC (0001). This is due to the relatively large λo (e.g., 7.2 nm for θ = 8°) originating from four-bilayer-high steps [13]. Based on the Burton−Cabrera−Frank (BCF) theory [14], we derived the following equation for x in AlxSi1-xC [10]: (1) where Fi, Pi^e, and mi (i = Al, Si) are, respectively, the incident flux, equilibrium vapor pressure, and mass of atom i; K and γ are, respectively, the equilibrium constant and activity coefficient for AlC; Tg is the growth temperature; and kB is Boltzmann's constant. Eq. (1) explains why x was independent of the off-angle θ (ranging from 2° to 8°) when the C/Si ratio, r, was small (i.e., 1.8 [15]): due to the large PSi^e, the first term on the right-hand side, which corresponds to the Al desorption flux, became dominant (solid line in Fig. 1). Eq. (1) also explains why x increased with θ when r was large (i.e., 4−6 [16]): due to the small PSi^e, the second term on the right-hand side, which corresponds to the Al flux incorporated into the solid, became so large that x increased with the step density on the surface (dashed and dotted lines in Fig. 1).

Table I. Rate-limiting processes of impurity incorporation during vapor-phase epitaxy.

Host-atom desorption from kinks                  | λ less than λo/2      | λ much larger than λo/2
Preferential desorption of host atoms from kinks | Surface diffusion (i) | Step kinetics (ii)
Negligible desorption of host atoms from kinks   | Surface diffusion (i) | Segregation (iii)

ii) Preferential desorption of host atoms from kinks limits impurity incorporation at kinks even when λ >> λo/2.
This should be the case for the incorporation of Al into 4H-SiC (000-1), which has one-bilayer-high steps [13]. We assume that a C atom making two bonds with Si atoms stays at kinks, while a C atom making one bond with a Si atom easily desorbs from kinks [Fig. 2(a)]. Since r is typically small (e.g., r ≤ 6 [16]), some surface-diffusing Al atoms that arrive at kinks keep waiting (for an average time τC) until C atoms make one bond with Si atoms at kinks [Fig. 2(b)] before they are incorporated into the solid [Fig. 2(c)]. Based on the reported experimental results [16], the surface Al concentration nAl (normalized by the mean residence time τAl) was calculated (Fig. 3). nAl in the vicinity of step-edges (i.e., local minima in Fig. 3) on (000-1) is much larger than that on (0001), indicating a longer τC on (000-1). Fig. 2. Schematic illustrations of (a) preferential desorption of a C atom having one bond with a Si atom, (b) adsorption of a C atom to a dangling bond of a Si atom and bonding of an Al atom to two C atoms, and (c) bonding of another Al atom to three C atoms at kinks on 4H-SiC (000-1). iii) Segregation limits impurity incorporation even when λ >> λo/2 and desorption of host atoms from kinks is negligible. This should be the case for the incorporation of C into GaN, which is typically grown with the N/Ga ratio exceeding 1000 [17−19]; namely, soon after a N atom making one bond with a Ga atom desorbs from kinks, another N atom makes one bond with the Ga atom. When the length of time before the C concentration at the step-edge site reaches its equilibrium value, τstep, is much smaller than the mean time until a C atom incorporated at kinks moves through the step-edge site to the surface site, τ, the C concentration in the solid can be expressed as in [20], where Nsurf and Nstep are, respectively, the equilibrium C concentrations at the surface site and at the step-edge site, D is the diffusion coefficient in the solid, Vstep is the average step velocity, and a is the lattice constant. As shown in Fig. 4, the results for (0001) [17], (000-1) [18], and (10-10) [19] growths are well reproduced with a D of 2×10⁻¹³ cm²/s, which agrees with the experimentally determined value [21]. Fig. 4. Step-velocity dependences of carbon concentrations fitted to the reported results [17−19]. Summary Impurity incorporation during step-flow growth was modeled and exemplified by the SiC:Al and GaN:C cases. We believe the proposed models should contribute to determining allowable off-angle variations for desired doping-level uniformities in advanced SiC and GaN devices.
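The decision logic of Table I, together with the geometric estimate of the inter-step distance, can be captured in a few lines. In the sketch below, λo = h/tanθ with a step height of four Si-C bilayers; the 0.25 nm bilayer height is our assumption, chosen because it reproduces the quoted λo of about 7.2 nm at θ = 8°.

```python
import math

def inter_step_distance(step_height_nm, off_angle_deg):
    """Average terrace width lambda_o = h / tan(theta)."""
    return step_height_nm / math.tan(math.radians(off_angle_deg))

def rate_limiting_process(lambda_nm, lambda_o_nm, preferential_kink_desorption):
    """Decision logic of Table I."""
    if lambda_nm < lambda_o_nm / 2:
        return "surface diffusion"
    return "step kinetics" if preferential_kink_desorption else "segregation"

# Four-bilayer-high steps on 4H-SiC (0001); 0.25 nm per Si-C bilayer is an
# assumed value, giving h = 1.0 nm and lambda_o close to the quoted 7.2 nm.
lam_o = inter_step_distance(4 * 0.25, 8.0)
print(f"lambda_o at 8 deg off-angle: {lam_o:.1f} nm")

# Al on (0001): lambda < 2 nm < lambda_o/2 -> desorption/surface-diffusion limited
print(rate_limiting_process(2.0, lam_o, preferential_kink_desorption=True))
# C in GaN growth (N/Ga > 1000): kink desorption negligible -> segregation limited
print(rate_limiting_process(50.0, lam_o, preferential_kink_desorption=False))
```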
1,397.2
2022-05-31T00:00:00.000
[ "Materials Science", "Engineering", "Physics" ]
Manipulating dc currents with bilayer bulk natural materials The principle of transformation optics has been applied to various wave phenomena (e.g., optics, electromagnetics, acoustics and thermodynamics). Recently, metamaterial devices manipulating dc currents have received increasing attention; these usually adopt the analogue of transformation optics, using complicated resistor networks to mimic the inhomogeneous and anisotropic conductivities. We propose a distinct and general principle for manipulating dc currents by directly solving the electric conduction equation, which only requires two layers of bulk natural materials. We experimentally demonstrate a dc bilayer cloak and a fan-shaped concentrator, derived from the generalized account of the cloaking sensor. The proposed schemes have been validated as exact devices, and this opens a facile way towards complete spatial control of dc currents. The proposed schemes may have vast potential for various applications, not only in dc, but also in other fields such as the manipulation of magnetic fields, thermal heat, elastic mechanics, and matter waves. Transformation optics has been developed to manipulate EM wave propagation in a practically arbitrary manner. Besides making objects invisible [1][2][3][4], many other novel devices are rapidly emerging, a representative one being a concentrator [5,6] that can enhance the energy density of incident waves in a given area. In addition to the manipulation of EM waves [1][2][3][4][5][6], the theoretical tool of coordinate transformation has been extended to other areas of physics (such as acoustic waves [7], matter waves [8] and elastic waves [9]). Recently, many significant achievements have been made in the manipulation of magnetostatic fields [10][11][12][13][14][15], thermal conduction [16][17][18][19], and electrostatic fields [20][21][22][23][24]. In 2007, Wood and Pendry proposed a dc metamaterial that pointed the way towards the design of a static magnetic cloak [10], and the dc metamaterial was experimentally verified soon afterwards [11]. Recently, the dc magnetic cloak has been theoretically investigated [12] and experimentally realized using superconductors and ferromagnetic materials [13,14]. Using the same materials as the dc magnetic cloak, the theoretical realization of a dc magnetic concentrator has also been demonstrated [15]. On the basis of the form invariance of the heat conduction equation, transformation thermodynamics has been investigated to manipulate diffusive heat flow [16]; through tailoring the inhomogeneity and anisotropy of conductivities, transient thermal cloaking has been experimentally demonstrated [17]. In addition, the manipulation of heat flux with only two kinds of materials (by utilizing a multilayered composite approach) has been reported [18,19]. Recently, a transformation-optics-based dc electric cloak, composed of inhomogeneous and anisotropic conductivities, was implemented using an anisotropic and spatially varying network of resistors [20]. Soon after, an ultrathin dc electric cloak [21] and a dc electric concentrator [22] were reported using similar resistor networks. More recently, an exterior dc cloak [23] and an active dc cloak [24] have been experimentally realized by the use of inhomogeneous and anisotropic conductivities with the aid of active sources. It is noted that EM invisibility cloaks made of L-C networks have also been experimentally demonstrated [25].
Here, we demonstrate the design of novel devices (viz., a cloaking sensor, a bilayer cloak and a fan-shaped concentrator) for manipulating dc currents with natural bulk materials, and we further experimentally realize the bilayer cloak and the fan-shaped concentrator to confirm the proposed methodology. The significance of this work is twofold. First, only the most common bulk materials are employed to construct the proposed devices; this does not involve exotic materials that need to be mimicked with complicated resistor networks [20][21][22][23][24], thus pushing transformation devices a big step further towards practical applications. Second, derived rigorously from the electric conduction equation, our schemes are exact rather than approximate. Furthermore, the proposed devices can be arbitrarily scaled up and down without changing the materials. We begin with the concept of a cloaking sensor for dc currents (which will naturally lead to the design of a bilayer cloak). A cloaking sensor [26] is a sensor wrapped by a shell that is capable of receiving the incoming signal without distorting the external field. Fig. 1(a) demonstrates the concept of a cloaking sensor in dc currents. We consider a round sensor (with radius b) wrapped by a shell (with thickness c−b). The conductivities of the sensor and shell are σ1 and σ2, respectively. A uniform dc current flows along the x-direction with current density J0, which is equivalent to a uniform external electric field E0 applied in the x-direction, with J0 = σb E0. Since the electric potentials satisfy Laplace's equation, substituting the general solution, Eq. (1), into the boundary conditions, Eq. (2), yields the coefficients and the cloaking condition, Eq. (4). Obviously, the cloaking sensor can be successfully achieved as long as Eq. (4) is fulfilled. The drawback of the cloaking sensor is that the cloaking shell has to be changed whenever either the geometrical size or the material of the sensor is changed. Actually, in most cases we only need to render an object invisible, without receiving the incoming signal. Analogously to EM cloaks that block incident waves with a PEC layer, an insulating layer (a<r<b) located between the object and the cloaking shell can prevent the electric current from touching the object. We thus derive a bilayer cloak, as conceptually demonstrated in Fig. 1(d). By setting the shell conductivity to satisfy σ2 = σb(c² + b²)/(c² − b²) (Eq. (5)), an exact bilayer cloak, derived directly from the electric conduction equation, is obtained. Eq. (5) implies that the geometrical size of the bilayer cloak (b and c) can be tuned at will without changing the materials of the outer layer and background (i.e., σ2 and σb are fixed). The experimental realization of a bilayer cloak is schematically illustrated in Fig. 2(a); it is composed of an insulating layer (where a<r<b) and a copper shell (where b<r<c). Here, air is wisely chosen as the insulating layer, which can be very thin due to its good insulating property. The geometrical parameters are a=2.5 cm, b=2.6 cm, c=2.66 cm. In the experimental setup, a gradually changing structure was adopted to transform circular equipotential lines into planar ones in the observation area. The shell of the concentrator is assumed to be homogeneous but anisotropic, characterized by a parameter n, where n>0. Considering that a uniform electric field E0 is externally applied in the x-direction, the potential in the three regions can be obtained analytically, which shows that 100% concentrating efficiency is achieved.
Clearly, the concentrating efficiency of the fan-shaped concentrator is still above 80% even when b/a=100. Furthermore, the fan-shaped concentrator (employing only copper) always keeps the external potential distribution undisturbed no matter what the ratio b/a is, as shown in Figs. 4(c)-4(e). We observe that, outside the concentrator, the potential distribution is not distorted, and is the same as that in a homogeneous material. The current density of the central region can be obtained in terms of J_background, the current density of the background; it is calculated that the current density of the concentrating region is enhanced by more than 80 times when b/a=100. We fabricated the fan-shaped concentrator in Fig. 4(a) with a=1.2 cm and b=6 cm, comprising 36 copper wedges and 36 air wedges. Analogously to the bilayer cloak, a gradually changing structure is also adopted to transform circular equipotential lines into planar ones in the observation area. An exact concentrator has to satisfy two conditions: (1) the external field outside the concentrator should be undisturbed, and (2) the electric field or current density should be focused into a smaller region. To examine the first condition, we measured and simulated the normalized potentials along the observation lines at x = -7 cm and x = 7 cm, corresponding to the dotted black lines in Fig. 5(a). It is apparent that the potential profiles are the same as if there were nothing there. To examine the second condition, we measured and simulated the potential distribution along the observation line at y = 0, as shown in Fig. 5(b). Compared to the pure background with its linear potential distribution, the concentrator makes the voltage change sharply in the central region, which unambiguously demonstrates the concentrating effect. The measurement results agree well with the simulation results, which confirms that our fan-shaped concentrator completely fulfills these two conditions. In summary, we have demonstrated the manipulation of dc currents with natural bulk materials, and have experimentally confirmed the methodology through a bilayer cloak and a fan-shaped concentrator. Our design schemes do not rely on transformation optics, and we can thus avoid the problems present in previous proposals (such as inhomogeneous and anisotropic parameters that need to be mimicked via complicated resistor networks [20][21][22][23][24]). Also, the proposed schemes, derived directly from the electric conduction equation, are exact rather than approximate. Finally, excellent performance can be achieved by employing only natural bulk materials, indicating that our scheme may be readily extended to various applications beyond dc control [7,13,14,18,27,28]. Bilayer cloak in iron To demonstrate that our proposed scheme is robust, we designed and fabricated a bilayer cloak with a background of iron, as shown in the photograph of Fig. S1; equipotential lines (white) and dc current lines (green) are also shown in the panel. The measured results of the bilayer cloak in iron are demonstrated in Fig.
S3, in which (a) and (b) show the normalized potential distribution at the left observation line (where x = -3.2 cm) and the right observation line (where x = 3.2 cm), respectively. Again, the original equipotential lines are significantly distorted by the bare object, and are restored to the original straight lines when the object is wrapped by our bilayer cloak. The measurement results agree very well with the simulations. It is noted that the measured results of the bilayer cloak in stainless steel (Fig. 3) are not as perfect as those in iron (Fig. S3), which is attributed to fabrication errors, because the copper layer in the stainless steel case (0.06 cm) is much thinner than in the iron case (0.5 cm). In the analytic solution, the coefficients of Eq. (3) are constants determined by the boundary conditions, and φi denotes the potential in the different regions: i = 1 for the cloaking region (where r ≤ b), i = 2 for the cloaking shell (where b < r ≤ c) and i = 3 for the exterior region (where r > c). Taking into account that φ3 should tend to the unperturbed potential far from the device, and that the electric potential and the normal component of the current density are continuous across the interfaces, the coefficients follow, with σb being the electric conductivity of the background. Considering that the sensor (central region), cloaking shell, and background are stainless steel, we obtain c = 2.9 cm when b = 2.5 cm according to Eq. (4). The simulated plot reproduced in Fig. 1(b) affirms the concept of the cloaking sensor in dc currents, compared to the case of the bare sensor in Fig. 1(c). We consider the case where the central region (cloaking region) is connected to the ground. The calculated potential distribution of the bilayer cloak is plotted in Fig. 1(e), in which the dc currents (electric-field lines) are also presented. As expected, the dc currents bend conformally around the cloaking region and are restored exactly outside the cloak without distortion, thus rendering the object invisible. When the bilayer cloak is removed, the simulation result for the bare object is demonstrated in Fig. 1(f), in which severe distortions of the potential distribution and dc current lines can be clearly observed. Fig.
2(b) shows the simulated potential distribution of a homogeneous gradually changing structure: it is clear that planar equipotential lines have been successfully obtained in the observation area. The inner layer, outer layer, and background of the fabricated bilayer cloak are air, copper, and stainless steel, respectively. Its geometrical parameters are chosen the same as before: a=2.5 cm, b=2.6 cm, and c=2.66 cm. The simulation result demonstrated in Fig. 2(c) agrees very well with the pure background in Fig. 2(b). In the experimental setup, a dc power supply with 1 V magnitude is used as the source, and the voltage is measured using a FLUKE 45 Dual Display Multimeter. The cloaking performance can be evaluated by measuring the potential distribution along the observation lines (shown as black dotted lines in the insets of Fig. 3) near the bilayer cloak. In the ideal case, the observation lines are straight equipotential lines. The simulated and measured results are shown in Fig. 3, in which (a) and (b) correspond to the normalized potential distribution at the left observation line (where x = -2.8 cm) and the right observation line (where x = 2.8 cm), respectively. As expected, without the cloak, the presence of the object strongly distorts the original equipotential lines. When the central region is wrapped by the bilayer cloak, both forward and backward scattering are eliminated and the potential profiles are restored exactly to the original equipotential lines (represented by straight lines). It is clear that the measurement agrees well with the simulation, which validates our design scheme. To further demonstrate that our proposed scheme is robust, we designed and fabricated a bilayer cloak with a background of iron, as provided in the Supplementary material. An exact cloak has to satisfy two conditions: (1) the external field should be expelled from the cloaked region, and (2) the external field outside the cloak should be undisturbed (as if nothing were there). Our bilayer cloak completely fulfills this ideal case based on the existence of an exact solution of the conduction equation. As conceptually demonstrated in Fig. 1(g), a dc electric concentrator can enhance the electric field and current density in a given region without distorting the external field. The concentrator can be divided into three regions: the focusing region (where r<a), the shell region (where a<r<b), and the external region (where r>b). Both the focusing region and the external region have the same electric conductivity σb. We assume that the electric conductivity of the shell region is homogeneous but anisotropic. Fig. 1(h) shows the simulated potential distribution with the same geometrical parameters. Figure 1. Demonstration of novel bilayer dc devices for the manipulation of dc currents. (a) Cloaking sensor. (b) Simulation result of the cloaking sensor. (c) Simulation result of a bare sensor. (d) dc bilayer cloak. (e) Simulation result of the bilayer cloak. (f) Simulation result of a bare and grounded object. (g) dc electric concentrator. (h) Simulation result of the concentrator. (i) Simulation result of a pure background without the concentrator. dc currents are represented by arrow lines in the panels. Figure 2. (a) Schematic illustration of the experimental realization of the dc bilayer cloak. (b) Potential distribution of a homogeneous gradually changing structure. (c) Potential distribution for the practical bilayer cloak in the experiment. Equipotential lines are also shown in white in the panels. Figure 3.
Simulation and experimental results of the bilayer cloak with a=2.5 cm, b=2.6 cm, c=2.66 cm. (a) Normalized potential distribution at the left observation line x=-2.8 cm, presenting the backward scattering. (b) Normalized potential distribution at the right observation line x=2.8 cm, presenting the forward scattering. Insets denote the position of the observation lines. Figure 4. (a) Schematic illustration of the experimental realization of the fan-shaped concentrator. (b) Calculated concentrating efficiency for the ideal concentrator with n=4.5 and the fan-shaped concentrator in (a). Simulated potential distribution for different b/a: (c) b/a=2, (d) b/a=5, (e) b/a=10. Equipotential lines are also shown in white in the panels. Figure 5. Simulation and measurement results of the fan-shaped concentrator with a=1.2 cm and b=6 cm. (a) Normalized potential distribution at the observation lines x=-7 cm and x=7 cm, presenting the backward and forward scattering, respectively. (b) The simulated and measured potential distributions along the line y=0. Insets denote the position of the observation lines. Fig. S1 shows the photograph of the fabricated bilayer cloak in iron; the inner layer and outer layer are still air and copper, and the geometrical parameters are a=2.5 cm, b=2.6 cm, c=3.1 cm. Simulated results are presented in Fig. S2, in which (a) and (d) illustrate the cloaking region without and with the bilayer cloak, respectively. As expected, when the object is wrapped by the bilayer cloak, the equipotential lines and dc current lines outside the cloak are restored exactly, without distortion, thus rendering the object invisible. Figs. S2(b) and S2(c) show the simulated results for the object with only a single layer of air or copper, respectively; significant distortions can be clearly observed. Figure S1. Photograph of the fabricated bilayer cloak in iron. Black dotted lines denote the observation lines. Figure S2. Simulation results for the bilayer cloak in iron with a=2.5 cm, b=2.6 cm, and c=3.1 cm. Figure S3. Simulation and experiment results of the bilayer cloak with an iron background, with a=2.5 cm, b=2.6 cm, and c=3.1 cm.
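As a numerical sanity check of the cloaking condition reconstructed above as Eq. (5), σ2 = σb(c² + b²)/(c² − b²), the sketch below solves for the outer radius c given b and handbook conductivities (the conductivity values are textbook figures we supply, not from the paper). The results land close to the fabricated c = 2.66 cm for the stainless steel background and c = 3.1 cm for the iron background.

```python
import math

# Bilayer dc cloak: with the insulating inner layer in place, the copper-shell
# conductivity sigma2 and the background sigma_b fix the radius ratio through
# Eq. (5) as reconstructed above: sigma2 = sigma_b*(c**2 + b**2)/(c**2 - b**2).
# Conductivities are approximate room-temperature handbook values (assumed).
SIGMA_COPPER = 5.96e7      # S/m
SIGMA_STEEL = 1.4e6        # S/m, stainless steel (grade-dependent)
SIGMA_IRON = 1.0e7         # S/m

def outer_radius(b_cm, sigma_shell, sigma_background):
    """Solve Eq. (5) for c given b and the shell/background conductivities."""
    k = sigma_shell / sigma_background
    return b_cm * math.sqrt((k + 1.0) / (k - 1.0))

for name, sigma_bg, c_paper in [("stainless steel", SIGMA_STEEL, 2.66),
                                ("iron", SIGMA_IRON, 3.1)]:
    c = outer_radius(2.6, SIGMA_COPPER, sigma_bg)
    print(f"{name} background: c = {c:.2f} cm (fabricated: {c_paper} cm)")
```

That both fabricated radii emerge from the same formula and the same copper shell illustrates the paper's claim that the device can be resized without changing the materials.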
3,637.2
2014-02-25T00:00:00.000
[ "Materials Science", "Physics", "Engineering" ]
Identification of ultra-high-frequency PD signals in gas-insulated switchgear based on moment features considering electromagnetic mode: The feature extraction and pattern recognition techniques are of great importance for assessing the insulation condition of gas-insulated switchgear. In this work, the ultra-high-frequency partial discharge (PD) signals generated by four types of typical insulation defects are analysed using the S-transform, and the greyscale image in the time-frequency representation is divided into five regions according to the cutoff frequencies of the TE_m1 modes. Then, the three low-order moments of every subregion are extracted and feature selection is performed based on the J criterion. To confirm the effectiveness of the selected moment features after considering the electromagnetic modes, the support vector machine, k-nearest neighbour and particle swarm-optimised extreme learning machine (ELM) are utilised to classify the type of PD, achieving recognition accuracies of 92, 88.5 and 95%, respectively. In addition, the results show that the ELM offers good generalisation performance at the fastest learning and testing speeds, and is thus more suitable for real-time PD detection.
Introduction
Gas-insulated switchgear (GIS) is a compact, metal-encapsulated switchgear consisting of high-voltage components such as circuit-breakers and disconnectors. It is widely used in power systems due to its high reliability, low maintenance and compact size [1]. A dielectric breakdown in GIS could result in serious outages and thereby cause enormous economic losses [2]. Partial discharge (PD) activity often occurs before insulation failure [3]. Thus, PD detection is of great significance for diagnosing incipient faults in GIS [4]. Among the multifarious measurement methods, the ultra-high-frequency (UHF) method has attracted increasing attention because of its high sensitivity and strong anti-interference capability [5]. Different types of PD cause harm of varying degrees. For example, damage caused by PD arising from a protrusion or a void in the epoxy resin is more dangerous than that from floating metal or bouncing particles [6]. Therefore, PD source identification can provide a guideline for the maintenance strategy. Generally, the patterns used to recognise PD in GIS fall into two categories, i.e. phase-resolved PD (PRPD) [1] and time-resolved PD (TRPD) [4,6]. Sometimes it is difficult to obtain the phase information of PD in the field [7]. Besides, the PRPD mode requires massive memory space to record data over up to one thousand power-frequency cycles [8]. In contrast, the TRPD mode merely needs to acquire a single PD waveform. Since a PD pulse is typically a transient and non-stationary signal, a separate time or frequency description cannot offer complete information. Comparatively, time-frequency (TF) analysis is a more powerful tool for characterising PD signals [9-12]. The short-time Fourier transform (STFT) [9], the wavelet transform (WT) [10] and the S-transform (ST), which uniquely combines frequency-dependent resolution with an absolutely referenced phase [11,12], have been employed to extract PD feature parameters. However, these features have been applied to the recognition of PD sources in power transformers or cables only. Due to the coaxial structure, the propagation properties of electromagnetic (EM) waves in GIS are distinctive.
For instance, the transverse EM (TEM) mode has no cutoff frequency f_c, whereas the high-order modes, including the transverse electric (TE) and transverse magnetic (TM) modes, each have a corresponding f_c [13]. In addition, when UHF signals passed through an insulation spacer, the attenuation was mainly due to the superposition of the reduced TE and TM modes [13], and in the case of a disconnecting part, the TEM mode component was reflected, whereas the higher-frequency components above the f_c of TE_11 could propagate [14]. Furthermore, it was found that the TEM mode component became the main component after the UHF signals passed through an L-shaped branch [15]. Therefore, the transmission characteristics of the EM modes have a significant influence on the detected UHF signals. Nevertheless, the existing methods of extracting features of UHF PD signals in GIS [16-19] do not take into account the impact of the EM modes on the recognition accuracy of the PD source. In our work, a method that uses the EM modes to divide the TF plane is investigated to improve the accuracy. Various classification techniques, such as k-means clustering [6], fuzzy c-means clustering [20], the probabilistic neural network [21], the support vector machine (SVM) [22] and the k-nearest neighbour (KNN) [23], have been applied to the PD source identification of high-voltage equipment. Nevertheless, the training and/or testing phases of these classifiers are time-consuming. Recently, a novel learning algorithm for the single-hidden-layer feedforward neural network (SLFN), the so-called extreme learning machine (ELM) [24], has been shown to process large datasets efficiently with excellent classification capability. However, in [24], the input weights and biases are randomly generated; since these critical parameters have a great influence on the performance of the ELM, they should be tuned. The novelty and contribution of this paper are as follows. (i) According to the cutoff frequencies of the TE_m1 modes in the range 0-2 GHz, the TF plane is divided into five regions, and the three low-order moments of every subregion are extracted as the original feature space. For the ELM, SVM and KNN classifiers, the recognition accuracies based on the six moments selected through the J criterion are improved by 26.5, 29 and 37%, respectively, compared to those based on the three low-order moments of the whole greyscale image. This demonstrates that the selected feature parameters significantly improve the recognition accuracy of the PD source in GIS; the image segmentation method based on the cutoff frequencies of the EM modes is thus very effective because it accounts for the special coaxial structure of GIS. (ii) In our work, the input weights and biases are tuned by the particle swarm optimisation (PSO) technique rather than generated randomly. When the ELM is applied to GIS PD pattern recognition, it provides comparable generalisation performance at the fastest testing speed compared to the SVM and KNN, and is thereby more appropriate for real-time PD online monitoring. Moreover, to our knowledge this is the first time the PSO-ELM algorithm has been applied to GIS PD source identification. This paper is organised as follows. Section 2 deals with the UHF PD measurement system. The procedures of signal preprocessing and feature extraction are elaborated in Section 3, followed by a brief introduction of the ELM optimised by PSO in Section 4.
Furthermore, based on the moment features, the performance of the ELM is investigated and compared with that of the SVM and KNN classifiers in Section 5. Finally, Section 6 draws the conclusions. Fig. 1 depicts four types of artificial insulation defects. The metallic protrusion defect consists of a needle-to-plane electrode pair; the floating metal defect is realised by suspending an insulation ring to fasten a floating brass electrode, where the HV electrode and the floating brass use tapered tips to discharge easily and the plane electrode has rounded corners to avoid corona discharge; to imitate the free metallic particle defect, fifteen aluminium foils of various diameters are placed in a bowl-shaped electrode; the void defect is composed of three layers of epoxy resin, of which the middle one has a small hole. Although the designed artificial defect models are not exactly the same as the real insulation defects found in practice, their PD mechanisms [25-27] are identical. Fig. 2 presents the experimental setup of the UHF PD detection. The artificial insulation defect is placed in a perspex glass container to generate the PD signal. The PD source, comprising the insulation defect and the container, is located at the left end of the GIS. The central conductor diameter and enclosure diameter of the 220 kV GIS model are 90 and 320 mm, respectively. A disc coupler is used as the UHF antenna. Due to the limited measuring range of the gigahertz TEM cell [28], its effective height, which characterises the sensitivity to the incident electric field, is measured only in the range of 0.2-2 GHz, and the results are plotted in Fig. 3. A Keysight DSO9404A digital oscilloscope, with an analogue bandwidth of 4 GHz and a sampling rate set to 10 GSa/s, is utilised to acquire the PD signals. To ensure the diversity of samples from the same defect, two experimental parameters are controlled. One is the test voltage. In our experiments, three test voltages are applied for each class of defect, and 100 samples are acquired at each test voltage. For example, the three test voltages of the void defect are 18, 19 and 20 kV. Changing the test voltage results in different discharge quantities, which reflect the severity of the discharge. The other is the duration of the applied voltage. The PD waveforms, such as the signal amplitude, vary slightly with the duration of the applied voltage. In the experiments, 10 PD pulses are acquired continuously at intervals of ten minutes. Thus, these samples are representative. The PD inception voltage (PDIV), test voltage and number of samples are given in Table 1. The PDIV mainly depends on the structure and dimensions of the defect models. Fig. 4 shows typical UHF signals after wavelet-based denoising [29], and Fig. 5 gives the corresponding normalised frequency spectra. For the protrusion defect, the energy of the UHF signals is mostly distributed in the range of 0.4-1.9 GHz; for the floating metal defect, the energy is within 0.3-1.4 GHz; for the particle defect, within 0.5-1.8 GHz; and for the void defect, within 0.5-1.3 GHz, with the shortest duration.
Signal preprocessing and feature extraction
In this work, the PD source identification is split into seven steps, and the flowchart is shown in Fig. 6.
S-transform
In the ST, the width of the time window varies inversely with frequency, resulting in a higher frequency resolution at low frequencies and a higher temporal resolution at high frequencies.
Thus, the ST integrates the merits of the STFT and the WT [11,12]. The ST of a signal x(t) is defined as

S(τ, f) = ∫_{-∞}^{+∞} x(t) (|f|/√(2π)) exp(-(τ - t)²f²/2) exp(-i2πft) dt,  (1)

where f represents the frequency of the UHF signal and τ is a parameter that controls the location on the time axis. Through the ST, a modulo operation and min-max normalisation, the TF matrix is projected into a greyscale image, as shown in Fig. 7. The deeper the red colour, the larger the magnitude of the instantaneous frequency content of the UHF signal; conversely, the deeper the blue colour, the smaller the magnitude.
TF plane division
GIS, which consists of a central conductor and an enclosure, is modelled as a cylindrical coaxial waveguide. Theoretically, the cutoff frequency of the TEM mode, f_c, is 0 Hz, and all frequencies of the TEM mode from direct current upwards can exist in GIS. However, for the high-order modes, only frequencies above the f_c of the corresponding mode can propagate. The f_c of a TE mode is obtained from the characteristic equation [30]

J'_n(ka) Y'_n(kb) - J'_n(kb) Y'_n(ka) = 0,  (2)

where k = 2πf_c/c is the cutoff wavenumber, a and b are the radii of the central conductor and the enclosure, respectively, and J'_n and Y'_n are the first derivatives of the nth-order Bessel functions of the first and second kind, respectively. Table 2 lists the cutoff frequencies of the TE_m1 (m = 1, 2, 3, 4 or 5) modes below 2 GHz. The higher the order of the TE mode, the greater the cutoff frequency. In addition, the TEM mode component in GIS propagates at the speed of light, but the high-order modes display velocity dispersion and their velocities depend on frequency. The propagation velocity of a TE mode is given by

v = c √(1 - (f_c/f)²),  (3)

where c is the speed of light in free space. Fig. 8 presents the velocities of the TEM and TE_m1 modes. It is observed that all frequency components of the TEM mode can propagate, whereas each TE_m1 mode has a corresponding cutoff frequency, which is consistent with Table 2. Moreover, the higher the order of a TE_m1 mode, the slower its propagation velocity at the same frequency. Considering the different propagation velocities and attenuation characteristics of the EM modes in GIS, the TF plane is divided into five rectangular regions according to the cutoff frequencies of the TE_m1 modes. The split lines of these regions along the frequency axis are at 0 Hz, 479 MHz, 1249 MHz, 1586 MHz and 2 GHz, as illustrated in Fig. 9.
Feature extraction
In our work, the grey-level distribution of the image is characterised by its three low-order moments, i.e. the mean, the standard deviation and the third root of the skewness. Moments extracted from the whole image lack spatial information. Therefore, the greyscale image is first divided into five regions according to the EM modes, and then the three low-order moments of every subregion are computed as the original feature space:

u_i = (1/N_i) Σ_j p_{i,j},  (4)

σ_i = [(1/N_i) Σ_j (p_{i,j} - u_i)²]^(1/2),  (5)

s_i = [(1/N_i) Σ_j (p_{i,j} - u_i)³]^(1/3),  (6)

where u_i is the mean of the ith region, σ_i is the standard deviation of the ith region, s_i is the third root of the skewness of the ith region, N_i is the total number of pixels in the ith region and p_{i,j} is the grey level of the jth pixel in the ith region. The mean represents the average grey level of the image; the larger the mean, the greater the energy of the UHF PD signals. The standard deviation measures how far the pixel values are spread out from their mean; the greater the standard deviation, the more dispersed the data around the mean. The skewness describes the degree of symmetry of the pixel values around their mean; the larger the skewness, the less symmetric the distribution of the data.
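To make the cutoff computation and the region-wise moment extraction concrete, here is a minimal sketch (Python with NumPy/SciPy; the GIS radii come from the dimensions quoted earlier, while the function names, the root-bracketing heuristic and the choice of boundary modes are our own illustration rather than the paper's MATLAB code):

```python
import numpy as np
from scipy.special import jvp, yvp
from scipy.optimize import brentq

C0 = 3.0e8           # speed of light, m/s
A, B = 0.045, 0.160  # radii of central conductor and enclosure, m (90/320 mm diameters)

def te_char(k: float, n: int) -> float:
    """Characteristic equation for the TE_n1 cutoff in a coaxial waveguide:
    J'_n(ka) Y'_n(kb) - J'_n(kb) Y'_n(ka) = 0."""
    return jvp(n, k * A) * yvp(n, k * B) - jvp(n, k * B) * yvp(n, k * A)

def te_cutoff(n: int) -> float:
    """First root of the characteristic equation, bracketed around the
    textbook approximation k ~ 2n/(a+b) for TE_n1."""
    k0 = 2 * n / (A + B)
    k = brentq(te_char, 0.5 * k0, 1.5 * k0, args=(n,))
    return C0 * k / (2 * np.pi)            # cutoff frequency in Hz

def region_moments(S_mag: np.ndarray, freqs: np.ndarray, edges: list) -> np.ndarray:
    """Split |S| (frequency x time) at the given frequency edges and return
    (mean, std, cube-root skewness) per region: the paper's u_i, sigma_i, s_i."""
    feats = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        p = S_mag[(freqs >= lo) & (freqs < hi), :].ravel()
        u = p.mean()
        feats += [u, p.std(), np.cbrt(((p - u) ** 3).mean())]
    return np.array(feats)

# Using TE_11..TE_41 as interior boundaries and capping at 2 GHz gives five regions;
# the paper reports split lines at 479, 1249 and 1586 MHz for this geometry.
edges = [0.0] + [te_cutoff(m) for m in (1, 2, 3, 4)] + [2.0e9]
```

Any S-transform implementation can supply the magnitude matrix S_mag; the sketch only assumes its rows are indexed by the frequency vector freqs.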
These properties are the reason why the three low-order moments are chosen as features.
Feature selection
In order to address the issue caused by dimensionality, the J criterion is utilised to reduce the dimensionality of the original feature space [31]. The J-value is the ratio of the between-class scatter S_b to the within-class scatter S_w. For an L-class problem, the J-value of a feature is calculated as

J = S_b / S_w = [Σ_{c=1}^{L} (σ_c/σ)(m_c - m_0)²] / [Σ_{c=1}^{L} (σ_c/σ) d_c²],  (7)

where σ_c is the number of samples belonging to class c, σ is the total number of samples, m_c is the mean of the selected feature for class c, m_0 is the mean of the selected feature over all samples, and d_c is the standard deviation of the selected feature for class c. The J-values of all 15 moment features are shown in Fig. 10. A greater J-value denotes that the feature parameter has a better capability to separate the different classes. Applying this criterion, six features with J-values > 0.2, namely σ_2, σ_4, σ_5, s_2, s_4 and s_5, are chosen as the input of the classifiers; they are plotted in Fig. 11.
Principle of ELM
The ELM is a highly efficient learning algorithm [24]. Given σ observations (x_i, y_i), where x_i = [x_i1, x_i2, …, x_in]^T ∈ R^n and y_i = [y_i1, y_i2, …, y_il]^T ∈ R^l, the output of an SLFN with M neurons in the hidden layer is expressed as

Σ_{i=1}^{M} β_i g(w_i · x_j + b_i) = o_j, j = 1, …, σ,  (8)

which can also be written in the matrix form Hβ = Y, where g(x) is the activation function, w_i = [w_i1, w_i2, …, w_in]^T is the input weight vector linking the ith hidden node and the input nodes, b_i is the bias of the ith hidden node, β_i = [β_i1, β_i2, …, β_il]^T is the output weight vector linking the ith hidden node and the output nodes, and H is called the hidden-layer output matrix. In the ELM algorithm, the w_i and b_i are randomly generated, and the output weight matrix is estimated as

β = H† Y,  (9)

where H† is the Moore-Penrose generalised inverse of H.
Parameter optimisation of ELM
To obtain better generalisation performance, the parameters of the hidden layer of the ELM need to be optimised. As an evolutionary algorithm, the PSO possesses a strong global search ability that avoids local optima. In our work, the PSO is utilised to obtain the optimal input weights w = [w_11, w_12, …, w_1n, w_21, …, w_2n, …, w_Mn] ∈ [-1, 1] and input biases b = [b_1, b_2, …, b_M] ∈ [0, 1]. Each particle (w, b) is a potential solution in the (n × M + M)-dimensional space. At each iteration, it is manipulated using [32]

V_k^{h+1} = e V_k^h + c_1 r_1 (pBest_k - X_k^h) + c_2 r_2 (gBest - X_k^h),  (10)

X_k^{h+1} = X_k^h + V_k^{h+1},  (11)

e = e_max - (e_max - e_min) h / h_max,  (12)

where h is the current iteration number; V_k^h and X_k^h ∈ R^{n×M+M} refer to the velocity and position of particle k at the hth epoch, respectively; c_1 and c_2 represent the acceleration constants; r_1 and r_2 denote two random numbers within (0, 1); pBest_k is the best position of particle k; gBest is the global best position of the swarm; e is the current inertia weight; e_max and e_min are the initial and final inertia weights, respectively; and h_max is the maximum number of iterations. In addition, the velocity and position are limited within certain ranges, i.e. V_k^{h+1} ∈ [V_min, V_max] and X_k^{h+1} ∈ [X_min, X_max]. Additionally, for each particle, the root mean square error (RMSE) [24] is calculated as its fitness value:

RMSE = √(Σ_{i=1}^{σ} ||y_i - o_i||² / (σ l)),  (13)

where y_i is the expected output of the ELM and l is the length of the vector y_i. The pseudocode for the optimisation procedure is given in Fig. 12.
Results and discussion
The simulations are carried out in a MATLAB 2016b environment running on an Intel Core i5-6500 CPU with a clock speed of 3.2 GHz. Each class of measured PD signals is randomly divided into two groups, one of which contains 200 samples as the training dataset and the other 100 samples as the testing dataset. Hence, the numbers of total training and testing samples are 800 and 400, respectively.
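As an aside before the results: the core of the ELM described above reduces to a few lines of linear algebra. A minimal sketch (NumPy; the toy data and one-hot targets are our illustration rather than the paper's MATLAB code, and the PSO tuning of W and b is omitted):

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine: random hidden layer + pseudo-inverse."""

    def __init__(self, n_hidden: int = 16, seed: int = 0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X: np.ndarray) -> np.ndarray:
        # 'sig' activation, the function the paper ultimately selects
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X: np.ndarray, Y: np.ndarray) -> "ELM":
        n_features = X.shape[1]
        # In plain ELM these are random draws; the paper tunes them with PSO instead.
        self.W = self.rng.uniform(-1.0, 1.0, size=(n_features, self.n_hidden))
        self.b = self.rng.uniform(0.0, 1.0, size=self.n_hidden)
        H = self._hidden(X)
        # Output weights via the Moore-Penrose pseudo-inverse: beta = pinv(H) @ Y
        self.beta = np.linalg.pinv(H) @ Y
        return self

    def predict(self, X: np.ndarray) -> np.ndarray:
        return np.argmax(self._hidden(X) @ self.beta, axis=1)

# Toy usage with the paper's dimensions: 6 selected moments, 4 defect classes.
X_train = np.random.rand(800, 6)
labels = np.random.randint(0, 4, size=800)
Y_train = np.eye(4)[labels]            # one-hot targets
model = ELM(n_hidden=16).fit(X_train, Y_train)
pred = model.predict(np.random.rand(400, 6))
```

Replacing the random draws of W and b with PSO-optimised values, as the paper does, changes only the fit step; prediction is unchanged.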
Fig. 13 plots the number of hidden nodes versus the training accuracy under different activation functions. To avoid overfitting, the optimal number of hidden neurons is first estimated according to the empirical formula in [33]. The training accuracy rises with an increasing number of hidden nodes, but when the number of hidden nodes exceeds 16, the training accuracy grows at a snail's pace no matter the type of activation function. Moreover, the 'sig' function performs best, whereas the 'hardlim' function performs worst. Ultimately, the 'sig' function is selected and the number of hidden nodes is set to 16. The key parameters of the two algorithms are listed in Table 3. The fitness during the optimisation process is illustrated in Fig. 14. Part of the output results of the ELM are listed in Table 4. For the second case, the correct class should be the protrusion defect, whereas it is misclassified as a particle defect. Similarly, for the sixth case, the correct class should be the particle defect, whereas it is misclassified as a floating metal defect. Accordingly, the classification results from the ELM are shown in Table 5: of the particle-defect samples, six are misclassified as the protrusion defect, one as the floating defect and four as the void defect. It can be seen that the main classification errors are concentrated on the particle defect type. This is attributed to the great dispersity of the UHF signals induced by free metallic particles. For the ELM classifier, the overall accuracy is 95%. The multiclass SVM classifier model [34] is established to directly achieve multiclass classification using the LIBSVM 3.22 toolbox. Its kernel function is set to the radial basis function, and the penalty factor C and kernel parameter γ are selected by ten-fold cross-validation and a grid search in the range from 2^-8 to 2^8. From Fig. 15, the highest training accuracy reaches 98.75% when C = 2^4 and γ = 2^6. Regarding the KNN, its highest testing accuracy reaches 88.5% when the distance type is set to the Euclidean distance and the number of nearest neighbours is equal to 5, determined through considerable trials. Table 6 lists the centroids of the training dataset for the six moment features. After building the SVM and KNN classifier models, the remaining samples are tested. Table 7 summarises the overall accuracy and time cost of the three classifiers. It can be observed that although the training accuracy of the SVM is slightly superior to that of the ELM, the ELM performs best in testing accuracy. Since K-fold cross-validation initially divides the data into K subsets at random, there is still a possibility of a non-representative data split being used for training and validation [35], with the result that the testing accuracy of the SVM is 6.75% lower than its training accuracy. From the perspective of time cost, the ELM requires only 5.25% of the training time taken by the SVM, and the testing speed of the ELM is 21.33 and 9.67 times faster than those of the SVM and KNN, respectively. Tables 8 and 9 list the precision and recall, respectively. For each class of defect, the three classifiers have high precision and recall at the same time, demonstrating that overfitting does not occur. In addition, Table 10 lists the computational complexity.
Since the KNN is a type of lazy learning [37], no explicit training step is required, so for the training phase both the time cost and the computational complexity are zero. The parameters in Table 10 are defined as follows: D_i = 6 (the dimension of the input features). Thereby, for the training and testing phases of the ELM, the complexities are O(32,000) and O(160), respectively; for the testing phase of the KNN, the complexity is O(1200). Besides, the random allocation of training and testing samples leads to a varying number of support vectors at each run [36]. Fig. 16 plots the three low-order moments of the whole greyscale image, based on which the training and testing accuracies are tabulated in Table 11. For the ELM and SVM, the training accuracies decline by 26.25% and 25.25%, respectively, compared with those based on the selected six moments; for the three classifiers, the testing accuracies drop by 26.5, 29 and 37%, respectively. This indicates that the selected six moments are far more effective for recognising the PD type. In the stage of feature selection, in order to determine an appropriate threshold, the 15 features in Fig. 10 are reordered in descending order of J-value and listed in Table 12; subsequently, the influence of the number of features on the training accuracy of the ELM is investigated and the results are plotted in Fig. 17. As the number of features increases, the accuracy rises at first and then declines. The critical number is 6. Here, the J-values of the sixth feature, i.e. σ_2, and the seventh feature, i.e. u_4, are 0.347 and 0.198, respectively. Therefore, the threshold is set to 0.2, and when the J criterion is applied in industrial practice, all features with a J-value greater than the threshold can be chosen as the input of the classifier.
Conclusions
This paper presents a novel feature extraction method that considers the EM modes to analyse UHF PD signals. The conclusions are drawn as follows: (i) Considering the propagation properties of EM waves in GIS, the TF plane obtained through the ST is divided into five regions according to the cutoff frequencies of the TE_m1 modes. The mean, the standard deviation and the third root of the skewness of every subregion are computed as the original feature parameters, with dimensionality reduction performed using the J criterion. The high recognition accuracies of the ELM, SVM and KNN demonstrate the effectiveness of the selected moments. (ii) The recognition accuracies of the three classifiers based on the selected moments are improved by 26.5, 29 and 37%, respectively, compared with those based on the three low-order moments of the whole greyscale image. This indicates that the method of dividing the TF plane by EM modes significantly improves the accuracy. (iii) The training times of the ELM, SVM and KNN are 16, 305 and 0 ms, respectively; correspondingly, the testing times are 3, 67 and 32 ms. Thus, compared to the SVM and KNN, the ELM possesses the fastest testing speed along with a satisfactory learning speed, and is more suitable for the real-time detection of PD.
In view of the difference between the real PD environment in the field and our experimental conditions, future studies will focus on obtaining more representative UHF signals to verify the robustness of the selected moment features considering EM modes, by monitoring on-site PD activities or by changing the relative position between the PD source and the UHF sensor, such as the angle in the circumferential direction and the distance in the axial direction. Acknowledgments This work is supported by the National Natural Science Foundation of China (Nos. 51677061 and 51507058).
5,794.6
2020-02-01T00:00:00.000
[ "Engineering", "Physics" ]
Antiviral Effects of ABMA against Herpes Simplex Virus Type 2 In Vitro and In Vivo Herpes simplex virus type 2 (HSV-2) is the causative pathogen of genital herpes and is closely associated with the occurrence of cervical cancer and human immunodeficiency virus (HIV) infection. The absence of an effective vaccine and the emergence of drug resistance to commonly used nucleoside analogs emphasize the urgent need for alternative antivirals against HSV-2. Recently, ABMA [1-adamantyl (5-bromo-2-methoxybenzyl) amine] has been demonstrated to be an inhibitor of several pathogens exploiting host-vesicle transport, which also participates in the HSV-2 lifecycle. Here, we showed that ABMA inhibited HSV-2-induced cytopathic effects and plaque formation with 50% effective concentrations of 1.66 and 1.08 μM, respectively. We also preliminarily demonstrated in a time-of-compound-addition assay that ABMA exerts a dual antiviral mechanism, impairing virus entry as well as the late stages of the HSV-2 lifecycle. Furthermore, in vivo studies showed that ABMA protected BALB/c mice from intravaginal HSV-2 challenge, with an improved survival rate of 50% at 5 mg/kg (8.33% for the untreated virus-infected control). Consequently, our study has identified ABMA as an effective inhibitor of HSV-2, both in vitro and in vivo, for the first time and presents an alternative to nucleoside analogs for the treatment of HSV-2 infection.
Introduction
Genital herpes is one of the world's most prevalent sexually transmitted diseases [1], and manifests as ulcerative and vesicular lesions on the genitals with lifelong latency [2]. Herpes simplex virus type 2 (HSV-2), an enveloped virus with a single, large double-stranded DNA genome belonging to the Herpesviridae family, is the major cause of genital herpes [3], and significantly increases the risk of developing cervical cancer and HIV infection [4-6]. HSV-2 infection is a global concern, with an estimated 536 million people infected worldwide and an annual incidence of 23.6 million cases [7]. Despite the prevalence of infection in the global population, no vaccine has been developed, and antiviral chemotherapy is standard practice in the management of HSV-2 infection [8,9]. However, long-term therapy with acyclovir and penciclovir, as well as their respective prodrugs valaciclovir and famciclovir, has led to the emergence of drug resistance, especially in immunocompromised patients [10]. Additionally, various cases of toxicity have been encountered as a result of the increasing use of traditional antivirals [11,12]. Although some non-nucleoside inhibitors have been developed, few are currently approved for the treatment of HSV-2 infection [13]. Foscarnet is approved as a second-line drug for HSV-2 infection only when the patient has failed first-line treatment with acyclovir or there is a proven resistance mutation, and its use is limited by its toxicity and by the fact that it is available only as an intravenous formulation [14]. Therefore, alternative antivirals against HSV-2 are needed. The small molecule ABMA [1-adamantyl (5-bromo-2-methoxybenzyl) amine] was first identified in a cell-based high-throughput screen as an inhibitor of ricin, both in cell cultures and in mice, acting selectively on host-endosomal trafficking [15].
Subsequently, ABMA has been reported to be active against other infectious pathogens, including bacterial toxins (diphtheria toxin from Corynebacterium diphtheriae, lethal toxin from Bacillus anthracis, toxin B from Clostridium difficile and lethal toxin from Clostridium sordellii), viruses (Ebola virus, rabies virus and dengue-4 virus), bacteria (Simkaniaceae and Chlamydiaceae) and the Leishmania parasite [15]. Each of these pathogens relies on host-endosomal trafficking for pathogenicity, indicating that the inhibitory effect of ABMA is related to host-vesicle transport [16-18]. HSV-2 initiates infection by attaching to host cells, followed by membrane fusion or endocytosis to enter the cells. Subsequently, de-enveloped tegument-capsids are transported to the nuclei, where genome transcription, DNA replication and new capsid assembly occur. Filled capsids then bud into vesicles derived from the trans-Golgi network to obtain an envelope and an outer membrane after release from the nuclei. Finally, viruses exit the host cells by fusion of the enveloped-virus-containing vesicles with the cell membranes. Several processes in the HSV-2 lifecycle, including endocytic virus entry, virus capsid envelopment and virus egress, depend on host-vesicle transport, providing a rationale for testing ABMA as an inhibitor of HSV-2. In this study, we evaluated the antiviral activity of ABMA against HSV-2, in vitro and in vivo, and provided data to address possible mechanisms of action. Chloroquine, which has been reported to inhibit herpes simplex virus infection by interfering with endocytic virus entry and the late stages of infection, and acyclovir, which is commonly used for the treatment of HSV-2 infection, were chosen as positive control drugs in this study [19-24]. ABMA was demonstrated to be an effective inhibitor of HSV-2 with a dual mechanism of action, acting on virus entry as well as the late stages of infection.
HSV-2 strain G obtained from the ATCC (cat # VR-734) (KU310668) was propagated in Vero cells. Virus titration was performed by endpoint dilution and plaque assays. Specific-pathogen-free female BALB/c mice (6-8 weeks old) were obtained from the Changchun Institute of Biological Products and maintained under the guidelines for animal experiments at Jilin University, China.
Cytotoxicity Assay
Cytotoxicity was measured by the CellTiter-Glo® Luminescent cell viability assay, as reported previously [25]. Serially diluted compounds were added to 90% confluent Vero cells in 96-well plates. After incubation for 72 h, cell viability was assayed with the CellTiter-Glo® reagent (Promega, Madison, WI, USA) and quantified with a PerkinElmer VICTOR™ X2 (Waltham, MA, USA). Cytotoxicity was measured as the percentage of the luminescence intensity of compound-treated cells relative to that of the untreated cell control. The 50% cytotoxicity concentration (CC50) was calculated by regression analysis of the dose-response curves [26].
Antiviral Activity Assay of ABMA against HSV-2 In Vitro
The antiviral activity of ABMA against HSV-2 was measured by cytopathic effect (CPE) inhibition and plaque reduction assays.
In the CPE inhibition assay, serially diluted ABMA or chloroquine (positive control drug) was added to 90% confluent Vero cells in 96-well plates 5 h before infection with HSV-2 (MOI = 0.04, which was determined to cause appropriate CPE (100%) and cell viability reduction (60%) in Vero cells after infection for 72 h), while acyclovir (positive control drug) was added at the same time as infection. After incubation for 72 h post-infection in the presence of the compounds, cell viability was measured as described in Section 2.2. In the plaque reduction assay, serially diluted ABMA or chloroquine (positive control drug) was added to Vero cell monolayers in 12-well plates 5 h before infection with HSV-2 (50-100 PFU, which was determined to ensure that appropriate numbers of plaques form in the plates and can be counted accurately), while acyclovir (positive control drug) was added at the same time as infection. After infection for 1 h, DMEM-2% FBS-1% low-melting agarose containing the above compounds at the corresponding concentrations was overlaid in place of the infection medium. Plaque numbers were counted after the cells were fixed with 4% paraformaldehyde and stained with 0.5% crystal violet once plaques had formed. CPE inhibition and plaque reduction rates were calculated by the following equations:

CPE inhibition (%) = (T − V) / (C − V) × 100%,

where C, V and T are the luminescence intensities of the untreated cell control, the untreated virus-infected control and the compound-treated cells, respectively;

Plaque reduction (%) = [1 − (plaque number)T / (plaque number)V] × 100%,

where (plaque number)T and (plaque number)V are the plaque numbers of the compound-treated cells and the untreated virus-infected control, respectively. The 50% effective concentration (EC50), which refers to the concentration of a drug that induces a response halfway between the baseline and the maximum after a specified exposure time, was calculated by regression analysis of the dose-response curves [26].
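Where the EC50 is obtained "by regression analysis of the dose-response curves", one common concrete choice is a four-parameter logistic fit; a minimal sketch follows (Python/SciPy; the Hill-curve form and the toy data are our assumptions, not the authors' exact regression):

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ec50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** hill)

# Hypothetical CPE-inhibition data (% inhibition vs. concentration in uM)
conc = np.array([0.05, 0.2, 0.8, 1.6, 3.13, 6.25])
inhib = np.array([2.0, 10.0, 35.0, 48.0, 90.0, 94.0])

params, _ = curve_fit(four_pl, conc, inhib,
                      p0=[0.0, 100.0, 1.0, 1.0], maxfev=10000)
print(f"EC50 ~ {params[2]:.2f} uM")  # fitted EC50 for the hypothetical data above
```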
Western Blotting
Cell samples were lysed in RIPA buffer (Beyotime Biotech Co., Ltd., Shanghai, China) and the lysates were cleared by centrifugation at 12,000 rpm. The proteins were separated by SDS-PAGE and transferred onto nitrocellulose membranes. After being blocked with 3% non-fat milk for 1 h, the membranes were incubated with an anti-HSV-2 VP5 (major capsid protein of HSV-2 [27]) mouse monoclonal antibody (EastCoast Bio, North Berwick, ME, USA) or an anti-β-tubulin mouse monoclonal antibody (Covance, Emeryville, CA, USA) for 2 h. Blots were subsequently incubated with an alkaline phosphatase (AP)-conjugated anti-mouse IgG antibody (SouthernBiotech, Birmingham, AL, USA) for 1 h and developed through the reaction between AP and its substrates, terminated by exposure to light.
Time of ABMA Addition Assay
In the antiviral activity assay against HSV-2 based on the measurement of viral protein and DNA content in the cell cultures, 3.13 µM ABMA or 15 µM chloroquine was added to 90% confluent Vero cells in 24-well plates 5 h before infection with HSV-2 (MOI = 1, which was determined to ensure synchronized infection in a single replicative lifecycle, as reported [28]), while 1 µM acyclovir was added at the same time as infection. After infection for 1 h, DMEM-2% FBS containing the above compounds at their corresponding concentrations was overlaid in place of the medium. Proteins and HSV-2 DNA were extracted and quantified as described in Sections 2.4 and 2.5 at 18 h post-infection, when a single lifecycle had been completed without the occurrence of obvious CPE [29]. In the effective stage assay, ABMA (3.13 µM) and HSV-2 (MOI = 1) were added to 90% confluent Vero cells in 24-well plates following different treatment schemes, as reported, with some modifications [29]. To study a prophylactic effect (pre: −5-0 h), the cells were pretreated with ABMA for 5 h, then infected with HSV-2 after removal of ABMA by washing.
To study an inhibitory effect on virus binding or entry (simultaneous: 0-1 h), the cells were treated with ABMA and infected with HSV-2 at the same time, then overlaid with DMEM-2% FBS after removal of the medium by washing at 1 h post-infection. To study the effect on virus replication (early post: 1-6 h), the infected cells were treated with ABMA from 1 to 6 h post-infection, then overlaid with DMEM-2% FBS after removal of ABMA by washing. To study the effect on late-stage infection (late post: 6-18 h), the infected cells were cultured with DMEM-2% FBS for 5 h, then treated with ABMA from 6 to 18 h post-infection. In addition, HSV-2 was pre-incubated with ABMA at 4 °C for 5 h before infection (direct) to study direct interactions in a cell-free system. HSV-2 infection was performed during 0-1 h for all procedures except the direct procedure, in which the cells were infected with HSV-2 that had been pretreated with ABMA. HSV-2 DNA in the cell cultures was extracted and quantified as described in Section 2.5 at 18 h post-infection.
Binding and Entry Assays
Binding and entry assays were performed as reported previously, with some modifications [30]. The 90% confluent Vero cells in 24-well plates were pretreated with 3.13 µM ABMA, 15 µM chloroquine or 1 µM acyclovir for 5 h before the addition of HSV-2. In the binding assay, the cells were exposed to HSV-2 (MOI = 1) at 4 °C for 2 h. Unbound viruses were removed by washing twice with sterile PBS buffer at 4 °C, and HSV-2 DNA from the original virus inoculum and from the unbound-virus supernatant were extracted separately and quantified as described in Section 2.5 to calculate the amount of bound HSV-2. In the entry assay, the cells were further incubated at 37 °C for 1 h after the binding process. After two freeze-thaw cycles of the infected cells, HSV-2 DNA from the internalized virus was extracted and quantified to calculate the amount of HSV-2 that had entered the cells.
Late Stage Infection Assay
To study the effects of ABMA on the late stages of the HSV-2 lifecycle, 90% confluent Vero cells in 24-well plates were infected with HSV-2 (MOI = 1) for 1 h, then treated with 3.13 µM ABMA, 15 µM chloroquine or 1 µM acyclovir during 6-18 h post-infection. At 18 h post-infection, the supernatants and the infected cells were collected and subjected to direct extracellular virus titration, and to intracellular virus titration after two freeze-thaw cycles. Virus titers were determined by the Reed-Muench dilution method and expressed as 50% tissue culture infectious doses per milliliter (TCID50/mL).
Antiviral Efficacy Assay of ABMA against HSV-2 In Vivo
Female BALB/c mice (6-8 weeks old, n = 10-12 per group) were injected subcutaneously with 2 mg of Depo-Provera (XianJu Pharmaceutical Co., Ltd., Taizhou, China) per mouse to induce a diestrus phase in the genital tract. Seven days later, the mice were inoculated intravaginally with 50,000 PFU of HSV-2 in 10 µL of PBS after anesthesia. At 1 h post-inoculation, and subsequently once daily for seven consecutive days, 1.25 mg/kg or 5 mg/kg of ABMA (doses determined to ensure sufficient dissolution of ABMA in the injections), or 150 mg/kg of acyclovir (positive control) [31], was administered intraperitoneally. The compounds were all dissolved in PBS supplemented with 10% DMSO, and PBS supplemented with 10% DMSO was administered to the untreated virus-infected control. The mice were monitored daily for survival rate and clinical score. Signs of disease were evaluated as: 0, healthy; 1, genital erythema; 2, moderate genital inflammation; 3, genital lesion; 4, hind-limb paralysis; 5, death [32].
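As a side note on the titration step above: the Reed-Muench TCID50 endpoint is mechanical enough to script. A minimal sketch (Python; the dilution series and well counts are hypothetical, not data from this study):

```python
import numpy as np

def reed_muench_tcid50(log10_dilutions, n_infected, n_total, inoc_ml=0.1):
    """Reed-Muench 50% endpoint from cumulative infected/uninfected counts.

    log10_dilutions: e.g. [-1, -2, ...]; n_infected/n_total: wells per dilution.
    Returns TCID50 per mL of the undiluted stock.
    """
    inf = np.asarray(n_infected, dtype=float)
    uninf = np.asarray(n_total, dtype=float) - inf
    # Cumulate infected wells toward lower dilutions, uninfected toward higher ones
    cum_inf = np.cumsum(inf[::-1])[::-1]
    cum_uninf = np.cumsum(uninf)
    pct = cum_inf / (cum_inf + cum_uninf)          # fraction infected per dilution
    i = np.where(pct >= 0.5)[0][-1]                # last dilution with >= 50% infected
    # Proportionate distance between the two dilutions straddling 50%
    pd = (pct[i] - 0.5) / (pct[i] - pct[i + 1])
    log_endpoint = log10_dilutions[i] - pd * abs(log10_dilutions[i] - log10_dilutions[i + 1])
    return 10 ** (-log_endpoint) / inoc_ml         # TCID50 per mL

# Hypothetical 10-fold series, 8 wells per dilution:
print(reed_muench_tcid50([-1, -2, -3, -4, -5],
                         n_infected=[8, 8, 5, 2, 0], n_total=[8, 8, 8, 8, 8]))
```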
Vaginal swab samples were collected at day 5 and day 10 and transferred to 200 µL of Hank's buffer. HSV-2 titers from the swab samples were determined by plaque assay in Vero cells as reported [33]. Protocols for animal experiments were approved by the Committee on Animal Experimental Ethics of the School of Life Sciences at Jilin University [permission code: 2017-nsfc019, 15 January 2017].
Statistical Analysis
In vitro experiments were conducted in technical triplicate and repeated three times independently. A one-way ANOVA test was used for statistical analysis to compare the differences between test groups and untreated virus-infected control groups. A log-rank (Mantel-Cox) test was used for comparisons of the survival curves. Statistical significance is represented by asterisks marked correspondingly in the figures (* p < 0.05, ** p < 0.01, *** p < 0.001).
Reductions of HSV-2-Induced Cytopathic Effects and Plaque Formation Were Detected in ABMA-Treated Cells
ABMA was tested for cytotoxicity before assessing its antiviral activity. As shown in Figure 2A and Table 1, Vero cells responded to ABMA in a dose-dependent manner with a CC50 value of 34.75 µM. No cytotoxicity was observed at the concentrations effective against HSV-2 infection. It is current practice in drug discovery processes to pretreat cells with the compound before infection, in order to better monitor a positive effect. ABMA was tested for anti-HSV-2 activity with treatment administered from 5 h before infection to the end of the assays, as reported previously [15]. As shown in Figure 2B and Table 1, ABMA inhibited HSV-2-induced CPE in a dose-dependent manner with an EC50 value of 1.66 µM and a maximum inhibition rate of 93.36% at 3.13 µM. The selectivity index (SI), which measures the safety of a compound to be developed as an antiviral agent and is calculated as CC50 relative to EC50 [34], was 20.93, higher than that of the positive control drug chloroquine. A plaque reduction assay was performed subsequently to confirm the anti-HSV-2 activity. As shown in Figure 2C and Table 1, ABMA inhibited HSV-2-induced plaque formation in a dose-dependent manner with an EC50 value of 1.08 µM, in accordance with the results obtained in the CPE inhibition assay. Morphological changes of the cells also confirmed the protective effects of ABMA against HSV-2 infection. As shown in Figure 3, untreated virus-infected cells appeared all rounded up and detached from the plates, uninfected cells looked fully spread out, while virus-infected cells treated with the drugs were mostly spread out and only partially rounded up. Thus, ABMA is a safe and effective antiviral agent against HSV-2 in vitro, which protects cells from HSV-2 infection below its toxic concentration.
Reductions of HSV-2 Protein and DNA Content Were Detected in ABMA-Treated Cells
The effects of ABMA on HSV-2 proliferation were measured by quantifying HSV-2 protein and DNA content in the cell cultures. ABMA treatment was administered from 5 h prior to infection through to the end of the assays. HSV-2 protein and DNA content were assayed by Western blot and qPCR assays, respectively, after a single replicative cycle, before the occurrence of obvious CPE at 18 h post-infection [29]. As shown in Figure 4A, a significant reduction in the content of HSV-2 VP5 (the major capsid protein of HSV-2 [27]) was observed in ABMA-treated cells, while that of β-tubulin (a constitutive protein essential for cell function) was not affected. As shown in Figure 4B, a significant reduction in HSV-2 DNA content was also detected in ABMA-treated cells. The positive control drugs chloroquine and acyclovir reduced HSV-2 protein synthesis and DNA replication as expected [19-24]. Based on these data, ABMA appears to be an effective antiviral agent against HSV-2 in vitro, reducing HSV-2 protein and DNA content in the cell cultures.
ABMA Blocks HSV-2 Entry into Cells
The effects of ABMA on different stages of the HSV-2 lifecycle were measured by a mode-of-action assay following different ABMA treatment schemes (Figure 5A). ABMA was added at time points corresponding to different events in the HSV-2 lifecycle. HSV-2 DNA content in the cell cultures was measured for all assay conditions at 18 h post-infection. Significant reductions in HSV-2 DNA content were detected when the cells were pre-treated with ABMA prior to infection (pre), or when ABMA was introduced from 6-18 h post-infection (late post) (Figure 5B). These results strongly suggested that ABMA affected the early events of the HSV-2 lifecycle by acting on the cells directly. ABMA also had an effect on the late stages of the HSV-2 lifecycle as a result of treatment during 6-18 h post-infection (Figure 5B). As there was no loss of cell viability at the concentration of ABMA used in the experiments (3.13 µM) (Figure 2A), the inhibitory effects of ABMA were not due to cytotoxicity. Therefore, ABMA affects both early and late stages of the HSV-2 lifecycle. The latter mechanism is discussed further in Section 3.4.
Figure 4. (A) Samples analyzed by Western blot using an anti-HSV-2 VP5 antibody or an anti-β-tubulin antibody; "+" and "−" represent "with" and "without" the additions, respectively. (B) HSV-2 DNA extracted from the cell cultures at 18 h post-infection and quantified by qPCR. Statistical significance was compared between test groups and the untreated virus-infected control group; *** p < 0.001.
As antiviral agents targeting the essential early stages of the HSV-2 lifecycle (binding and entry) may be more effective than those targeting the late stages, the events in the early stages of the HSV-2 lifecycle that could be targeted by ABMA were further investigated [35]. HSV-2 binding and entry assays were performed at 4 °C and 37 °C, respectively, after pretreatment of the cells with the compounds. This was followed by quantification of bound and entered viruses by measuring HSV-2 DNA content in the original virus inoculum, in the unbound-virus supernatant from the binding assay (unbound virus) and in the internalized virus after freeze-thaw cycles of the infected cells from the entry assay (entered virus), respectively (Figure 6A).
As shown in Figure 6B,C, HSV-2 entry was significantly reduced by ABMA, similarly to chloroquine, which is known to affect HSV-2 entry [20,21,24]. As expected, acyclovir, which blocks virus replication, had no effect on virus binding or entry [19]. Therefore, ABMA blocks HSV-2 entry into cells.
Figure 6. Entered virus was quantified after incubation at 37 °C following the binding process, to calculate the amount of HSV-2 internalized in the cells in the entry assay. Statistical significance was compared between test groups and the untreated virus-infected control group; *** p < 0.001.
ABMA Inhibits the Late Stages of the HSV-2 Lifecycle
The effects of ABMA on the late stages of the HSV-2 lifecycle were further studied using the late-stage infection assay at 18 h post-infection. HSV-2-infected cells were treated with the compounds during 6-18 h post-infection, which corresponds to the late stages of the HSV-2 lifecycle. Following this, intracellular and extracellular virus titers were measured (Figure 7A). As shown in Figure 7B, both intracellular and extracellular HSV-2 titers were significantly reduced by ABMA. As chloroquine has been reported to interfere with the late stages of the HSV lifecycle, virus titers were significantly reduced, as expected [20,22,23].
As the target of acyclovir has been demonstrated to be HSV-2 DNA replication, which mainly takes place during 3-6 h post-infection, virus titers were reduced to a lesser extent [19]. These results confirmed the inhibitory effects of ABMA on the late stages of the HSV-2 lifecycle, as described in Section 3.3. Additionally, these results suggested that the HSV-2 packaging and egress process is most likely blocked by ABMA, as capsid formation and progeny infectious particle packaging gradually take place from 5 h post-infection onwards [36].
ABMA Protects BALB/c Mice from Intravaginal HSV-2 Challenge
Having identified the anti-HSV-2 potency of ABMA in vitro, we next evaluated the protective efficacy of ABMA against intravaginal challenge with HSV-2 in BALB/c mice using a reported mouse model [37]. As shown in Figure 8A, ABMA significantly improved the survival rates of drug-treated infected mice compared to the untreated virus-infected control, whose survival rate was 8.33%. ABMA given daily intraperitoneally at 5 mg/kg provided the best survival rate of 50%. As shown in Figure 8B, ABMA at both doses tested reduced the clinical score with the same trend as the survival rate. Mice treated with 5 mg/kg of ABMA showed the lowest clinical score, below 3, while the value for the untreated virus-infected control reached 4.58. Several mice with clinical scores below 2 (moderate genital inflammation) recovered from the infection over time; however, mice with clinical scores higher than 2 did not recover. As the clinical scores in Figure 8B are presented as the mean values of 10-12 mice, no reduction in clinical score over time is apparent. Despite that, ABMA significantly reduced the severity of the disease and slowed its time course compared to the untreated virus-infected mice. As acyclovir is normally used as the standard treatment for HSV-2 infection, it was used as the positive control drug in the in vivo experiment [31]. The protective rate of acyclovir at 150 mg/kg was 100%, as expected [31]. Body weight changes of the mice were also recorded over 20 days, but there was no significant difference among the groups (data not shown).
HSV-2 titers from vaginal swabs at day 5, when the viral load reached its peak, and at day 10, a later time point, were also measured to confirm the protective efficacy of ABMA against HSV-2 infection in vivo [38]. As shown in Figure 8C, HSV-2 was detected in all groups, indicating a successful HSV-2 challenge. ABMA at 5 mg/kg significantly reduced HSV-2 titers by 0.55 log and 0.60 log at day 5 and day 10, respectively. Acyclovir at 150 mg/kg significantly reduced HSV-2 titers by 0.97 log and 1.20 log at day 5 and day 10, respectively. Based on these data, ABMA effectively protects BALB/c mice from intravaginal HSV-2 challenge. Overall, ABMA is an effective antiviral agent against HSV-2 in vitro and in vivo, with two putative modes of action, affecting both virus entry and the late stages of the HSV-2 lifecycle.
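For orientation, the reported log10 reductions in titer convert to fold reductions as follows (a worked conversion of the numbers above, not additional data):

$$\text{fold reduction} = 10^{\Delta\log_{10}(\text{titer})}:\quad 10^{0.55}\approx 3.5,\;\; 10^{0.60}\approx 4.0,\;\; 10^{0.97}\approx 9.3,\;\; 10^{1.20}\approx 15.8 .$$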
Discussion

The prevalence and severity of complications, together with the close association with cervical cancer and HIV infection, make HSV-2 infection a global health concern [39]. The lack of an available vaccine and the emergence of drug resistance highlight the importance of developing alternative antivirals against HSV-2 with distinct modes of action [10]. ABMA has previously been demonstrated to be active against several intracellular pathogens exploiting host-vesicle transport [15]. Here we demonstrated that ABMA is an effective inhibitor of HSV-2, in vitro and in vivo. The anti-HSV-2 potential of ABMA was first identified in vitro, with EC50 values below 2 µM and an SI value of 20.93, which is suitable for an antiviral agent [40] (Figure 2 and Table 1). Treatment with ABMA also resulted in significant reductions of HSV-2 protein and DNA content in the cell cultures (Figure 4). Subsequently, ABMA was found to target both early and late stages of the HSV-2 lifecycle in the time-of-compound-addition assay, similarly to the effects of SPL-2999 (a dendrimer with active surface groups of naphthyl 3,6-disulfonic acid sodium salts that interact with biological surfaces) on HSV-2 [41] (Figure 5). ABMA was found to affect the early stages of HSV-2 infection by acting on cells directly, as reported in previous studies on the effects of polysaccharide extracts from algal species and of SPL-2999 on HSV-2 [41,42]. The specific early-stage event targeted by ABMA was virus entry into cells, while binding was unaffected by ABMA treatment, similar to the effect of Dynasore (a small-molecule inhibitor of dynamin, a GTPase that controls multiple endocytic pathways and also plays a role in actin assembly and reorganization) on HSV-2 entry [30] (Figure 6). Although herpes simplex virus (HSV) may enter cells by direct fusion of the virus envelope with the cell membrane, substantial evidence for HSV entry through an endocytic mechanism has come to light [43]. HSV begins endocytic entry by using the host membrane machinery to envelop the virion; viruses trapped in endocytic vesicles can then be released into the host cytoplasm by fusion of the virus envelope with the vesicle membrane [44]. The inhibitory effect of ABMA on HSV-2 entry might be related to its effect on host-vesicle transport [15], which is involved in the endocytic entry pathway of HSV-2. Antivirals targeting early-stage infection have attracted significant attention because reduced entry of viruses into cells translates into decreased replication and spread to other cells [35]. Most early-stage drugs that show promise in treating HSV-2 infection are binding inhibitors, targeting the host cell receptor or the viral glycoprotein required for binding [43,44]. Agents preventing HSV-2 entry into cells, including SPL-2999, PM-19 (a Keggin-type heteropolyoxotungstate, K7[PTi2W10O40]·6H2O), Dynasore and the ABMA reported in our study, may provide alternative early-stage inhibitors [30,41,45].
In addition to virus entry, ABMA was also found to inhibit the late stages of HSV-2 infection, as confirmed by significantly reduced intracellular and extracellular virus titers when ABMA was applied during 6-18 h post-infection (Figure 7). Because HSV-2 DNA replicates rapidly between 3 and 6 h post-infection and the formation of capsids and progeny infectious particles takes place gradually from approximately 5 h post-infection onwards [36], these results also suggested that ABMA most likely hinders the HSV-2 packaging and egress process, as reported in a study on the effects of Nelfinavir on HSV-1 [46]. The HSV-2 packaging and egress process begins with capsid assembly in the nucleus; infectious particles are then packaged by budding of capsids into specialized vesicles derived from the trans-Golgi network, gaining an envelope and an outer vesicular membrane. Finally, the virus-containing vesicles move to and fuse with the cell membrane to release the viruses into the extracellular medium [47-49]. That ABMA likely blocks the HSV-2 packaging and egress process suggests two possible explanations: either the target of ABMA in host-endosomal transport is also present in the HSV-2 packaging and egress machinery, or host-endosomal transport, which can be affected by ABMA, participates in the HSV-2 packaging and egress process. Significant protective efficacy of ABMA against intravaginal HSV-2 challenge in female BALB/c mice was demonstrated by an improved survival rate, reduced clinical score and reduced vaginal virus load compared to the untreated virus-infected control (Figure 8). ABMA administered at the highest tested dose of 5 mg/kg gave the best survival rate, 50%, whereas that of the untreated virus-infected control was 8.33%. Several recently developed inhibitors of HSV-2 have undergone, or are currently undergoing, clinical trials, but most have yet to gain licensure due to adverse effects on the host [13]. Based on our study, ABMA might provide an alternative. As ABMA is an initial hit from a high-throughput screen, further medicinal chemistry and pharmaceutical optimization are still required and may lead to candidates with higher anti-HSV-2 activity. Multi-drug therapy, which combines drugs with different modes of action to limit the emergence of drug resistance and to increase selectivity through reduced dosing, is common practice in the treatment of viral infections, including HIV and HCV (hepatitis C virus), but not HSV, for which the currently available drugs share the same target, the viral DNA polymerase [50,51]. ABMA was demonstrated to be an effective inhibitor of HSV-2 that targets virus entry as well as the late stages of the viral lifecycle, targets that differ from those of the commonly used acyclovir. It may well be possible to develop combinations of ABMA with acyclovir, or with other drugs targeting stages of the HSV-2 lifecycle different from those targeted by ABMA, such as virus replication. Such novel combinations might be more effective and might provide an alternative treatment for HSV-2 infection. In conclusion, ABMA has been identified as an effective inhibitor of HSV-2 in vitro and in vivo, inhibiting virus entry as well as the late stages of the HSV-2 lifecycle. Our study expands the list of pathogens against which ABMA is active and exemplifies the potential of ABMA to be developed as a broad-spectrum inhibitor.
As the target of ABMA is a host component rather than the pathogens themselves, drug resistance may be less likely to arise [52].
9,843.4
2018-03-01T00:00:00.000
[ "Biology" ]
Integrative transcriptomic, proteomic, and machine learning approach to identifying feature genes of atrial fibrillation using atrial samples from patients with valvular heart disease

Background: Atrial fibrillation (AF) is the most common arrhythmia, with poorly understood mechanisms. We aimed to investigate the biological mechanism of AF and to discover feature genes by analyzing multi-omics data and by applying a machine learning approach. Methods: At the transcriptomic level, four microarray datasets (GSE41177, GSE79768, GSE115574, GSE14975) were downloaded from the Gene Expression Omnibus database, comprising 130 available atrial samples from AF and sinus rhythm (SR) patients with valvular heart disease. Microarray meta-analysis was adopted to identify differentially expressed genes (DEGs). At the proteomic level, a qualitative and quantitative proteomic analysis of the left atrial appendage of 18 patients (9 with AF and 9 with SR) who underwent cardiac valvular surgery was conducted. The machine learning correlation-based feature selection (CFS) method was introduced to select feature genes of AF using the training set of 130 samples involved in the microarray meta-analysis. The Naive Bayes (NB) based classifier constructed using the training set was evaluated on an independent validation test set, GSE2240. Results: 863 DEGs with FDR < 0.05 and 482 differentially expressed proteins (DEPs) with FDR < 0.1 and fold change > 1.2 were obtained from the transcriptomic and proteomic studies, respectively. Joint analysis of the DEGs and DEPs identified 30 biomarkers with consistent trends. Further, 10 features, including 8 upregulated genes (CD44, CHGB, FHL2, GGT5, IGFBP2, NRAP, SEPTIN6, YWHAQ) and 2 downregulated genes (TNNI1, TRDN), were selected from the 30 biomarkers through the machine learning CFS method using the training set. The NB-based classifier constructed using the training set accurately and reliably classified AF from SR samples in the validation test set, with a precision of 87.5% and an AUC of 0.995. Conclusion: Taken together, our present work might provide novel insights into the molecular mechanism of AF and suggests some promising diagnostic and therapeutic targets.

Background

Atrial fibrillation (AF) is the most common cardiac arrhythmia and is a leading cause of stroke, heart failure, and dementia [1]. AF currently affects over 30 million individuals worldwide [2], and this number is projected to grow dramatically over the next 20 years [3]. Despite more than 100 years of basic and clinical research, the fundamental mechanisms of AF remain poorly understood. Microarray expression analysis of atrial tissues can provide a global, unbiased framework for characterizing the transcriptional changes associated with AF. Advances in high-throughput microarray technology are producing a large volume of gene expression data, a powerful resource for discovering and studying novel biomarkers of AF. Nonetheless, analyses based on high-throughput data may face the dreaded 'curse of dimensionality': the sample size is relatively small while the number of features is very large, which increases the probability of statistical errors [4]. Recently, integrated transcriptomic and quantitative proteomic analyses have been widely used to promote a better understanding of the molecular mechanisms driving biological processes in cells and tissues [5].
Advances in mass spectrometry (MS) provide an unprecedented opportunity for antibody-independent proteome profiling, with approximately 80% of all proteins in major human tissues quantifiable by this technique [6]. By integrating transcriptomic and proteomic data, the 'curse of dimensionality' can be mitigated through cross-validation between the two levels. In addition, combining datasets from different origins by meta-analysis to extend the sample size, and using machine learning algorithms to select and reduce features, can also help address the 'curse' [7]. Owing to the difficulty of obtaining atrial tissue from healthy populations, the majority of atrial transcriptomic and proteomic studies of AF have used atrial tissue from patients undergoing open-heart surgery with or without AF [8,9]. By controlling other variables such as comorbidity, severity of mitral valve disease, age, and sex, analyzing differentially expressed genes (DEGs) or differentially expressed proteins (DEPs) can help explain the associations between gene expression and this complex disease phenotype. Another commonly applied approach is to use samples that are more readily available from healthy people, such as peripheral blood. However, expression profiles from different cells and tissues can differ considerably owing to cell/tissue-specific epigenetic regulation [10]. Hence, we propose to identify feature genes from local atrial tissue, as it directly depicts the altered gene expression profiles of the atria and can thus capture the atrial remodeling process of AF. Here, our objective was to build a more complete understanding of the molecular mechanisms underlying AF and to find potential diagnostic and therapeutic targets. The integration of multi-omics data, together with a machine learning approach, supported the identification of key pathways and feature genes in AF, which may help to investigate the underlying mechanism of AF and to discover potential diagnostic and therapeutic targets.

Microarray data collection and preprocessing

For the meta-analysis, AF microarray expression data sets were collected from the NCBI Gene Expression Omnibus (GEO) database (http://www.ncbi.nlm.nih.gov/geo/). Only microarray data that met the following criteria were included: (1) the data sets were produced by genome-wide mRNA expression profiling by microarray; (2) the experimental platform was GPL570 (Affymetrix Human Genome U133 Plus 2.0 microarray); (3) the data sets were gene expression profiles of human atrial tissue comparing AF and sinus rhythm (SR); (4) the minimum number of cases and controls was three. The raw CEL files were then downloaded and preprocessed using the robust multi-array average (RMA) algorithm with the 'affy' package [11] implemented in R. The quality of individual samples was assessed using the 'arrayQualityMetrics' package [12], and outlier samples detected by the array intensity distribution criterion were excluded. The raw CEL files of the remaining samples were then preprocessed again using the RMA algorithm for background correction, quantile normalization, and summarization.
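The authors perform this step in R with the 'affy' package; purely to illustrate the quantile-normalization component of RMA, a minimal NumPy sketch might look like this (toy matrix, ties handled naively, not the study's pipeline):

```python
import numpy as np

def quantile_normalize(expr: np.ndarray) -> np.ndarray:
    """Force every column (sample) of a genes-x-samples matrix to share the
    same empirical distribution, as in the normalization step of RMA.
    Ties are handled naively (by column order) in this sketch."""
    ranks = np.argsort(np.argsort(expr, axis=0), axis=0)  # per-sample ranks
    reference = np.sort(expr, axis=0).mean(axis=1)        # mean of sorted columns
    return reference[ranks]

# Toy matrix: 4 probes x 3 arrays.
expr = np.array([[5.0, 4.0, 3.0],
                 [2.0, 1.0, 4.0],
                 [3.0, 4.0, 6.0],
                 [4.0, 2.0, 8.0]])
print(quantile_normalize(expr))
```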
We then reannotated the probes of GPL570, as this improves accuracy and makes it possible to identify new transcripts. In brief, the probe sequences were downloaded from Affymetrix (affymetrix.com) and remapped to the human genome (GRCh38 release 99 primary assembly) using the R package 'Rsubread' [13]. The chromosomal positions of these probes were then matched to the corresponding genome annotation in Ensembl using the R package 'GenomicRanges' [14]. Probe sets that mapped to more than one gene were removed to ensure the reliability of the reannotation. The median expression value among multiple probe IDs was used to represent the corresponding gene symbol. After that, 19,557 unique genes were retained, and the normalized, annotated dataset of 19,557 rows and 130 columns was used for the meta-analysis. GSE2240, which contained microarray expression profiles from 10 AF and 20 SR atrial samples, was preprocessed using the RMA algorithm and annotated using the 'annotate' and 'hgu133a.db' packages; again, the median expression value among multiple probe IDs was used to represent the corresponding gene symbol.

Microarray meta-analysis using GeneMeta

The 'GeneMeta' Bioconductor package [15] in R was used to perform a microarray meta-analysis of data sets from different 'origins'. This package implements the meta-analysis method proposed by Choi et al. [15] using fixed or random effects. In this study, samples regarded as having the same 'origin' had to come from the same tissue (left atria, right atria, etc.) and the same microarray study. The random effects model (REM) was used [15]. The false discovery rate (FDR) for each gene was obtained with the function "ZscoreFDR" using 1000 permutations, and genes with FDR < 0.05 were considered DEGs.

Proteomics study

18 left atrial appendage (LAA) tissue samples were obtained as surgical specimens from patients with mitral stenosis undergoing cardiac surgery at the Second Xiangya Hospital of Central South University, including 9 with chronic AF and 9 with SR. The characteristics of all patients are presented in Table 2. For each clinical group, three samples were mixed into one pooled sample. Qualitative and quantitative proteomic analysis was performed using dimethyl-label-coupled high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) and MaxQuant software [16]. The Benjamini-Hochberg method was used to calculate the FDR, and DEPs were identified using the criteria FDR < 0.1 and fold change > 1.2. The detailed procedure of the proteomic study is described in Additional file 1.
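The DEP call combines a Benjamini-Hochberg adjusted FDR with a fold-change filter. As a minimal sketch of that selection rule (the p-values and fold changes below are synthetic, not the study's data):

```python
import numpy as np

def benjamini_hochberg(pvals: np.ndarray) -> np.ndarray:
    """Benjamini-Hochberg adjusted p-values (FDR)."""
    n = len(pvals)
    order = np.argsort(pvals)
    adjusted = pvals[order] * n / np.arange(1, n + 1)
    adjusted = np.minimum.accumulate(adjusted[::-1])[::-1]  # enforce monotonicity
    out = np.empty(n)
    out[order] = np.clip(adjusted, 0.0, 1.0)
    return out

# Synthetic example: call a protein differentially expressed if
# FDR < 0.1 and fold change > 1.2, the criteria used in the paper.
pvals = np.array([0.001, 0.02, 0.04, 0.30, 0.80])
fold_change = np.array([1.5, 1.3, 1.1, 2.0, 1.0])
fdr = benjamini_hochberg(pvals)
is_dep = (fdr < 0.1) & (fold_change > 1.2)
print(fdr, is_dep)
```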
Pathway enrichment analysis

Metascape (https://metascape.org/) is a web-based portal designed to provide a comprehensive gene-list annotation and analysis resource for biologists [17], and it is one of the most effective tools for conducting multi-omics-level enrichment analysis. To gain more insight into the biological roles of the identified DEGs and DEPs, we conducted pathway enrichment analysis of Gene Ontology biological process (GO BP), Kyoto Encyclopedia of Genes and Genomes (KEGG), Reactome, and Canonical pathways in Metascape. By inputting the lists of DEGs and DEPs simultaneously, Metascape can identify commonly enriched and selectively enriched pathways across the two levels, enabling a comprehensive assessment of the molecular features of the biological process.

Cross-validation between the transcriptomic and proteomic studies

The DEGs and DEPs were further compared using VennDiagram to identify shared genes. To make the selected biomarkers more robust, only genes with consistent expression trends (upregulated or downregulated) at both the transcriptomic and proteomic levels were taken forward for further analysis.

Feature selection and classification algorithm

The 130 samples involved in the meta-analysis were used as the training set. The correlation-based feature selection (CFS) method [18] implemented in the WEKA software [19] was applied to the training set to select feature genes. Three popular, state-of-the-art supervised classification methods (NB, Naive Bayes; SMO, sequential minimal optimization; RF, random forest) were used to generate classification models in WEKA with the default parameter settings [20]. The three algorithms were trained on the training set and their performance was assessed by sixfold cross-validation. The best classifier generated on the training set, i.e., the one with the highest accuracy, was then validated on the independent test set GSE2240, which contained right atrial appendage samples from 10 AF patients and 20 SR patients undergoing open-heart surgery. The performance of the classifier was evaluated using criteria including precision, recall, F-measure, Matthews correlation coefficient (MCC), AUC (area under the receiver operating characteristic curve), auPRC (area under the precision-recall curve), true positive rate, false positive rate, and the Kappa statistic.
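The authors ran this workflow in WEKA; purely as an illustration of the train-then-validate pattern, an analogous sketch with scikit-learn's Gaussian Naive Bayes follows. The random matrices stand in for the expression data, and the labels and sizes merely mirror the sample counts; none of this reproduces the study's results.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import precision_score, roc_auc_score

rng = np.random.default_rng(0)

# Stand-ins for the real data: rows = samples, columns = the 10 feature genes.
X_train = rng.normal(size=(130, 10))
y_train = np.repeat([0, 1], 65)           # placeholder AF(1)/SR(0) labels
X_test = rng.normal(size=(30, 10))
y_test = np.array([1] * 10 + [0] * 20)    # GSE2240 has 10 AF and 20 SR samples

clf = GaussianNB().fit(X_train, y_train)  # train on the training set
y_pred = clf.predict(X_test)              # validate on the independent test set
y_prob = clf.predict_proba(X_test)[:, 1]
print("precision:", precision_score(y_test, y_pred, zero_division=0))
print("AUC:", roc_auc_score(y_test, y_prob))
```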
Microarray data description and preprocessing

In the transcriptomic meta-analysis, four microarray data sets were included, containing a total of 54 SR and 79 AF paired atrial samples (Table 1) from patients with valvular heart disease. The included raw CEL files were preprocessed, and quality control analysis of the normalized data sets led to the removal of 3 samples: GSM1005420, GSM3182694, and GSM3182707. After removing the outliers and reprocessing, the normalized data sets consisting of 130 samples were taken forward for the meta-analysis.

Identification of DEGs

As shown in Table 1, we only considered samples from the same study and the same tissue as sharing the same 'origin', which gave a total of 7 different origins. We then performed the meta-analysis using the R package 'GeneMeta' and detected DEGs by comparing differential expression between the AF and SR groups. The analysis identified 863 genes as DEGs (FDR < 0.05; 485 upregulated: z-score > 0; 378 downregulated: z-score < 0) (Additional file 2).

Results of the proteomic study

The characteristics of the patients included in the proteomic study were balanced between the two groups, except for left atrial (LA) size (Table 2). Figure 1a shows the procedure of the proteomic study. Pearson's correlation analysis indicated good repeatability between the samples (Fig. 1b). The mass accuracy of the MS data met the requirement (Fig. 1c), and the distribution of peptide lengths agreed with the properties of tryptic peptides (Fig. 1d). In total, we identified 4489 proteins, including 3606 quantifiable proteins (Fig. 1e). Proteins with FDR < 0.1 and fold change > 1.2 were considered significant, which led to the identification of 482 DEPs (301 upregulated and 181 downregulated) (Fig. 1e, f) (Additional file 3).

Pathway enrichment analysis and visualization

Pathway enrichment analysis helps researchers gain mechanistic insight into gene lists generated from genome-scale (omics) experiments by identifying biological pathways that are enriched in a gene list more than would be expected by chance. Metascape helps to integrate different omics data, such as genomics, transcriptomics, and proteomics, enabling a comprehensive understanding of a biological process. Unlike other methods, Metascape clusters enriched terms into non-redundant groups, which is critical for informing future studies. We visualized the top 20 clusters and chose the most significant (lowest p value) term within each cluster to represent it. For the upregulated proteins and mRNAs, most of the top 20 clusters (19) were enriched at both the protein and mRNA levels, strongly suggesting the importance of these pathways in AF pathogenesis (Fig. 2a). For the downregulated ones, the top 20 clusters were mainly involved in energy-metabolism-related pathways, and these pathways were enriched only at the protein level (Fig. 2b). To further capture the relationships between the terms, we selected a subset of representative terms from each of the 20 clusters (up to the 10 best-scoring terms) and converted them into a network layout visualized in Cytoscape (Fig. 2, right part).

Cross-validation

To make the selected biomarkers more robust, only genes with consistent expression trends (upregulated or downregulated) at both the transcriptomic and proteomic levels were taken forward. As the Venn diagram shows (Fig. 3), 23 upregulated and 7 downregulated genes/proteins had consistent trends at the two levels; these 30 genes/proteins were considered important biomarkers for AF.

Performance evaluation of the AF classifier

After feature selection using the training set, the number of features was reduced from 30 to 10: CD44, CHGB, FHL2, GGT5, IGFBP2, NRAP, SEPTIN6, YWHAQ, TNNI1, and TRDN. After removing the batch effect using the 'sva' package in R, the expression values of these 10 features were used to generate classifiers with three supervised machine learning algorithms (NB, SMO, and RF) based on the training set. We first conducted sixfold cross-validation to classify AF and SR samples. All classifiers performed well, with a precision of 86.9% for NB, 86.3% for SMO, and 76.8% for RF.

(Fig. 2 caption: Pathway enrichment analysis. a Top 20 clusters with the smallest p values among upregulated mRNAs/proteins; b top 20 clusters with the smallest p values among downregulated mRNAs/proteins. The right part displays the network of selected enriched terms: each term is a circle node whose size is proportional to the number of input genes falling into that term and whose color represents its cluster identity (nodes of the same color belong to the same cluster); terms with a similarity score > 0.3 are linked by an edge whose thickness represents the similarity score. Fig. 3 caption: Venn diagram of DEGs and DEPs.)

Discussion

To our knowledge, this is the first integrated transcriptomic and proteomic analysis of human AF atrial tissue, and the first to identify feature genes of AF using a machine learning approach. Previous transcriptomic studies have provided insights into the pathogenesis of AF [21,22]. However, those experiments were generally based on a single data source or restricted to small sample sizes, which can introduce biological and technical biases. Thus, a microarray meta-analysis was used in this study to integrate four microarray data sets of AF from GEO, leading to the identification of 863 DEGs. To build a more complete understanding of AF pathogenesis, we also conducted a proteomic study of local atrial tissue, which identified 482 DEPs.
Pathway enrichment analysis can help to characterize the physiological and functional changes associated with the changes in mRNA and protein expression in AF atrial tissues. For the upregulated mRNAs and proteins, the top 19 scoring items were enriched at both the transcriptomic and proteomic levels, which vouches for the importance and significance of these pathways. Some of the items, such as 'PDGFRB PATHWAY', 'activation of immune response', 'muscle structure development', 'regulation of actin cytoskeleton', and 'leukocyte degranulation', have been shown to play key roles in AF progression [3,23]. For the downregulated mRNAs and proteins, the top 19 scoring items were only enriched at the proteomic level, and these pathways were mainly involved in metabolism regulation, such as 'mitochondrial respiratory chain complex assembly', 'TP53 regulates metabolic genes', and 'response to oxidative stress'. In addition, the 'Metabolism of lipids' pathway was enriched at both levels. These findings accord with recent studies highlighting the role of metabolic remodeling in AF [24-26]. The reason these pathways were only identified at the protein level may be post-transcriptional and translational regulation. After cross-validation between the two omics data sets, we identified 30 genes or proteins with the same trends at both levels. To make the selected features more significant and informative, the machine learning CFS feature selection method was applied to the training set, which led to the final 10 features, of which 8 are upregulated (CD44, CHGB, FHL2, GGT5, IGFBP2, NRAP, SEPTIN6, YWHAQ) and 2 are downregulated (TNNI1, TRDN). The NB classifier based on the expression values of these features in the training set classified AF and SR samples with a precision of 87.5% and an AUC of 0.995 in the independent test set. Some of these feature genes have been reported to be associated with AF or its related pathogenesis. CD44-related pathways, including the CD44/STAT3 and CD44/NOX4 signaling pathways, can lead to atrial fibrosis [27] and Ca2+-handling abnormalities [28] during AF. Secretogranin-1 (CHGB) is present in the secretory granules of atrial myoendocrine cells and is co-localized with atrial natriuretic peptide (ANP), while CHGB genetic variation results in oxidative stress [29] and hypertension [30]. Four and a half LIM domains protein 2 (FHL2) is a component of the hypertrophic response and has been found to protect against cardiac hypertrophy by inhibiting MAPK/ERK signaling [31]. MAPK has been shown to function in the AF context by mediating oxidative stress [32,33], epicardial adipose tissue remodeling [34], atrial fibrosis [35], the load-induced hypertrophic response [36], and ion channel remodeling [37]. Gamma-glutamyltransferase-5 (GGT5) is closely associated with immune cell activation [38] and oxidative stress [39,40] and can be a potential biomarker of myocardial infarction [41]. Insulin-like growth factor-binding protein 2 (IGFBP2) belongs to the insulin-like growth factor-binding protein (IGFBP) family; two recent studies observed a higher hazard of incident AF associated with higher mean plasma levels of IGFBP1 [42] and IGFBP3 [43]. Nebulin-related anchoring protein (NRAP) is present in myofibril precursors during myofibrillogenesis and is thought to be involved in myofibril assembly [44], and its genetic variation is associated with cardiomyopathy [45]. Septin-6 (SEPTIN6) is involved in extracellular matrix remodeling [46].
14-3-3 protein theta (YWHAQ) is a gene in the p53 network and has been shown to promote apoptosis directly upon genotoxic stress [47]. Another proteomic study also identified YWHAQ as an important biomarker in AF [47]. TNNI1 encodes a troponin-I protein that is the dominant form of troponin-I expressed in the fetal/neonatal/infant heart, and its participation in AF remains unknown. Triadin (TRDN) is a stable subunit of the ryanodine receptor 2 (RyR2) and is involved in the regulation of Ca2+ release [48]. The loss or dysfunction of RyR2 stabilizing subunits has been demonstrated to cause spontaneous calcium elevations in AF atrial cells [49]. Our present study further supports and emphasizes the importance of these markers. There are some limitations to the current study. First, the number of samples included in the microarray meta-analysis remains relatively small (n = 130), owing to the limited number of available studies in the GEO database. Second, as there was no corresponding clinical information for the samples, we were not able to perform a prognostic analysis of these biomarkers. Third, the samples used in the transcriptomic and proteomic studies came from patients with valvular heart disease, because of the difficulty of acquiring atrial samples from healthy cohorts. The pathophysiology of AF in patients with valvular heart disease may differ somewhat from that of non-valvular AF. We recommend further study to identify gene expression profiles using atrial samples from non-valvular AF patients and healthy donors. Finally, transcriptomic and proteomic data can only indicate potential causes of a phenotypic response; they cannot predict what will happen at the next level. One should therefore also consider metabolomics, which provides a functional view of an organism as determined by the sum of its genes, RNA, proteins, and environmental factors [50]. Nonetheless, the integrated analysis of multi-omics data along with the machine learning method supports the selected genes as important features of AF. Further studies are needed to clarify their functions in AF pathogenesis.

Conclusions

In conclusion, the current study identified a list of significantly dysregulated feature genes associated with AF using a multi-omics analysis. The machine learning feature selection identified 10 feature genes. The Naive Bayes prediction model built on the training set using the expression profiles of the 10 features accurately and reliably classified AF from SR samples in the independent test set. These findings could provide novel insight into the pathogenesis of AF and suggest that the feature genes might be diagnostic and therapeutic targets for AF. Additional file 1. Detailed procedure of the proteomic study.
4,929.4
2020-11-11T00:00:00.000
[ "Medicine", "Computer Science" ]
Study on the Impact of the Export of China's Final Use Products on Domestic SO2 Emissions

Since China's accession to the World Trade Organization (WTO), its export volume has grown rapidly. Meanwhile, the manufacturing of export products has also resulted in a large amount of SO2 emissions in China. To explore the relationship between the export of China's final use products (ECFuP) and SO2 emissions, this paper first uses the Multi-Regional Input-Output (MRIO) model to study the SO2 emissions caused by the ECFuP during 2003-2011, and then uses Structural Decomposition Analysis (SDA) to decompose the factors affecting SO2 emissions into a technical effect, a structural effect, and a scale effect. The results show that (1) China's SO2 emissions caused by the ECFuP increased (2003-2007), declined (2007-2009), and increased again (2009-2011). (2) The scale effect is the main factor driving the increase of SO2 emissions in China; the technical effect mainly reduced emissions, whereas the structural effect had less impact. Specifically, from 2003 to 2011 the scale effect increased domestic SO2 emissions by 2.2 million tons, while the technical effect and the structural effect reduced emissions by 2.4 million tons and 0.5 million tons, respectively. (3) Across regions, there is a positive correlation between the consumption of the ECFuP and China's SO2 emissions; NAFTA (accounting for 33.77%) causes the largest SO2 emissions and OTHER EU (5.79%) the least. (4) From the industrial aspect, some industries with relatively small ECFuP have caused high SO2 emissions. Among the 17 industries, Electricity, Gas and Water Supply (EGW) accounted for only 0.6% of the total ECFuP but had the largest SO2 emissions (55%); in contrast, Electrical and Optical Equipment (EOE) accounted for 42% of the total ECFuP while its SO2 emissions were only 0.2% of the total. In 2003-2011, the export trade volumes of all industries increased, but the growth rates of less-polluting industries were higher than those of heavily polluting industries. Based on these findings, the paper also proposes some policy recommendations.

Methods and Data Sources

To analyze the relationship between the ECFuP and SO2 emissions, and to decompose the influencing factors, this paper uses the Multi-Regional Input-Output (MRIO) model and Structural Decomposition Analysis (SDA).

Basic Input-Output Model

This paper uses the MRIO model [32-34] to study the impact of the ECFuP on China's SO2 emissions. First, a multi-regional environmental input-output table is constructed; its structure is shown in Table 1. (Note: $V_1, V_2, \dots, V_m$ represent the added value of region 1, region 2, ..., region m, respectively.) In the MRIO table, i and j index the production industries, and r and s index countries or regions. The main input-output balance for production in country r is

$$X_i^r = \sum_{j} x_{ij}^r + y_i^{rr} + \sum_{s} y_i^{rs} \qquad (i, j = 1, 2, \dots, n;\; r, s = 1, 2, \dots, m) \tag{1}$$

where $x_{ij}^r$ is the intermediate demand of industry j for industry i in country r, $y_i^{rr}$ is the amount of industry i's output that satisfies the final use of country r itself, $y_i^{rs}$ is the export of industry i of country r that meets the final use needs of country s, and $X_i^r$ is the total output of industry i in country r. Combined with the input-output table used in this paper, Equation (1) in matrix form becomes

$$X = (I - A)^{-1} Y \tag{2}$$

where $(I - A)^{-1}$ is the Leontief inverse matrix, also known as the complete consumption coefficient matrix; X is the total output matrix of the m countries or regions; Y is the final use matrix; and A is the direct consumption coefficient matrix between industries, whose block $A^{pq}$ describes inputs provided by region p to region q.
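As a numerical illustration of Equations (1)-(2) and the emission accounting described below, consider a toy three-industry example. All coefficients here are invented for illustration, not taken from the study's data; the block merely shows the demand-driven Leontief computation and the emission step applied in the text:

```python
import numpy as np

# Toy 3-industry example of the demand-driven Leontief model X = (I - A)^{-1} Y.
A = np.array([[0.10, 0.05, 0.02],      # direct consumption (technical) coefficients
              [0.20, 0.10, 0.05],
              [0.05, 0.15, 0.10]])
y_export = np.array([100.0, 50.0, 20.0])   # final-use exports to region s

leontief_inv = np.linalg.inv(np.eye(3) - A)   # (I - A)^{-1}
x_required = leontief_inv @ y_export          # total output needed per industry

e = np.array([0.8, 0.1, 0.3])                 # SO2 emission coefficients (t per unit output)
emissions = np.diag(e) @ x_required           # sectoral SO2 emissions, G (I - A)^{-1} y
print(x_required, emissions, emissions.sum())
```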
To meet the final consumption of region s, the production required in each region follows from Equation (2); expanding this relation region by region yields a system of equations, one per region (Equation (3)). In this paper, we study the impact of the ECFuP on China's industrial SO2 emissions. We set region 1 as China and m = 7, so, considering only region 1 (see the shaded column in Table 1), the corresponding subsystem of equations is obtained. Because the trade volumes of intermediate goods in many industries between China and the other regions are very small, and for convenience of calculation, we assume in this paper that $A^{12}X^{2s}, \dots, A^{17}X^{7s}$ are zero; this yields Equation (4), from which the amount China needs to produce to meet the final use of region s is obtained. From the sectoral emission data, the SO2 emission coefficient column vector can then be constructed: letting $e_i^1$ denote the SO2 emission per unit output of industry i in China, we form the diagonal matrix $G = \mathrm{diag}(e_1^1, \dots, e_n^1)$. Therefore, to meet the final use of region s, China's SO2 emission coefficient matrix, the resulting total SO2 emissions, and the SO2 emission vector across sectors all follow by applying G to the required output.

Structural Decomposition Analysis (SDA)

In applications, the SDA method takes various forms, such as LMDI, SSA, and D&L. Following the research of Pang Jun [30], this paper uses the LMDI method to decompose and analyze the factors affecting SO2 emissions in China. We assume that $M^{1s}$ is the total amount of final use products that China provides to region s, and $P_i^{1s}$ is the proportion of the final use provided by China's industry i in the total. The SO2 emissions caused by China's industry i due to the final use products supplied to region s, denoted $ES_i^{1s}$ (formula (12)), can then be expressed as the product of the SO2 emission coefficient, the export structure, and the export scale:

$$ES_i^{1s} = e_i^1 \cdot P_i^{1s} \cdot M^{1s}$$

Therefore, the change in the SO2 emissions of industry i between $t_1$ and $t_2$, $\Delta ES_i^{1s}$, can be decomposed into the sum of a technical effect, a structural effect, and a scale effect, which respectively capture the contributions of changes in SO2 emission intensity, in the supply structure, and in the total supply scale. The total change in SO2 emissions is the sum of these contributions over all industries.
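Concretely, the LMDI decomposition weights the log-change of each factor by the logarithmic mean of the endpoint emissions. The sketch below assumes the stated factorization $ES_i = e_i \cdot P_i \cdot M$; the two-industry numbers are invented for illustration, not the study's data:

```python
import numpy as np

def logmean(a, b):
    """Logarithmic mean L(a, b) = (a - b) / (ln a - ln b); equals a when a == b."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.where(np.isclose(a, b), a, (a - b) / (np.log(a) - np.log(b)))

# ES_i = e_i * P_i * M: emission intensity x export structure x export scale.
# Toy two-industry values for periods t1 and t2.
e1, P1, M1 = np.array([0.8, 0.2]), np.array([0.6, 0.4]), 100.0
e2, P2, M2 = np.array([0.5, 0.2]), np.array([0.5, 0.5]), 150.0

ES1, ES2 = e1 * P1 * M1, e2 * P2 * M2
w = logmean(ES2, ES1)                   # LMDI weights per industry
tech = (w * np.log(e2 / e1)).sum()      # technical effect
struct = (w * np.log(P2 / P1)).sum()    # structural effect
scale = (w * np.log(M2 / M1)).sum()     # scale effect
# LMDI-I is exact: the three effects sum to the total emission change.
print(tech, struct, scale, tech + struct + scale, ES2.sum() - ES1.sum())
```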
Data Sources

The MRIO data are taken from the database in [36]; because the 2013 release of this database extends only to 2011, data up to 2011 are used. For these reasons, the research period of this paper is 2003-2011. In addition, this paper takes 2000 as the base period and uses the GDP deflator to adjust the data in the input-output tables of different years; the GDP deflator index comes from the China Statistical Yearbook 2018 [37]. It should be noted that, following Huang [38], this paper assumes that the SO2 emissions of the primary and tertiary industries are zero and studies only the SO2 emissions of the secondary industry. In addition, based on the 17 industrial sectors listed in the MRIO table, this paper consolidates the industries listed in the China Statistical Yearbook on Environment and abbreviates the names of the 17 industries used in the paper (see Table 2). The China Statistical Yearbook on Environment does not report SO2 emission data for the 'Construction' industry; this paper therefore treats 'Other Sectors' as 'Construction'.

Results and Discussion

To understand the impact of the ECFuP on industrial SO2 emissions during 2003-2011, this section analyzes the results from the overall, regional, and industrial perspectives.

3.1. Analysis of the Overall SO2 Emissions Caused by the ECFuP

This subsection analyzes the SO2 emissions caused by the ECFuP from an overall perspective, that is, treating the six regions as a whole.

3.1.1. Analysis of the ECFuP and SO2 Emissions from the Overall Perspective

Figure 1 shows the volume of the ECFuP and its changes during 2003-2011. The gray modules show the increase of export volume in a given year compared with the previous year, and the dotted-line modules show the decrease of export volume compared with the previous year. As can be seen from Figure 1, the export volume declined in 2009 and grew in all other years. Specifically, the volume of the ECFuP increased steadily in 2003-2008, with an average annual growth rate of 18.96%; growth was fastest in 2005-2007, with annual rates above 20%. These data show that China fully utilized its demographic dividend and comparative advantage after joining the WTO in 2001 [39], and its export volume achieved rapid growth.
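The quoted average annual growth rate is presumably the compound (geometric-mean) rate over the five year-on-year steps from 2003 to 2008, i.e.

$$\bar{g} = \left(\frac{V_{2008}}{V_{2003}}\right)^{1/5} - 1 \approx 18.96\% ,$$

where $V_t$ denotes the ECFuP volume in year t; this is an assumption about the computation, as the paper does not spell it out.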
Although the export volume was still growing in 2008, the growth rate had dropped to 6.89%, and in 2009 there was negative growth. This shows that, under the influence of the 2008 financial crisis, the economic development of various countries was adversely affected to varying degrees, and demand for Chinese products decreased accordingly. In 2010-2011, with the gradual receding of the financial crisis and the recovery of regional economies, demand for China's final use products (CFuP) began to rise in all regions. Stimulated by a series of policy measures such as the Chinese government's export tax subsidies [40], export volume began to grow rapidly again, and the growth rate turned positive once more.

To study the SO2 emissions caused by the ECFuP, Figure 2 shows the SO2 emissions and their changes from 2003 to 2011. The gray modules indicate the increase of SO2 emissions in a given year compared with the previous year, and the dotted-line modules indicate the decrease of SO2 emissions compared with the previous year.

To analyze the factors affecting SO2 emissions and the degree of influence of each factor, this paper uses formulas (10) to (19) to compute the influencing factors from three aspects: the technical effect, the structural effect, and the scale effect. Figure 3 shows the results, depicting the impact of technology, export structure, and export scale on SO2 emissions in different periods. Columns above the abscissa axis represent increases in SO2 emissions, whereas columns below the axis represent decreases; the inflection points of the black line represent the total effect of the three factors on SO2 emissions.

Several industries (see Table 2 for the full names) caused a decrease of the structural effect, while increases of the structural effect were negligible. These data show that, compared with 2005, the export shares of high-polluting industries such as EGW, CPN, ONMM, and BMFM decreased significantly in 2007, which was the main reason for the overall reduction of the structural effect. In general, in 2003-2011 the scale effect was the main factor causing the increase of SO2 emissions in China, while the technical effect was the main factor reducing emissions. This permits a more precise reading of the results of Section 3.1.1. In 2005-2007, the volume of the ECFuP increased rapidly but the growth of SO2 emissions gradually slowed, owing to the dual role of the export structure and technological progress. In 2008, even as export volume increased, SO2 emissions decreased dramatically thanks to progress in production technology. In 2009, SO2 emissions fell greatly through the combined effect of technological progress and the reduced export scale. After 2009, advances in technology continued to play a pivotal role in reducing SO2 emissions.
3.2. Analysis of the ECFuP and SO2 Emissions from the Regional Perspective

From a regional perspective, analyzing the ECFuP and SO2 emissions reveals not only China's major trade areas but also each region's impact on China's SO2 emissions. In addition, from a longitudinal perspective, we can analyze changes in the degree of trade between China and the different regions, and thus changes in the direction of China's trade. Figure 4 shows the proportions of CFuP exported to the six regions, with different colors representing different regions.

North America has always been an important export area for China. In 2018, China's total exports to the United States and Canada reached 513.58 billion US dollars, accounting for more than 20% of China's total exports in that year [41]. With the establishment of the North American Free Trade Area (NAFTA), the 'creative effect' led to a steady increase in labor productivity and technical efficiency in the region, and at the same time stimulated demand for Chinese products. However, with the continuing integration of the free trade area, preferential treatment of Mexican textiles, household appliances, and other products by the United States and Canada put China in a relatively unfavorable competitive position [42]. In addition, after the 2008 financial crisis, trade frictions between China and developed countries such as the United States increased, as did the anti-dumping lawsuits faced by Chinese enterprises [43]. Therefore, although NAFTA has always been the largest export destination for China's final use products, its share is gradually decreasing, which also leads to a simultaneous decrease in its share of SO2 emissions.

In 2003-2011, China's exports to BRIIAT and ROW increased year by year; by 2011, the consumption shares of the two regions had increased by 9.76% and 9.5%, respectively. There are several reasons for this change. For BRIIAT, one reason is that China has been establishing free trade zones with BRIIAT countries step by step, reducing trade barriers and increasing exports.
Examples include the China-Australia Free Trade Area negotiations [44], the China-India joint trade arrangement project [45], and the China-ASEAN Free Trade Agreement. Another reason is the convergence of political positions and the complementarity of resources, which also promote more stable trade between China and some BRIIAT countries [43]. These factors provide more opportunities for China to export to these countries and bring more final use products into them. ROW includes many developing countries; as the fastest-growing developing country, China's influence on these countries has become more evident, and trade with them has become increasingly close. The increase in the region's export share also indicates that trade between China and developing countries is deepening.

Figure 6 maps the SO2 emissions of the 17 industries to the six trade regions. In this figure, the 17 industries on the left are arranged in descending order of SO2 emissions, and the right side lists the six trade regions. The sizes of the 17 modules on the left represent the SO2 emissions caused by the 17 industries, and likewise the sizes of the six modules on the right represent the SO2 emissions attributable to each of the six regions. The width of each connecting bar (from left to right) represents the amount of SO2 emissions in an industry attributable to a given region. The full names of all 17 industries are given in Table 2.

As can be seen from Figure 6, high-pollution industries, such as electric power and gas production (EGW), metal smelting (BMFM), textiles (TTP), chemicals (CCP, CPN), papermaking and printing (PPP), and ore mining and processing (MQ, ONMM), were the industries causing large SO2 emissions in China. The largest emitter was EGW, with average emissions of 1.28 million tons/year, accounting for 55% of the total annual emissions of all industries; it was followed by BMFM (14%), CCP (9%), TTP (5%), ONMM (5%), CPN (4%), PPP (3%), MQ (2%), and FBT (2%). Metal manufacturing (MN, MNR, EOE, etc.) and wood processing (WPWC) caused relatively little pollution: the emissions from LLF, WPWC, RP, MN, EOE, TE, MNR, and CON were only between 46,200 and 68,200 tons per year, accounting for 2.07-3.01% of the total. In particular, the annual SO2 emissions from MNR and CON were only 0.02-0.04%.

This paper calculates the export volume of the 17 industries to the six regions during 2003-2011 and uses the nine-year averages to map the trade between China and the six regions, as shown in Figure 7. In Figure 7, the left side is again sorted by SO2 emissions and the right side represents the six regions; module sizes and connecting-bar widths represent export amounts. Combining the export volumes of the various industries, some industries with relatively small export volumes have caused high SO2 emissions, such as EGW and BMFM.
The combined exports of the seven heavily polluting industries increased from 13.59 billion in 2003 to 32.58 billion in 2011, but their share of total exports fell from 7% to 6%. By contrast, the export volume of industries contributing less to SO2 emissions, such as EOE, MN, MNR, LLF, TE, and RP, was very large: their combined exports increased from 132.21 billion in 2003 to 373.40 billion in 2011, and their export share rose from 65.10% to 69.60%, averaging 68.25%. These data show that although foreign trade increased in all industries, the export growth rate of less-polluting industries was significantly higher than that of heavily polluting industries, which also means that China's export structure was continually being optimized. Moreover, this article finds that the Textiles and Textile Products (TTP) industry is a special case: it has both high SO2 emissions and a large export volume. Analysis of this industry suggests two main reasons. First, the chemical fiber process requires large amounts of sulfurized synthetic materials, which, coupled with imperfect technology and equipment, results in large SO2 emissions. Second, as the world's factory, China's textile and garment exports have long ranked first in the world, and the scale of exports has kept expanding, bringing a large export volume [46]. These findings indicate that if China wants to develop textile manufacturing while maintaining its export trade volume, it still needs to invest more in equipment renewal and innovation.

Conclusions and Recommendations

Based on the MRIO model and the LMDI method, this paper analyzed the domestic SO2 emissions caused by the ECFuP from 2003 to 2011 and decomposed the factors affecting SO2 emissions into three aspects: technology, export structure, and export scale. The conclusions are as follows. First, China's SO2 emissions caused by the ECFuP increased in 2003-2007, declined in 2007-2009, and increased again in 2009-2011. Second, the scale effect was the main factor causing the increase of SO2 emissions, the technical effect mainly reduced emissions, and the structural effect had less impact. Third, from the regional aspect, there is a positive correlation between the consumption of the ECFuP and China's SO2 emissions, with NAFTA accounting for the largest share (33.77%) and OTHER EU the least (5.79%). Fourth, from the industrial aspect, there is no positive relation between the ECFuP and SO2 emissions: some industries with relatively small ECFuP have caused high SO2 emissions.
Specifically, among the 17 industries, EGW accounted for only 0.6% of the total ECFuP but produced the largest share of SO2 emissions (55%); in contrast, although EOE accounted for 42% of the total ECFuP, its SO2 emissions were only 0.2% of the total. Fifth, the export volumes of all industries are increasing, but the growth rate of less-polluting industries is significantly higher than that of heavily polluting industries, so China's export structure is constantly being optimized. From 2003 to 2011, although the total export volume of the seven heavily polluting industries, namely EGW, BMFM, CCP, ONMM, CPN, PPP, and MQ, increased from 13.59 billion to 32.58 billion, their share of total exports decreased from 7% to 6%. At the same time, the share of the low-pollution industries (EOE, MN, MNR, LLF, TE, and RP) increased from 65.10% in 2003 to 69.60% in 2011.

In response to the above conclusions, this paper proposes the following recommendations. First, as the scale effect of the ECFuP may remain negative in the near future, the technical effect and the structural effect are the main levers for decreasing SO2 emissions; more work is therefore needed to improve the relevant technologies and to optimize the export structure. Second, for three regions (BRIIAT, EURO-ZONE, and ROW) during 2003-2011, the increase in the share of SO2 emissions was smaller than the increase in the share of the ECFuP, implying that, from a regional perspective, exports to these three regions are environmentally more efficient. The Chinese government should therefore consider further developing trade with these three regions, especially with the ROW region, which includes many developing countries. Third, the export volumes of EGW and BMFM are smaller than those of other industries, but they generate more SO2 emissions; the Chinese government therefore needs to pay more attention to its tariff policy for these industries.
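For completeness, the technology/structure/scale split named in these conclusions is the standard additive LMDI decomposition. A minimal sketch with made-up two-period data for three industries (all numbers illustrative; the paper's actual inputs come from the MRIO tables):

```python
import math

def logmean(a, b):
    """Logarithmic mean, the weighting function used in additive LMDI."""
    return a if a == b else (a - b) / (math.log(a) - math.log(b))

# Toy two-period data: emissions E_i = Q * s_i * t_i, where Q is the total
# export scale, s_i the export share of industry i, t_i its SO2 intensity.
Q0, Q1 = 203.0, 536.0                               # export scale
s0, s1 = [0.07, 0.65, 0.28], [0.06, 0.70, 0.24]     # export structure
t0, t1 = [1.5, 0.05, 0.30], [0.9, 0.03, 0.20]       # emission intensity

dE_scale = dE_struct = dE_tech = 0.0
for i in range(3):
    E0, E1 = Q0 * s0[i] * t0[i], Q1 * s1[i] * t1[i]
    w = logmean(E1, E0)
    dE_scale  += w * math.log(Q1 / Q0)         # export-scale effect
    dE_struct += w * math.log(s1[i] / s0[i])   # export-structure effect
    dE_tech   += w * math.log(t1[i] / t0[i])   # technology (intensity) effect

# The three effects sum exactly to the total change in emissions:
total_change = sum(Q1 * s1[i] * t1[i] - Q0 * s0[i] * t0[i] for i in range(3))
assert abs(dE_scale + dE_struct + dE_tech - total_change) < 1e-9
```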
Fermionic response in nonlinear arcsin electrodynamics

We consider a certain black hole solution of non-linear arcsin electrodynamics coupled to gravity and axions and study the behaviour of the fermionic operators in the dual (2+1)-dimensional theory. We compute the holographic spectral function for both the backreacted solution and the probe limit over a range of physical parameters. We find that, as the charge density is varied, the system changes from a Fermi liquid to a non-Fermi liquid, with a transition point that depends on the temperature.

Introduction

Strongly coupled systems arising in condensed matter turn out to be a suitable arena [1][2][3][4][5][6][7][8] for the application of the holographic techniques of gauge/gravity duality [9][10][11][12]. These techniques can address strongly coupled non-gravitational systems through a gravity theory with a black hole background. In particular, the analysis of fermionic excitations [13][14][15] exhibits the scaling behaviour of non-Fermi liquids in this setup. Further studies of scaling exponents and operator dimensions in the dual theories appear in [16]. Introducing a dipole coupling on the gravity side leads to a dynamically generated gap [17,18]. Charged Lifshitz black branes were considered in [19,20]. The effect of doping parameters on fermionic excitations in the holographic setup was studied in [21,22] and gives rise to a transition between the Fermi liquid and non-Fermi liquid phases. Similar studies of fermions in a top-down approach appear in [23][24][25][26][27][28]. All these studies involve Maxwell electrodynamics in the gravity setup. Even though these linear electrodynamic models are successful in explaining various features of condensed matter systems, they fail to capture the behaviour of some systems; one such example is the cuprate high-Tc superconductors in the strange metal phase, where the resistivity depends linearly on temperature, together with the so-called anomalous scaling behaviour of the cuprates. This has triggered further exploration of non-linear electrodynamic models. A string-inspired Dirac-Born-Infeld (DBI) model was analysed in [29] to obtain the DC conductivity, following the method proposed in [30,31]. This was further generalised in [32] and [33], leading to a linear scaling of the resistivity with temperature. More general non-linear electrodynamics was considered in [34,35]. Given the success of non-linear electrodynamic models, it is natural to study the Fermi surface and the excitations around it; on that score, [36] studied the fermionic behaviour of the string-inspired DBI model. In the present work, we consider another non-linear model of electrodynamics, known as arcsin electrodynamics, which was introduced in [37]. An attractive feature of this model is that the electric field of a pointlike charge is finite at the origin and the static electric energy of the particle is also finite [38,39]. The dual boundary theory of this model has been analysed in several works. By introducing a scalar field in the gravity theory, it was shown in [39] that the boundary theory may undergo condensation, corresponding to the spontaneous breaking of a U(1) symmetry, so that it admits a superconducting phase. The model was further extended by introducing a neutral scalar field and axions, with the backreaction on the metric also taken into account; this extended model exhibits a metal/insulator transition as a certain parameter of the theory is varied, revealing additional phase structure.
Thus, among the possible non-linear electrodynamics, it has been established that this particular theory has a wide spectrum of phases. In particular, as shown in [35], this model exhibits a linear temperature dependence of the resistivity in a probe limit and can thus capture the DC conductivity of cuprates in the strange metal phase. However, an analysis of the Fermi surface and the excitations around it has not yet been carried out, and the aim of the present work is to fill this gap. For this model, we study the fermionic behaviour over a range of parameters and find a transition/crossover from the Fermi liquid to the non-Fermi liquid phase driven by the variation of the charge density of the system. We also study the fermionic excitations in the probe limit, which may give some insight into the underlying mechanism for the anomalous behaviour of cuprates. The plan of the article is as follows. In Sect. 2 we briefly introduce the nonlinear arcsin electrodynamics model and its black hole solution, and also discuss the probe limit and its background solution. In Sect. 3 we introduce fermions and compute the Green's function of the dual operator. In Sect. 4 we present our results, numerically solving the Dirac equation to obtain the Green's function and studying its behaviour under variation of the different parameters. In Sect. 5 we conclude with a discussion.

Bosonic part

In this section, we revisit the model of non-linear arcsin electrodynamics interacting with gravity introduced in [35]. In the action, ψ^(I) are the two axions and φ is the scalar field, whose potential is V(φ); S_0 is the action of the non-linear electrodynamics, where F = (1/4) F_{μν} F^{μν}, and Z_1(φ) and Z_2(φ) are couplings that depend on the scalar φ. One may observe that Eq. (2.2) reduces to the usual Maxwell action in the weak-field limit A_μ → 0 with the choice Z_1 = Z_2 = 1. The equations of motion follow by variation. From the Einstein part of the action one obtains the Einstein equations, in which θ_{μν} can be expressed in terms of the energy-momentum tensor of the gauge field T_{μν} as θ_{μν} = T_{μν} − (1/2) g_{μν} T^λ_λ, and in the second term on the right-hand side a sum over I is implied. Variation also yields the equations of motion for the scalar field φ, the axion fields ψ^I, and the gauge field; the current J^μ of the dual field theory is obtained by evaluating the corresponding boundary term. To obtain the black hole solution, we consider an ansatz for the metric, gauge field, scalar field, and axions in which h is a magnetic field lying in the x-y plane. The scalar field φ depends only on the radial coordinate r, while the axions ψ^I are aligned along the x and y directions with momentum-dissipation magnitude k_p, which breaks translational symmetry at the boundary. Since we are only interested in the electrically charged black hole solution, we set h to zero. The resulting equations are quite involved, so we consider a particular family of solutions following the ansatz of [32,35], Eq. (2.9), in which z_1 and V_0 are constants; we take z_1 = 1 for further simplification. Under these assumptions, the equations of motion can be solved exactly.
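The display equations of this section did not survive extraction; for orientation, the arcsin Lagrangian originally proposed in [37] has the schematic form below. This is a reconstruction: the way the couplings Z_1, Z_2 dress the arcsin is our assumption, chosen to reproduce the weak-field Maxwell limit quoted above, and the precise normalization should be checked against [35,37]:

$$ \mathcal{L}_0 \;=\; -\,\frac{Z_1(\phi)}{Z_2(\phi)}\,\arcsin\!\big(Z_2(\phi)\,\mathcal{F}\big), \qquad \mathcal{F} \;=\; \tfrac{1}{4}\,F_{\mu\nu}F^{\mu\nu}, $$

which indeed reduces to the Maxwell Lagrangian $-Z_1\,\mathcal{F}$ for weak fields, and to $-\mathcal{F}$ for $Z_1 = Z_2 = 1$.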
The solutions for the metric coefficients are given in [35], where C(r) is solved exactly (Eq. (2.10)). Substituting C(r) into the equation of motion for the scalar yields a first-order differential equation for the metric coefficient D(r); solving it together with the equation of motion for the gauge field gives D(r) and the gauge field, where ρ is the charge density of the system and M the mass of the black hole. M is fixed by requiring that D(r) vanish at the horizon, D(r_h) = 0. The temperature of the black hole follows from the surface gravity; in its expression a shorthand factor 1/√(1 − F²), evaluated at the horizon r = r_h, has been defined.

Probe geometry. As pointed out in [35], the limit of strong momentum dissipation may be treated as a probe limit, in which one can ignore the backreaction on the metric. In this probe limit, we take a hyperscaling-violating geometry as the background, which can reproduce the linear temperature dependence of the resistivity [34]. The hyperscaling-violating geometry arises as a solution [34] for couplings V(φ) ∼ −V_0 e^{ηφ} and Y(φ) ∼ e^{αφ}, where α and η are constants. The ansatz for the metric and the other fields of the hyperscaling-violating geometry is given in Eq. (2.14), where κ, k_p, and L are constants, and z and θ are the Lifshitz and hyperscaling-violation exponents, respectively. The solution for this ansatz was obtained in [35], which fixes the metric coefficient f(r) and the other parameters; here, r_h is the horizon radius, satisfying f(r_h) = 0. The gauge field is easily obtained by solving its equation of motion as in [26]; in its expression we define C̃ = (Z_1 Z_2 r^{2(1−θ)})², with Z_1 ∼ e^{γφ} and Z_2 ∼ e^{δφ}. The temperature follows in the usual way. Before proceeding, we comment on the parameters z and θ: z = 1 and θ = 0 corresponds to the AdS background. To have a well-defined geometry with a resolvable singularity, the range of these two parameters is restricted; the restriction arises from the Gubser criterion in conjunction with the null energy condition. In addition, to obtain the linear temperature dependence of the resistivity of the cuprates, one needs to satisfy the condition of Eq. (2.20) provided λ ≥ 0; for simplicity, we take λ = 0. Even though this behaviour arises in the high-temperature limit, the parameters can be chosen so as to retain cuprate-like behaviour at sufficiently low temperature.

Near-horizon limit. The nature of the fermionic excitations is closely related to the near-horizon structure of the metric. In this regard, we analyse the near-horizon structure of the metrics given in Eqs. (2.8) and (2.14). To study the fermionic spectral function, we consider two different backgrounds: first, the fully backreacted background of (2.8), and second, the hyperscaling-violating geometry arising in the probe limit. We begin with the former in the extremal case (T = 0).

Backreacted geometry. The full metric for the fully backreacted geometry is given in (2.8); we rewrite it in a form with an emblackening factor f(r). The horizon r_h is the point at which f(r) vanishes; similarly, we define a point r_* at which the derivative of f(r) vanishes, f′(r_*) = 0. In the limit of zero temperature, these two points coincide, i.e. r_* = r_h.
First, we perform the near-horizon expansion of the metric in the zero-temperature limit. Near the horizon (r → r_h), the metric coefficient f(r) develops a double zero, which defines the scale L_2, with all the quantities entering L_2 evaluated at r = r_h; L_2 emerges as a natural scaling factor of the metric in the near-horizon limit. We have used Eq. (2.11) to find D(r) and L_2. Following [16], we consider the scaling limit of Eq. (2.23). In this limit, the metric (2.8) and the gauge field (2.12) take their near-horizon forms, with e = √(−2F) L_2² evaluated at the horizon r = r_h. From Eq. (2.24) we see that near the horizon the metric (2.8) becomes AdS_2 × R², with L_2 the curvature radius of AdS_2. To generalise this to finite temperature, where r_* ≠ r_h, one considers a further scaling relation in addition to (2.23) [16,22]; under this additional relation the metric becomes a black hole in AdS_2 × R², with the temperature defined as T = 1/(2πζ_0). It is also interesting to check the asymptotic limit of this geometry, which turns out to be AdS_4, with a constant b_1 defined through c_1² = −1 + √(1 + ρ⁴).

Probe geometry. In the hyperscaling-violating geometry, one can perform a similar near-horizon expansion (r → r_h). Before proceeding, we apply the coordinate transformation r = u^{1/(z−θ)}, which brings the metric (2.14) to a simpler form. In the extremal limit, the near-horizon metric is again AdS_2 × R², with e = √(−2F) and a_1 = F(u) both evaluated at u_* (u_* = u_h in the extremal limit, T = 0). To deduce this near-horizon metric, we have chosen appropriate scaling limits. At finite temperature, one adds the scaling limit given in (2.25) and follows the same procedure; the metric then becomes a black hole in AdS_2 × R², with L_2 and e as given in (2.30).

Green's function. To probe the system, we consider the Dirac action in both the fully backreacted geometry and the hyperscaling-violating geometry. The action consists of a kinetic term, a mass term for the spin-1/2 field, and a coupling of the Dirac field to the electromagnetic field whose strength is set by the dipole coupling p. In the covariant derivative and the Gamma matrices of Eq. (3.2), (e_μ)^a is the vielbein, ω_{μν} is the spin connection, and q is the charge of the spin-1/2 particle. The equation of motion is the curved-space Dirac equation (3.3). To eliminate the spin connection in Eq. (3.3), we adopt the ansatz ψ = (−g g^{rr})^{1/4} e^{−iωt + i k_i x^i} λ(r). For further simplification, we introduce the projection operators of [24,26] and decompose λ(r) into the projected components λ_±, where k̂ is the unit vector along the spatial momentum. With our choice of Gamma matrices, each λ_± can be written in terms of a two-component spinor. In addition, we use the spatial rotational symmetry to set k_i = k for i = x and zero for i = y. With these choices, the Dirac equation reduces to Eq. (3.5). Before discussing the solution of Eq. (3.5) in the full metric, let us consider its near-horizon limit; we do this only for the backreacted geometry.
For the extremal case (T = 0), using (2.24), the Dirac equation simplifies in the near-horizon region, and its solution is λ_± = a E_+ ζ^{−ν} + b E_− ζ^{ν}, where E_± are the real eigenvectors of a matrix U and the exponent ν is the corresponding eigenvalue of U. At finite temperature, the calculation is similar, except that one uses the finite-temperature near-horizon metric and gauge field of Eq. (2.26), which modifies the eigenvalue ν. This exponent plays a central role in determining the nature of the Fermi surface: the lifetime of the fermionic excitations around the Fermi surface scales as τ ∼ ω^{−2ν} at zero temperature and as τ ∼ T^{−2ν} at finite temperature [22]. Since the lifetime of the excitations depends on ν, the metallic behaviour of the system is controlled by ν. In the region of the phase diagram with ν ≥ 1, we have a normal metal phase, where the quasiparticle excitations are governed by Landau Fermi liquid theory. In the region 1/2 < ν < 1, we still obtain stable quasiparticles, but their lifetime scales differently from that of Landau Fermi liquid theory. In the region ν < 1/2, we have short-lived quasiparticles, characteristic of the strange metal phase; this region is called the non-Fermi liquid region. Between the stable and unstable quasiparticle regimes there is a transition point at ν = 1/2, corresponding to a marginal Fermi liquid. The conformal dimension of the dual operator in the IR CFT is ν + 1/2. There is a range of momenta k, known as the oscillatory region [24,26], for which ν becomes complex, leading to a complex dimension of the dual operator.

Returning to the solution of the Dirac Eq. (3.5) in the full metric, we need a boundary condition. For that purpose, we define the ratios η_± = λ_{1±}/λ_{2±}, which obey the first-order flow equation (3.9) with v_± = √(g^{rr} g^{xx}) (ω + q A_t) ± p ∂_r A_t (3.10). To solve Eq. (3.9) numerically, we impose the infalling boundary condition of Eq. (3.11) at the horizon; the boundary retarded Green's function is then given by Eq. (3.12). For the background with hyperscaling-violating geometry, the boundary Green's function in the linear-response regime, in the UV limit, was derived in [40] and is given by Eq. (3.13). Comparing (3.12) and (3.13), one may notice that in the massless case the two Green's functions match. From Eq. (3.9) one can see that the two diagonal components of the retarded Green's function are related to each other by flipping the sign of k, i.e. G_{11}(ω, k) = G_{22}(ω, −k). It is therefore sufficient to evaluate only one of the two, and we consider only G_{22} in our calculations. In the next section, we study the behaviour of the spectral function, A(ω, k) = Im G_{22}(ω, k), under variation of the charge density ρ, the particle charge q, and the momentum dissipation k_p, at zero and finite temperature, for the fully backreacted background. For the hyperscaling-violating geometry, we study the fermionic spectrum for the specific choice of background parameters that leads to the linear temperature dependence of the resistivity.

Result

In this section, we consider the behaviour of the Fermi surface associated with the operators dual to the fermionic modes. As mentioned earlier, we limit ourselves to G_{22}. Considering G_{22} means that we only track η_−, which is sufficient, since the behaviour of the other component (η_+) is analogous: if we find a Fermi surface for η_− at k_F, then the Fermi surface for η_+ will be at −k_F.
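The numerical procedure just outlined (integrate the flow equation from the horizon with the infalling condition and read off G_22 at the boundary) is straightforward to script. The following is a minimal sketch in Python for a toy planar AdS-RN background in the conventions of [16]; the backreacted arcsin-model coefficients and Eqs. (3.9)-(3.11) would replace the toy functions and the schematic flow below, and all parameter values are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

rh, Q, q = 1.0, 1.2, 2.0                      # horizon radius and charges (illustrative)

def f(r):                                     # toy emblackening factor, f(rh) = 0
    return 1.0 - (1.0 + Q**2) * (rh / r)**3 + Q**2 * (rh / r)**4

def At(r):                                    # toy gauge potential, At(rh) = 0
    return Q * (1.0 - rh / r)

def rhs(r, y, omega, k):
    """Riccati flow for eta_- = lambda_1-/lambda_2- (massless case, p = 0)."""
    eta = y[0] + 1j * y[1]
    u = (omega + q * At(r)) / np.sqrt(f(r))   # sqrt(g_xx/g_tt) (omega + q A_t)
    deta = ((u - k) + (u + k) * eta**2) / (r**2 * np.sqrt(f(r)))
    return [deta.real, deta.imag]

def G22(omega, k, rmax=500.0):
    """Boundary retarded Green's function from the infalling condition
    eta(rh) = i (valid for omega != 0); for m = 0, G22 = eta at the boundary."""
    sol = solve_ivp(rhs, [rh * (1.0 + 1e-6), rmax], [0.0, 1.0],
                    args=(omega, k), rtol=1e-9, atol=1e-12)
    return sol.y[0, -1] + 1j * sol.y[1, -1]

def spectral(omega, k):
    """Spectral function A(omega, k) = Im G22(omega, k)."""
    return G22(omega, k).imag
```

A sharp peak of spectral(omega, k) near omega = 0 then signals a Fermi surface at k = k_F, as used in the analysis below.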
For the sake of simplicity, we consider the massless case, m = 0. We also set the value of the scalar potential to V_0 = 6 and the dipole (Pauli) coupling to p = 0. To begin, we numerically solve (3.9), imposing the infalling boundary condition of Eq. (3.11), and obtain the spectral function A(ω, k) as a function of ω and k. To locate the Fermi surface at zero temperature, we look for the poles of the Green's function; at finite temperature, we follow [24,28], where the position of the Fermi surface is given by a sufficiently narrow, sharp peak of the spectral function around k = k_F at ω = 0. In addition, the plot of the spectral function vs. ω at k = k_F should then show a peak around ω = 0.

Backreacted background. In this subsection, we study the behaviour of the fermionic spectrum under variation of the parameters for the fully backreacted background. From (2.13), fixing the value of the temperature T gives a relation between the charge density ρ and the momentum dissipation k_p, so choosing the value of one determines the value of the other. First, we consider the extremal case T = 0, where we vary the charge density ρ and look for the poles of the Green's function for the two charges q = 2 and q = 1. We have plotted k_F versus the inverse charge density (1/ρ) for q = 2 in Fig. 1a and for q = 1 in Fig. 2a. The blue dots correspond to the Fermi momenta (k_F), i.e. to the poles of the Green's function. In the same figures, the shaded portion tapering toward the right represents the oscillatory region. As Fig. 2a shows, with decreasing charge density the Fermi surface moves toward the oscillatory region and enters it around ρ = 1.333. To study the change in the nature of the fermionic excitations further, we have plotted ν versus the inverse charge density (1/ρ) in Fig. 1b for q = 1 and Fig. 2b for q = 2. On decreasing the charge density ρ, ν decreases and vanishes around ρ = 0.8849. Initially, at sufficiently large charge density, the system is in the Fermi regime; as the charge density decreases, the system leaves the Fermi liquid regime, passes through the marginal Fermi regime at ρ = 1.1628, where ν = 1/2, and reaches the non-Fermi regime. A further decrease of ρ leads to the vanishing of ν, signalling the non-existence of the Fermi surface, with the Fermi momenta inside the oscillatory region. Thus, with decreasing charge density ρ, there is a transition from the Fermi to the non-Fermi regime. This transition/crossover takes place at zero temperature and may be related to a quantum phase transition. In addition, comparing Figs. 1b and 2b, we see that with increasing particle charge q the transition point from Fermi to non-Fermi shifts to the right of the parameter space. To make the change of behaviour with momentum dissipation more explicit, we have plotted ν vs. √ρ/k_p in Fig. 3a, b, where the dashed line marks ν = 1/2: the upper half-plane above ν = 1/2 is the Fermi regime and the lower half-plane corresponds to the non-Fermi regime. There is a critical value of the momentum dissipation at which ν = 1/2. Moving into the upper half-plane, the quasiparticles become stable and finally enter the Landau Fermi liquid region, which lies on and above the green dashed line in Fig. 3a, b. Comparing Fig. 3a and b, we see that for the higher charge, q = 2, the transition occurs at a higher value of k_p. The transition is driven by changing the magnitude k_p of the axionic scalars and can be thought of as a disorder-driven transition [32].
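At finite temperature, where k_F is read off from the sharpest peak of A(ω = 0, k), the scan itself is short to script; reusing the toy spectral function from the sketch above (again purely illustrative):

```python
# Scan the spectral function at a small but nonzero frequency (so that the
# infalling condition eta(rh) = i applies) and locate the sharpest peak.
omega0 = 1e-6
ks = np.linspace(0.5, 3.0, 400)               # illustrative momentum window
Avals = np.array([spectral(omega0, k) for k in ks])
kF = ks[np.argmax(Avals)]
print(f"estimated Fermi momentum k_F ~ {kF:.3f}")
```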
After discussing the zero-temperature case, we move to finite temperature. We have plotted the spectral function A(ω = 0, k) vs. k for four different temperatures starting from T/√ρ = 0.035, setting the charge density to ρ = 2 and q = 2. With increasing temperature, the height of the peak decreases, the peak becomes broader, and it moves toward the left: as the temperature increases, the spectral density is spread out by thermal excitations, leading to a broader peak (Fig. 4). To see how the nature of the fermionic excitations changes over the range of momentum dissipation as the temperature is varied, we show density plots in the T/√ρ vs. √ρ/k_p plane in Fig. 5a, b, with a colour gradient representing the value of ν, for the two charges q = 1 and q = 2. The value of ν increases from left to right, with darker and lighter shades corresponding to lower and higher values of ν, respectively. The white region on the left is the oscillatory region, and the green dashed line is the curve ν = 1/2: to its left lies the strange metal phase and to its right the stable quasiparticle phase. The density plot extends to higher values, ν > 1, delimited by the yellow dashed line, which represents the normal metal phase. The figures show that, with increasing temperature, the critical value of the momentum dissipation k_p for the transition from the strange metal to the normal metal phase decreases. Even though, at a given temperature, the transition occurs at a different value of the momentum dissipation for different particle charges q, the qualitative behaviour does not depend on q. This is similar to the normal metal to strange metal transition found in [22], where the transition occurred by tuning the doping instead of varying the magnitude of the axionic scalar. To see the analogous changes with charge density, Fig. 6a shows a density plot in the T/√ρ vs. ρ plane, where the critical value of ρ for the transition from the strange metal to the normal metal phase increases with increasing temperature. It is worth mentioning that, as the system moves from the Fermi to the non-Fermi region, the height of the peak of the spectral function decreases exponentially. This can be observed in Fig. 6b, where we plot the height of the spectral function, A(ω = 0, k = k_F), vs. ρ for q = 1 at T/ρ = 0.025, with the blue dots corresponding to the Fermi momenta. To see the role played by the arcsin parameter Z_2 in the fermionic behaviour, in addition to Z_2 = 1 we have also considered Z_2 = 0.25 and 0.5, choosing T = 0, q = 2, and z_1 = 1. In Fig. 7a we plot ν vs. 1/ρ for the different values of Z_2, with the blue, brown, and purple curves showing Z_2 = 1, 0.5, and 0.25, respectively, and the red and green dashed lines corresponding to ν = 1/2 and ν = 1. Figure 7a shows that the transition point from the Fermi to the non-Fermi liquid regime changes with Z_2: for higher values of Z_2, the transition occurs at a higher value of the charge density ρ.
From Fig. 7a, or more explicitly from Table 1, one may also observe that, for a given value of ρ, there is a transition from the Fermi to the non-Fermi liquid regime as the arcsin parameter Z_2 increases. We have also plotted the Fermi momentum k_F vs. the inverse charge density 1/ρ in Fig. 7b, with the blue, brown, and purple curves representing Z_2 = 1, 0.5, and 0.25, respectively, and the black dashed line marking ρ = 0.91. Figure 7b shows that, for a given value of ρ, the Fermi momentum k_F decreases as the arcsin parameter Z_2 increases, as summarized in Table 1. A similar behaviour was observed in [36], where the Fermi momentum increases as the BI parameter decreases and a system already in the non-Fermi liquid regime becomes "more non-Fermi".

Probe geometry. In this background, we follow a strategy similar to that of the finite-temperature analysis of the previous section. We set p = 0 and study the massless fermionic spectrum at finite temperature with q = 2, where the temperature is controlled by the position of the horizon r_h. In this background, we move the UV to r → 0 via the transformation r → 1/r. We impose the conditions of (2.19), z > 2 and θ < 2, and set λ = 0, α = 1, and z = 2.2, which in turn fixes the values of the other parameters for the cuprate-like scaling of (2.20). We have plotted the spectral function A(ω = 0, k) vs. k for ρ = 1 in Fig. 8a, where we find a very sharp peak at k_F = −6.305; its identification as a Fermi surface is confirmed by the plot of A(ω, k_F) vs. ω in Fig. 8b, which again shows a very sharp peak at ω = 0 with k = k_F. Our analysis of the probe limit at finite temperature thus shows the existence of a distinct Fermi surface. Compared with the fully backreacted geometry at finite temperature, the peak is much sharper in the probe limit, which may be due to the fact that the hyperscaling-violating geometry leads to sharper peaks. On the other hand, comparing with [40], where linear Maxwell electrodynamics was considered on the same hyperscaling-violating background as our probe limit, the present analysis leads to a much sharper peak; in this case, the sharpness of the peak can be attributed to the non-linear electrodynamics considered in this work.

Discussion

We have studied the fermionic behaviour of the nonlinear arcsin electrodynamics model of [35]. We have evaluated the spectral function to determine the existence of a Fermi surface at zero and finite temperature. At zero temperature, we find that, for a given value of the charge density ρ, a particle with higher charge q has a higher chance of forming a Fermi surface. For a given particle charge q, the system at large charge density ρ is in the normal metallic phase; decreasing the charge density moves the system toward the marginal Fermi liquid regime, where ν = 1/2, and a further reduction of ρ results in a transition/crossover toward the strange metal phase. The system shows similar behaviour under variation of the momentum dissipation k_p, where there is a transition/crossover from the normal metal to the strange metal phase, which may be considered a disorder-driven transition [32].
Next, we investigated the finite-temperature case, where we see that, for a given charge q, with increasing temperature the height of the spectral function A(ω, k) decreases and the peak becomes broader and broader, showing that as the temperature rises the spectral density is spread out by thermal excitation. We have also studied the nature of the Fermi surface under variation of the momentum dissipation k_p and the charge density ρ at finite temperature. With increasing temperature, the critical value of the momentum dissipation for the transition decreases, whereas the critical value of the charge density for the transition increases. For a given value of the momentum dissipation and of the charge density, an increase in temperature drives a transition from the normal metal to the strange metal phase, which is similar to the case of the cuprate high-Tc superconductors, where the transition depends on the doping. In the present model, as mentioned in the introduction, we have considered a particular phase of the dual boundary theory. Comparing these results with those of [39], we see that the boundary charge density ρ plays an important role in both phases of the dual boundary theory. In [39], ρ determines the critical phase-transition temperature of the superconductor, and in the present case ρ determines the critical transition temperature from the Fermi to the non-Fermi regime; in both cases, the transition temperature increases with increasing charge density ρ. In particular, in [39] it was observed that, for a given temperature, the critical value of the charge density increases as β increases; similarly, in the present model, if we keep the temperature fixed at T = 0 and increase the analogous parameter Z_2, the critical value of the charge density also increases. We have also studied the fermionic behaviour under variation of the arcsin parameter Z_2, finding that an increase of the arcsin parameter drives a transition from the Fermi to the non-Fermi regime. We have further analysed the behaviour of the Fermi momentum k_F under variation of Z_2, finding that, for a given boundary charge density ρ, the Fermi momentum decreases as the arcsin parameter increases. Comparing this with another model of nonlinear electrodynamics studied in [36], where the nonlinearity is controlled by the BI parameter, we see that in both cases the Fermi momentum decreases as the nonlinear parameter increases, and a system already in the non-Fermi regime moves toward a "more non-Fermi" regime. We have also considered the probe limit of this nonlinear system with a hyperscaling-violating geometry as the background. It was shown in [33,35] that, for particular choices of the parameters in this background, the system exhibits a linear temperature dependence of the resistivity, as in the cuprates. We chose those parameters and studied the fermionic spectrum at finite temperature, finding a very sharp peak in the spectral function that indicates the existence of a Fermi surface. Comparing these results with [40], where linear Maxwell electrodynamics was considered on the same background as our probe limit, we find that the peak in our case is much higher and sharper, a feature that can be attributed to the non-linear electrodynamics.
In this work, we have chosen the neutral scalar field φ to vanish in the backreacted geometry and considered a simpler solution. Considering more general configurations with a non-zero neutral scalar φ, along with non-trivial choices of the coupling Z_1, may lead to further interesting solutions, and it would be interesting to study the fermionic behaviour for such backgrounds. It was shown in [35] that turning on the magnetic field h can give rise to a metal/insulator transition as h is varied; an analysis of the fermionic excitations in the presence of a magnetic field may shed further light on such transitions. In [39], the authors analysed the superconducting phase of this nonlinear electrodynamic model at finite temperature. It would be interesting to study the fermionic behaviour in this superconducting phase at finite temperature, as well as in its zero-temperature limit, which may correspond to domain-wall solutions along the lines of [41][42][43]. As we have mentioned, the dual boundary theory associated with this model admits a rich phase structure, and it would be interesting to incorporate all of these phases in a single gravity theory. Another natural extension of the present work is to consider U(1) × U(1) gauge fields in this setup and introduce doping along the lines of [22]. We hope to report on some of these directions in the future.
Clinical Presentation and Severity of SARS-CoV-2 Infection Compared to Respiratory Syncytial Virus and Other Viral Respiratory Infections in Children Less than Two Years of Age

The spread of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and the implementation of restrictive measures led to a dramatic reduction in the occurrence of respiratory syncytial virus (RSV), together with rare and mild bronchiolitis induced by SARS-CoV-2. We describe the respiratory picture of SARS-CoV-2 infection and evaluate the frequency and severity of SARS-CoV-2 bronchiolitis, comparing it with other respiratory viral infections, in children less than two years of age. The severity of respiratory involvement was evaluated based on the need for oxygen therapy, intravenous hydration, and the length of hospital stay. A total of 138 children hospitalized for respiratory symptoms were enrolled: 60 with SARS-CoV-2 and 78 with RSV. In the group of SARS-CoV-2-infected children, 13/60 (21%) received a diagnosis of co-infection. Among the enrolled children, 87/138 (63%) received a diagnosis of bronchiolitis. The comparative evaluation showed a higher risk of needing oxygen therapy and intravenous hydration in children with RSV infection and co-infection compared to children with SARS-CoV-2 infection. In the children with a diagnosis of bronchiolitis, no differences in the main outcomes among the groups were observed. Although children with SARS-CoV-2 infection have less severe respiratory effects than adults, the pediatrician should pay attention to bronchiolitis due to SARS-CoV-2, which could have a severe clinical course in younger children.

Introduction

Coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), is a matter of great concern for global public health. Italy was the first European country affected by the pandemic; from the first cases in March 2020, several waves were observed, with 23 million total cases registered over the four "pandemic waves" [1]. COVID-19 generally has a milder course in children than in adults, with broad clinical manifestations ranging from asymptomatic infection to mild or moderate illness (fever, headache, cough, vomiting, diarrhea, and dyspnea). Severe or critical disease has rarely been seen, especially in children with underlying medical conditions [2,3]. Pediatric cases in Italy account for 19.7% of the total: since the beginning of the pandemic, 4,819,122 cases among individuals 0-19 years of age have been diagnosed and reported by the COVID-19 surveillance system of the Italian National Institute of Health (Istituto Superiore di Sanità, ISS), of whom 25,389 were hospitalized, 573 were admitted to intensive care, and 93 died [1]. Respiratory syncytial virus (RSV) is a common respiratory pathogen and a leading cause of bronchiolitis among young children [4]. Bronchiolitis is an inflammation of the bronchioles, usually caused by an acute viral illness, and is the most common lower respiratory tract infection in children younger than 2 years of age. Respiratory distress impedes adequate oral intake, resulting in frequent clinician visits [5]. Bronchiolitis is the most common cause of hospitalization in infants in high-income countries, and RSV is the most common cause of the disease.
It is estimated that nearly all children contract their first and most severe RSV infection before reaching 2 years of age and subsequently experience milder infections later in life [6]. Most children have been infected with RSV by the time they are 2 years of age and, although most have mild respiratory symptoms, RSV infection can cause severe disease, mainly in young children. RSV infection has a substantial global impact, representing the second most frequent cause of death in infants. The transmission of RSV is seasonal, with annual epidemics occurring during the winter months in the northern hemisphere. Like SARS-CoV-2, RSV is primarily transmitted through respiratory droplets (i.e., coughs and sneezes), including indirect contact through contaminated surfaces. Widely implemented non-pharmaceutical interventions (NPIs) against SARS-CoV-2 (e.g., stay-at-home orders, the wearing of face coverings, physical distancing, and the promotion of improved hygiene, such as hand washing) all have the potential to prevent the transmission of other communicable, particularly respiratory, diseases. During the SARS-CoV-2 pandemic, a marked decrease in the number of bronchiolitis cases and the disappearance of the RSV winter epidemic were observed, and SARS-CoV-2-related bronchiolitis, although rare, was observed to have a mild clinical course [7]. A significant effect on respiratory and non-respiratory admissions was observed, particularly decreases in hospital admissions for respiratory infections, with a steep reduction in all RSV indicators: fewer laboratory-confirmed cases, fewer hospital admissions, and fewer instances of emergency department access by children younger than 5 years of age during the pandemic [8,9]. The extraordinary absence of RSV during winter 2020-21 probably produced a cohort of younger children without natural immunity to RSV, raising the potential for increased RSV incidence and virulence. During the strict application of NPIs, the overall frequency of community-acquired infections was reduced [10]; after the withdrawal of these measures, a rebound rise in some of them (enteroviral infections, bronchiolitis, gastroenteritis, and otitis) was observed, with peaks beyond the pre-pandemic level [11,12]. In Italy, in the 2021-2022 autumn-winter season, an increase in pediatric hospitalization rates due to respiratory symptoms was observed [13]. In our COVID-19 regional HUB pediatric ward, we observed an increase in moderate respiratory illnesses, which could be caused either by SARS-CoV-2 or by other viral pathogens responsible for similar clinical pictures. Previous studies reported that the prevalence of viral co-infections associated with SARS-CoV-2 was approximately 10%, with generally poorer clinical outcomes in comparison with isolated SARS-CoV-2 infection. However, viral epidemiology can vary considerably at the local level, and SARS-CoV-2 may have variable clinical outcomes depending on the variant assessed [14,15]. Li et al. report that co-infection was relatively common in children with COVID-19: the most frequent co-infecting pathogen was Mycoplasma pneumoniae (25%), followed by viral (7%) and bacterial (5%) co-infections [16]. In a cohort of 93 children with SARS-CoV-2 infection, co-infection was detected in 7 (7.5%) patients. According to some authors, co-infection is associated with longer hospital stays and with the worsening of symptoms and complications in older patients (over 65 years).
The same study showed that co-infection is more frequent in patients aged 0-5 years (59%) [17]. In a recent systematic review and meta-analysis of pediatric patients, the bacterial, fungal, and respiratory viral co-infection rates were 4.73%, 0.98%, and 5.41%, respectively; there was a male predominance, and most cases belonged to white (Caucasian) ethnicities. The most commonly identified virus and bacterium in children with COVID-19 were RSV (31.4%) and Mycoplasma pneumoniae (23.1%) [18]. Our aims were to describe the clinical respiratory picture of SARS-CoV-2 infection, to compare it with other respiratory viral infections, and to evaluate the frequency and severity of SARS-CoV-2 bronchiolitis in children less than two years of age.

Study Design and Population

A retrospective cohort study was performed, collecting data on all children under the age of 2 years hospitalized between November 2021 and April 2022 for respiratory symptoms in two different hospitals of the Campania region, Italy. Children with SARS-CoV-2 infection and respiratory symptoms were admitted to the COVID-19 regional HUB of the Department of Pediatrics of AOU Federico II in Naples; children hospitalized for RSV infection and negative for SARS-CoV-2 infection were enrolled in the pediatric unit of San Leonardo hospital in Castellammare di Stabia (Naples). We included children up to two years of age because the restrictions linked to the COVID-19 pandemic during 2020-2021 led to a reduction in the circulation of respiratory viruses in the community, making even older children susceptible to RSV infection. The infections in our entire cohort originate from the Omicron rather than the Delta variant wave. Respiratory symptoms included both symptoms of upper respiratory tract infection (URTI), such as rhinorrhoea and cough, and signs of respiratory distress (e.g., a high respiratory rate for the patient's age, use of accessory respiratory muscles, intercostal retractions, nasal flaring, crackles or wheezing, and low oxygen saturation levels). The diagnosis of SARS-CoV-2 infection was performed by a specific RT-PCR on a nasopharyngeal swab in all hospital-admitted patients, independently of the reason for hospitalization; the diagnosis of other respiratory viral infections was performed, only in the presence of respiratory symptoms, by a multiplex PCR on a nasal swab for the following viruses: coronaviruses HCoV-NL63, HCoV-OC43, HCoV-229E, and HKU1, RSV A-B, rhinovirus, metapneumovirus, influenza A-B, adenovirus, bocavirus, parechovirus, and enterovirus. For each patient, personal and clinical data were collected from medical records. Pre-existing risk factors for more severe illness (such as prematurity, atopy, the type of feeding, having a parent who smokes, and comorbidity) were evaluated [19]. The children with SARS-CoV-2 infection were divided into two groups based on the presence or absence of another viral infection; all the children with co-infection belonged to the group referred to the referral center in Naples. Children with a clinical picture of bronchiolitis, defined according to the literature [20][21][22], were then selected from each group among the patients under the age of two, and the difference in the severity of clinical presentation was evaluated between groups. All children with a documented bacterial etiology or with clinical and laboratory signs of suspected systemic infection were excluded.
In detail, we excluded all children with bacterial infections (such as urinary tract infections, sepsis, or gastrointestinal infections with a positive bacterial culture); we also excluded all children with documented bacterial pneumonia (Mycoplasma, Chlamydia, Streptococcus pneumoniae, or other bacteria detected by RT-PCR on a pharyngeal swab). Finally, we excluded children with severe clinical conditions suggestive of sepsis but without a confirmed microbiological diagnosis, as well as children showing a significant increase in inflammatory markers, such as severe leukocytosis with significant neutrophilia or an increase in C-reactive protein (CRP) above 10 times the normal value. The primary outcomes of our analysis were the length of hospitalization, the need for oxygen therapy, and the need for intravenous hydration; the secondary outcomes were the need for corticosteroid and antibiotic therapy and the inflammatory indexes among the three final groups of children: children with SARS-CoV-2 infection, children with RSV infection, and children with viral co-infection (SARS-CoV-2 plus any other viral respiratory infection). Oxygen supplementation was started when the peripheral oxygen saturation, measured with a pulse oximeter, was <92%. The oxygen saturation levels used as a guide for commencing supplemental oxygen therapy vary from <90% to <95% among the guidelines; however, the most widely recommended cutoff is <92% [23]. When indicated, oxygen supplementation was delivered through a high-flow nasal cannula. Intravenous fluid administration was started if the child could not ingest enough oral fluids. An inhaled bronchodilator was administered to children with a significant auscultatory finding of wheezing who demonstrated a good clinical response after an initial dose. We used systemic steroids only if the child's symptoms showed no improvement or worsened 2 days after starting oxygen supplementation. Finally, we started antibiotic treatment only if the child, during hospitalization, showed an increase in CRP or a chest X-ray suggestive of bacterial over-infection. The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee for Biomedical Activities, University of Naples Federico II, Naples, Italy (protocol code 226/21). Written informed consent to the use of clinical data was obtained from the parents of all the children involved in the study.

Statistical Analysis

The original dataset was created and managed using Microsoft Excel®. Differences between groups were evaluated by the Chi-square test or Fisher's exact test for categorical variables and by the non-parametric Mann-Whitney test or one-way analysis of variance (ANOVA) for continuous variables, as appropriate. The relative risk for the main severity outcomes was also calculated. Statistical analyses were performed with IBM SPSS Statistics for Windows, Version 26.0 (Armonk, NY, USA: IBM Corp). The statistical significance level was set at p < 0.05.

Results

In the group of co-infected children, the most frequently observed virus was RSV, detected in 8 out of 13 children; metapneumovirus and bocavirus were observed in two children, rhinovirus/enterovirus in two other children, and coronavirus OC43 in one child.
The main features of the enrolled children are shown in Table 1. Significant differences were observed in age between the RSV and co-infected children (mean age 4.5 ± 4.2 versus 7.3 ± 5.6 months), in the type of feeding between the COVID-19- and RSV-infected children (23.4% versus 57.7% formula feeding), and in the history of prematurity between the RSV and co-infected subjects (6.4% versus 23.1%). No other significant differences between the groups were observed in terms of risk factors for respiratory infections.

Clinical Features

Fever was significantly more frequently observed in children with SARS-CoV-2 infection compared to children with RSV infection (60% versus 9%, respectively; p < 0.001), whereas respiratory distress was significantly more frequent in children with RSV infection compared to children with SARS-CoV-2 infection.

Biochemical and Radiological Findings

No significant differences were observed in the laboratory findings between the groups, except for the average neutrophil count, which was significantly lower in COVID-19 children than in RSV-infected or co-infected children (3.30 ± 2.54/mm³ versus 5.28 ± 3.36 versus 5.63 ± 3.07, respectively; p = 0.002). No significant differences were observed between the groups in inflammatory markers, even if the percentage of children with increased CRP (>5 mg/L) was higher in co-infected children than in COVID-19- and RSV-infected children (38% versus 36.2% versus 19.2%; p = 0.071). Chest radiography was performed on all children, and no differences in the frequency of radiographic abnormalities (interstitial disease, thickening, and lobar findings) between the groups were found. Overall, therefore, the number of children who required systemic steroids was significantly higher in the group of children with RSV infection (Table 1).

Difference in the Treatment among Children with SARS-CoV-2, RSV, and Co-Infection

A total of 54/138 (39%) children received antibiotics, of whom 37 showed increased C-reactive protein levels suggestive of bacterial over-infection and 28 had chest X-rays showing thickening suggestive of bacterial infection; twenty-five children showed both increased C-reactive protein levels and X-ray thickening suggestive of bacterial pneumonia. The need for antibiotics was significantly higher in children with RSV infection compared to SARS-CoV-2 children. Additionally, children with co-infection were more frequently treated with antibiotics compared to those with SARS-CoV-2 infection, whereas no significant differences were observed between the co-infected and RSV-infected groups, regardless of the hospital to which they were admitted (Table 1). The need for bronchodilators was higher in children with RSV infection and co-infection compared to children with isolated SARS-CoV-2 infection (Table 1).

Severity Outcomes of Respiratory Involvement among Children with SARS-CoV-2, RSV, and Co-Infection

The management approach was linked to the severity of the clinical presentation. We considered oxygen supplementation, intravenous fluid requirement, and the length of hospital stay as the severity outcomes. All the outcomes were worse in co-infected and RSV-infected children compared to children with SARS-CoV-2 (Figure 2a,b). A total of 44/138 (32%) children needed oxygen supplementation, with a mean duration of 4.18 ± 1.5 days.
A total of 58/138 (42%) children needed intravenous hydration, with a mean duration of 3.2 ± 1.5 days. In detail, oxygen supplementation was needed by 53.8% of children with co-infection, 43.6% of children with RSV infection, and only 6.4% of children with isolated SARS-CoV-2 infection. A significantly higher number of children with RSV infection and co-infection required parenteral fluid supplementation compared to children with SARS-CoV-2 infection (Figure 2a,b). No difference in the need for oxygen supplementation was observed between co-infected children and children with RSV infection (RR 1.24 (0.7 to 2.1); p = 0.7) (Figure 2c), and no difference in the need for parenteral fluids was observed between co-infected and RSV-infected children (RR 1.26 (0.8 to 1.9); p = 0.5) (Figure 2c). According to the ANOVA test, the difference in the length of hospital stay among the three groups was not statistically significant; however, the mean duration of hospital stay was significantly higher for co-infected children compared to children with SARS-CoV-2 infection (6.0 ± 2.4 vs. 4.6 ± 1.9 days; p = 0.01) (Figure 2b). No difference was observed between co-infected and RSV-infected children (6.0 ± 2.4 vs. 5 ± 2.5 days; p = 0.152) (Figure 2c).

Clinical Characteristics and Severity of Bronchiolitis in Children with SARS-CoV-2, RSV, and Co-Infection

Finally, we evaluated the outcomes in children with a clinical diagnosis of bronchiolitis in the three groups. The number of children with bronchiolitis was significantly lower in the group of children with SARS-CoV-2 infection compared to the children with bronchiolitis due to RSV infection or viral co-infection (10/47 (21.3%) versus 68/78 (87.1%) versus 9/13 (69.2%), respectively; p < 0.001). Fever was more frequently observed in children with SARS-CoV-2 bronchiolitis compared to children with RSV bronchiolitis and children with bronchiolitis and viral co-infection (60% versus 13.2% versus 44.4%, respectively; p = 0.001). Coughing and poor feeding were significantly more common in children with bronchiolitis due to RSV infection compared to children with SARS-CoV-2 bronchiolitis (60.3% versus 20%; p < 0.001, and 67.2% versus 20%; p < 0.05, respectively), suggesting an increased severity of respiratory involvement in RSV infection. No significant difference in the frequency of other symptoms was observed among the groups. In the laboratory evaluation, no major differences were observed, even if in RSV-infected children the mean white blood cell (WBC) count was higher than in COVID-19-infected children (13.4 ± 5.4/mm³ versus 9.5 ± 1.5/mm³; p = 0.028).
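The relative risks quoted above follow from standard 2x2 contingency arithmetic. A minimal sketch in Python, using the log-RR normal approximation for the confidence interval and event counts reconstructed from the percentages reported in the text (an assumption):

```python
import math

def relative_risk(a, n1, c, n2):
    """Relative risk of a/n1 vs. c/n2 with a 95% CI via the log-RR method."""
    rr = (a / n1) / (c / n2)
    se = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)
    return rr, rr * math.exp(-1.96 * se), rr * math.exp(1.96 * se)

# Oxygen therapy: ~7/13 co-infected vs. ~3/47 SARS-CoV-2-only children
# (counts inferred from the 53.8% and 6.4% figures above).
print(relative_risk(7, 13, 3, 47))   # -> RR ~ 8.4, wide 95% CI (~2.5 to ~28)
```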
To evaluate whether SARS-CoV-2 bronchiolitis was more or less severe than bronchiolitis induced by other viral infections, we analyzed the main clinical severity outcomes among the three groups of children. No statistical differences in the relative risk were observed among the three groups in the length of hospital stay and oxygen requirement, whereas the requirement for parenteral infusion was significantly increased in the children with RSV infection compared to those with SARS-CoV-2 infection (Figure 3a-c). Again, bronchodilators and systemic steroids were needed in a significantly higher percentage of children with RSV bronchiolitis compared to children with SARS-CoV-2 bronchiolitis (83.8% vs. 50%; p = 0.026, and 91.2% vs. 30%; p < 0.001, respectively). Only two children had symptoms that worsened during hospitalization and were admitted to the PICU: one child developed mild pneumothorax, and one child died. Both children were in the RSV infection group and were younger than 6 months old.

Discussion

Bronchiolitis is a potentially severe respiratory presentation of acute respiratory infections induced by viruses, namely RSV, influenza, and others, in infants and younger children. Although COVID-19 in children is generally a mild respiratory infection, it may present with respiratory distress like other viral bronchiolitis [24]. A recent report suggests that RSV-infected patients require a higher level of medical care and stay in hospital longer than SARS-CoV-2-infected children [25]. In our study, we included patients admitted for respiratory symptoms with SARS-CoV-2 infection and compared them with patients with respiratory symptoms and RSV infection or co-infection (SARS-CoV-2 plus any other respiratory virus).
During the period between November 2021 and April 2022, 87 children under two years of age were admitted to our center for SARS-CoV-2 infection. A total of 60/87 (69%) presented respiratory symptoms. This percentage is quite high, probably due to the high prevalence of the Omicron variant, which was the main circulating SARS-CoV-2 variant in Italy in that period. The comparative clinical evaluation confirms, as reported in the literature [25], that patients with SARS-CoV-2 infection have milder respiratory symptoms and a shorter duration of hospitalization than patients with RSV infection or co-infection. This difference was observed in all the main outcomes of severity, supporting the hypothesis that the severity of symptoms in the group of children with co-infection was related to the presence of RSV rather than SARS-CoV-2. Furthermore, the need for adjunctive drugs, such as systemic steroids, bronchodilators, and antibiotics, was lower in SARS-CoV-2 patients than in the other two groups of children. These differences in the use of steroids and antibiotics could, at least in part, be attributed to different management practices in different hospital settings. However, both antibiotics and systemic steroids were prescribed more frequently to co-infected and RSV-infected children than to SARS-CoV-2 children, regardless of the hospital to which they were admitted. Co-infected children were managed at the COVID-19 regional HUB of the Department of Pediatrics of AOU Federico II in Naples, whereas RSV-infected children were exclusively managed at the pediatric unit of San Leonardo hospital in Castellammare di Stabia (Naples). This suggests that differing management practices did not influence the results. Overall, these data show that patients infected with SARS-CoV-2 have milder respiratory involvement than children with RSV infection or co-infection (SARS-CoV-2 and another respiratory virus). To assess the differences in the clinical severity of bronchiolitis due to SARS-CoV-2, we selected all patients with a clinical diagnosis of bronchiolitis and analyzed the main clinical outcomes of severity among children with different etiologies. About 60% of the entire population of enrolled children fulfilled the criteria for the clinical diagnosis of bronchiolitis (as defined in the Methods section). In our population, bronchiolitis was more frequently observed in the group of children with RSV infection (68/78, 87.1%) than in children with co-infection (9/13, 69.2%) or children with SARS-CoV-2 infection alone (10/47, 21.3%). This result agrees with data from the literature showing that bronchiolitis due to SARS-CoV-2 is less frequently observed than RSV-induced bronchiolitis [7]. Despite this, the incidence in our population is still higher than that reported in the literature, also because the infections in our cohort originated from the Omicron rather than the Delta variant wave. The Omicron variant is more infectious and has been described as causing a higher symptom burden in children compared with other variants and with adults, possibly due to previous vaccination [26]. The Omicron variant was associated with an increase in respiratory symptoms compared with the wild-type/Alpha variant, with coughing, fever, sore throat, nasal congestion/runny nose, and fatigue being the most frequently described symptoms in children [26].
Finally, recent studies indicate disproportionately higher hospitalization rates in children after the emergence of Omicron [27,28], with more severe complications affecting the neurological and respiratory systems [29]. Surprisingly, in terms of severity, children with SARS-CoV-2 bronchiolitis had a similar, not better, clinical course of disease compared to children with RSV and co-infection bronchiolitis. Notably, they did not show statistically significant differences in the length of hospital stay and oxygen requirements. A significant difference was observed in the risk of requiring systemic steroids, inhaled bronchodilators, and parenteral hydration, with RSV bronchiolitis carrying a higher risk for all the above-mentioned interventions. Again, this additional risk may be due to the different practices in the management of children with acute respiratory infection in different hospitals. However, no significant differences were observed between the group of co-infected children and the group of RSV-infected children in the requirement for steroids, bronchodilators, and antibiotics, regardless of the hospital, suggesting that the different management practices did not influence the results. The study has some limitations: the involvement of only two centers in the same city, which does not allow the generalization of the results; the possible effects of the different practices in the management of acute respiratory diseases in different hospitals; and the small sample size, which allowed us neither to confirm the lower severity of bronchiolitis from SARS-CoV-2 nor to estimate its real incidence. Despite these limitations, our data show that, in children under two years of age with SARS-CoV-2 infection, respiratory symptoms are less severe than those observed in children with RSV infection or co-infection; in addition, SARS-CoV-2 infection is confirmed to be a less common cause of bronchiolitis than RSV. Surprisingly, despite what is reported in the literature [7], our data show that SARS-CoV-2 bronchiolitis is not mild. Our cohort of children with SARS-CoV-2-induced bronchiolitis showed a similar risk of a severe clinical course compared to children with bronchiolitis induced by RSV and co-infections. Similarly, in our population, the presence of viral co-infection was not associated with an increased risk of worse outcomes [17,18], in keeping with the findings by Halabi et al. [30], which point to a possible "negative" viral interference between SARS-CoV-2 and RSV. Furthermore, this discrepancy may also be explained by the small number of children with co-infection and needs to be resolved by studies with a larger sample size.

Conclusions

In conclusion, although SARS-CoV-2 infection at a pediatric age is generally milder than in adults, respiratory involvement also appears frequently in children infected during the Omicron variant wave. SARS-CoV-2 should be included in the list of viral pathogens responsible for bronchiolitis. Clinical attention should be paid to patients presenting with more severe respiratory symptoms to determine the risk of co-infection with other respiratory viruses. Finally, children with SARS-CoV-2 bronchiolitis may have a severe clinical course like that of children with RSV bronchiolitis.
Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee for Biomedical Activities, University of Naples Federico II, Naples, Italy (protocol code 226/21). Informed Consent Statement: Written informed consent was obtained from the parents of all children involved in the study. Data Availability Statement: The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request. Conflicts of Interest: The authors declare no conflict of interest.
6,531
2023-03-01T00:00:00.000
[ "Medicine", "Biology" ]
Performance Analysis of Feature Selection Techniques for Text Classification

The Internet is a convenient, highly available and low-cost publishing medium. Therefore, a significant amount of data is hosted and published using websites. In this domain, some of the data is directly available to the general public and some is not publicly distributed. Such data can be used by service providers and administrators for business intelligence and other similar applications. In the presented work, web data analysis, or mining, is the key area of investigation and experimental study. Web data mining can be divided into three major classes: web content mining, web structure mining and web usage mining. In this work, web content mining and web usage mining are taken into consideration. First, web content mining is explored; thus, a system is developed for a comparative performance study of different content feature selection techniques. In this experiment, the GINI index, Information Gain (IG), DFS and Odds Ratio are compared using a real-world collection of web pages. In order to classify the features extracted from the web contents, an SVM (Support Vector Machine) is applied. The comparative study demonstrates that IG and GI are suitable feature selection techniques that work well with the SVM classifier.

Introduction

The web is a backbone of new-generation technology, and research, education, medicine, engineering and a number of other areas benefit from it. Web mining is the process of automatically extracting information from web documents and services using data mining techniques. The main purpose of web mining is to discover useful data from the web using patterns. In the presented work, the different formats of web data are explored and web mining techniques are investigated to identify effective, efficient and accurate techniques for web data mining. Web mining can be classified into three types: web content mining, web usage mining and web structure mining [1]. A web mining process that extracts useful data from the web is called content mining. The contents include video, audio, text documents, structured records and hyperlinks. Web content is delivered to the user in the form of lists, images, texts, tables and videos. The number of web pages has increased to billions in the last few decades, and it is still increasing. Searching a query over billions of documents is a time-consuming task. By performing different mining techniques and by narrowing down the search space, content mining extracts the queried data, so it is easy to find the data required by the user [3]. Similarly, web usage mining explores the hidden knowledge in web access log files. Finally, structure mining helps to optimize the accessibility of web pages and the structure of the website [2]. While the volume of data from heterogeneous sources grows impressively, foresight and its strategies infrequently benefit from such available data. This work focuses on textual data and considers its utilization in foresight to address new research questions and incorporate different stakeholders. This textual data can be accessed and systematically analyzed through text mining, which structures and aggregates data in a largely automated way. By exploiting new data sources (for example Twitter, web mining), more actors and perspectives are incorporated, and more emphasis is laid on the investigation of social changes.
In the presented work, web content mining and web usage mining are the main areas of investigation. Thus, using a real-world application, the applicability of web mining techniques is demonstrated in this work. The proposed work is a promising approach for motivating researchers to employ different data mining techniques to solve real-world problems.

Literature Survey

The potential of text mining for foresight, considering different data sources, text mining approaches, and foresight methods, is explored by the authors of [4]. In [5], the authors extract patterns and reduce the data dimensions of BSS usage by exploring time series representation and clustering of BSS usage data. The paper [6] provides an over-three-decades-long (1983-2016) systematic literature review of clustering algorithms and their applicability and usability in the context of EDM. The goal of the review in [7] is to make available a comprehensive and semi-structured overview of WCM methods, problems, and proffered solutions; the authors consider 57 publications, including journals, conferences, and workshops, from the period 1999-2018. The authors of [8] give a brief overview of web mining, concerning its techniques, tools, and applications. Two different feature selection methods, Bag-of-Words and word counts, are investigated in [9] for spam review detection; different machine learning algorithms were applied, such as Support Vector Machine, Decision Tree, Naïve Bayes, and Random Forest. The research in [10] is an effort to address such uncertainty based on a data set derived from a publicly available profiling data set; the conventional text feature extraction approach is applied to identify the most significant words in the data set. The paper [11] proposes an improved global feature selection scheme (IGFSS), in which the last step of a common feature selection scheme is modified in order to obtain a more representative feature set. The paper [12] introduces a fuzzy term weighting approach that makes the most of the HTML structure for document clustering.

Proposed Methodology

The proposed investigation of web mining now focuses on the domain of content mining and the relevant feature selection techniques. This includes the design of the data model used to accomplish the desired objective. In this context, the web mining model is shown in Figure 1. The different components of the model are explained here.

1) Web Page Dataset: We downloaded a significant number of web pages on different subjects and designed a synthetic dataset. The data set is organized such that the subdirectories provide the class labels and the directory contents, i.e., the web pages, are treated as the data instances to be classified into target subjects or domains.

2) Data Preprocessing: The web data preprocessing involves three main steps: a) removal of HTML tags, b) removal of special characters, and c) removal of stop words.

3) Feature Selection: Feature selection helps to reduce the data dimension and regulates the requirements on computational resources such as time and memory. In this work, we involve four popular feature selection techniques used for web content mining.

a) GINI Index: Let S be a set of samples having k classes (c_1, c_2, ..., c_k). According to the classes, we define k subsets of the data, indexed by {1, 2, ..., k}. Then the GINI index of S can be defined as [16]:
$$GINI(S) = 1 - \sum_{i=1}^{k} p_i^2 \qquad (1)$$

where p_i is the probability of the i-th class, calculated as the proportion of the samples of S that belong to class c_i. The minimum value of GINI is 0, which indicates maximum utility of the data. Conversely, if the samples are distributed uniformly over the classes, GINI approaches its maximum value of 1, which indicates minimum utility of the data. In order to use the technique for text classification, it is used as a measure of data impurity with respect to the class labels associated with the data. Accordingly, a lower value of GINI indicates a higher applicability of the attribute for classification.

b) Information Gain: In text analysis, IG is used to measure the relevance of attribute A to class C. The higher the value of IG between class C and attribute A, the higher the relevance between class C and attribute A [17]:

$$IG(C, A) = H(C) - H(C \mid A)$$

where $H(C) = -\sum_{c \in C} p(c) \log p(c)$ is the entropy of the class and $H(C \mid A)$ is the conditional entropy of the class given the attribute. Since the Cornell movie review dataset has balanced classes, the probability of class C is equal to 0.5 for both positive and negative. As a result, the entropy of the classes H(C) is equal to 1, and the information gain can be formulated as:

$$IG(C, A) = 1 - H(C \mid A)$$

The minimum IG(C, A) is reached when the conditional entropy H(C|A) equals 1, that is, when class C and attribute A are not related at all. On the contrary, we tend to choose an attribute A that mostly appears in one class C, either positive or negative. In other words, the best features are the set of attributes that only appear in one class; for such an attribute, the class is fully determined by the presence of the attribute, the conditional entropy vanishes, and the maximum IG(C, A) is reached. The value of IG(C, A) thus varies between 0 and 1.

c) DFS: DFS is a probabilistic feature ranking metric. Its requirements emphasize that terms occurring frequently in all classes are irrelevant and should be ranked lower; terms that rarely occur in a single class and are not present in other classes are irrelevant and should also be ranked lower; and terms which frequently occur in a single class but do not occur in other classes are highly distinguishing and should be scored higher. The DFS metric assigns score values between 0.5 and 1.0:

$$DFS(t) = \sum_{j=1}^{M} \frac{P(C_j \mid t)}{P(\bar{t} \mid C_j) + P(t \mid \bar{C}_j) + 1}$$

where M is the number of classes, $P(C_j \mid t)$ is the probability of the j-th class given the presence of term t, $P(\bar{t} \mid C_j)$ is the probability of the absence of term t when class $C_j$ is given, and $P(t \mid \bar{C}_j)$ is the feature likelihood when classes other than $C_j$ are given.

d) Odds Ratio: The odds ratio is a likelihood ratio. Its numerator is the product of $t_p$ and $t_n$ and its denominator is the product of $f_p$ and $f_n$. It presents the likelihood of a feature's occurrence in a class. It prioritizes features having a high occurrence rate in a particular class but ignores features which frequently occur in other classes. It also does not take irrelevant and redundant features into account. Its mathematical formulation is:

$$OR(t) = \frac{t_p \cdot t_n}{f_p \cdot f_n}$$

The odds ratio performs well on small numbers of features.

4) Data Splitting: After feature selection, the system returns a feature vector which is used further for experimentation, i.e., learning with the supervised learning algorithm.

5) Training Set: The data splitting creates two subsets of the entire set of web content features; 70% of randomly selected data instances are used here for classifier training.

6) Testing Set: Additionally, the remaining 30% of randomly selected data is used for testing the trained model.
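To make the scoring concrete, the following is a minimal sketch of the GINI index of Eq. (1) and the information gain defined above, computed from per-class document frequencies of a term. This is our own illustration under the standard definitions, not the authors' implementation; all function and variable names are our own.

```python
import math

def entropy(probs):
    """Shannon entropy (bits) of a discrete distribution given as probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def gini(term_count_per_class):
    """GINI index of Eq. (1) over the class distribution of documents
    containing the term; lower values indicate a purer, more useful term."""
    total = sum(term_count_per_class.values())
    return 1.0 - sum((n / total) ** 2 for n in term_count_per_class.values())

def information_gain(docs_per_class, term_count_per_class):
    """IG(C, A) = H(C) - H(C|A) for binary term presence A.

    docs_per_class:        {class: number of documents in that class}
    term_count_per_class:  {class: documents in that class containing the term}
    """
    n = sum(docs_per_class.values())
    n_t = sum(term_count_per_class.values())      # documents containing the term
    n_a = n - n_t                                 # documents without the term
    h_c = entropy([m / n for m in docs_per_class.values()])
    p_given_t = [term_count_per_class.get(c, 0) / n_t
                 for c in docs_per_class] if n_t else []
    p_given_a = [(docs_per_class[c] - term_count_per_class.get(c, 0)) / n_a
                 for c in docs_per_class] if n_a else []
    h_c_given_a = (n_t / n) * entropy(p_given_t) + (n_a / n) * entropy(p_given_a)
    return h_c - h_c_given_a

# Balanced two-class example as in the text: H(C) = 1 bit, so IG = 1 - H(C|A).
print(information_gain({"pos": 500, "neg": 500}, {"pos": 80, "neg": 5}))
```

Ranking all candidate terms by such a score and keeping the top-k produces the reduced feature vector used in the subsequent steps.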
7) SVM Training: The SVM is a supervised learning model that is mostly used for the classification of binary data; it uses the concept of a hyperplane to differentiate between two classes.

8) Trained SVM: The SVM algorithm is trained on the features extracted by the different feature selection techniques. After training on the input features, the algorithm can identify similar patterns.

9) Classified Data and Performance: Based on the classification of the test data by the trained SVM, the system measures the performance in terms of accuracy and error rate. At the same time, it also computes the efficiency of the system in terms of time consumed and memory usage.

Implementation

Using the developed user interface, we have tried to convey the functional aspects of the proposed framework. Figure 2 shows the selection of the HTML data set, which is available in local storage. Figure 3 shows the implementation of feature selection based on the GINI Index, and Figure 4 shows the implementation of Information Gain based feature selection. Figure 5 shows the calculation of the DFS based feature selection technique, and Figure 6 shows the odds ratio based feature selection approach.

Result Analysis

The aim of this experimental scenario is to identify an efficient feature selection technique for implementing web content mining based applications. In this context, a comparative analysis is conducted between the different feature extraction techniques. Four parameters are used to compare the performance.

1) Accuracy: the ratio of the correctly classified patterns to the total patterns to be classified:

$$\text{Accuracy} = \frac{\text{correctly classified patterns}}{\text{total patterns}} \times 100 \qquad (9)$$

The accuracy of the implemented feature extraction techniques is given in chart 1 and table 1.

2) Error rate: the ratio of the misclassified test samples to the total samples for classification:

$$\text{Error rate} = \frac{\text{misclassified patterns}}{\text{total patterns}} \times 100 \qquad (10)$$

The error rates are given in chart 2.

3) Time Consumption: the amount of time consumed for classification:

$$\text{Time} = t_{\text{end}} - t_{\text{start}} \qquad (11)$$

The performance of the implemented feature selection algorithms in terms of time consumption is given in chart 3 and table 3.

4) Memory Usage: the amount of total memory utilized for the execution of an algorithm is measured here as the memory consumption.
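To make the evaluation pipeline concrete, the following minimal sketch (our own illustration using scikit-learn, not the authors' implementation) wires together the 70/30 split, the linear-kernel SVM, and the accuracy, error-rate, and time measures of Eqs. (9)-(11); the memory measurement is omitted for brevity.

```python
import time
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def evaluate(feature_vectors, labels):
    """70/30 split, linear SVM, and the measures of Eqs. (9)-(11)."""
    X_train, X_test, y_train, y_test = train_test_split(
        feature_vectors, labels, test_size=0.30, random_state=42)
    classifier = SVC(kernel="linear")
    start = time.perf_counter()
    classifier.fit(X_train, y_train)
    predictions = classifier.predict(X_test)
    elapsed = time.perf_counter() - start                     # Eq. (11): end - start
    accuracy = 100.0 * accuracy_score(y_test, predictions)    # Eq. (9)
    error_rate = 100.0 - accuracy                             # Eq. (10)
    return accuracy, error_rate, elapsed
```

Running this function once per feature selection technique, on the feature vectors each technique produces, yields directly comparable accuracy, error-rate, and timing figures.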
Conclusions and Future Work

In the presented work, web data mining is the main area of investigation. Therefore, web usage mining and web content mining are studied and the relevant methods are demonstrated. For web content mining, web pages are used in the experimentation because most web contents are published as HTML pages. These web pages include various formatting tags in addition to the text content; they are therefore complex to process and classify using any basic machine learning method. Hence, several feature extraction techniques are explored first, namely the GINI Index, Information Gain, DFS and odds ratio. All these methods essentially rank the text features in order to select the most appropriate ones according to the defined class labels. The experimental study offers different techniques and methods that are useful for the different kinds of data mining approaches used in web data mining. In future work, this experiment will be extended to find a suitable and efficient classifier for web content classification. The two selected feature selection techniques, namely the GINI Index and Information Gain, will be used with three popular supervised learning classifiers, namely SVM, SVR and k-NN.
3,236.8
2020-12-01T00:00:00.000
[ "Computer Science" ]
In-Plane Shear Characterization of Unidirectional Fiber Reinforced Thermoplastic Tape Using the Bias Extension Method

Through an improved characterization methodology, this work contributes to better prediction quality in composite forming simulations for unidirectional thermoplastic composites. A better understanding of the forming behavior will aid in the adoption of these lightweight materials in aerospace applications. The bias extension method was implemented and applied to cross-ply laminates from unidirectional carbon fiber reinforced thermoplastic materials to characterize the in-plane shear deformation resistance of the molten material. Two commercially available materials were characterized at three rates and three temperatures. The shear deformation was measured directly on the specimen throughout the test using a video extensometer, avoiding the use of the pin-jointed net assumption to relate deformation to the clamp displacement. In addition, the distribution of shear deformation over the specimen surface was characterized after the test using image analysis. The observed deformation was similar to the typical deformation for woven materials, with some agreement to the pin-jointed net assumptions but also some identified deviations. Localization of shear deformation along the fiber direction of the outer ply was observed to occur at approximately 15° shear angle. The proposed bias extension method directly relates the measured force to the deformation on the specimen, ensuring the characterization of the correct deformation mechanism. This key benefit of the bias extension method solves a common issue found in other characterization methods for in-plane shear on the molten material.

INTRODUCTION

Composite forming simulations have become a useful tool for designers of parts and processes. By predicting the outcome of the manufacturing process, simulation software allows for virtual part and process optimization that can avoid time-consuming and costly design iterations on the shop floor. Processing defects concerned with forming, consolidation, and shape distortions need to be predicted accurately for these tools to be effective. The most elaborate model descriptions are currently found in finite element-based forming simulations that consider the underlying physics, and that can accurately represent the tool and blank geometry, as well as the boundary conditions (Dörr et al., 2017). The accuracy of these models heavily relies on the quality of the material characterization, which serves as input for the constitutive models. This work aims to improve the experimental characterization for forming of unidirectional (UD) thermoplastic tape materials. The origin of this work is found in hot press forming of fiber-reinforced thermoplastic composites for the aerospace industry. With forming of woven materials being relatively well established, the current trend is towards the use of UD tapes because of benefits in terms of improved mechanical performance and potential for process automation (Sloan, 2019). However, the application of UD tape materials for larger or more complex parts, in terms of both geometry and layup, comes with the risk of inducing wrinkling defects during the process. Improved prediction of wrinkling in forming simulations would therefore aid the aerospace industry to adopt UD materials and use their potential in more challenging applications.
This article addresses the characterization methodology for the in-plane shear deformation of UD thermoplastic composites at forming conditions. An overview of previous research was provided by Haanappel and Akkerman (2014) and Harrison et al. (2005), indicating that various methods have been applied over a relatively large timespan with varying degrees of success. It appears that challenges include having a representative specimen at the right conditions, achieving proper load introduction, and triggering the appropriate material deformation. Methodologies have since continued to evolve. The torsion bar method as introduced by Haanappel and Akkerman (2014) initially featured an oscillatory loading with small deformation but was later refined with the use of a transient loading and large deformations. This transient torsion bar method became a frequently used method at the TPRC and was also applied in the work of Dörr et al. (2018). Another method based on the off-axis tensile test, introduced by Potter (2002a), was recently used by Wang et al. (2020) to characterize in-plane shear on a thermoset prepreg. Bending methods may also provide results that can be compared to in-plane shear methods since the dominant deformation in both is longitudinal intra-ply shear, which is clearly illustrated in the work of Dykes et al. (1998). Therefore, bending characterization methods like those featured in Margossian et al. (2015) and Sachs and Akkerman (2017) may also be of interest in the current context. A recurring issue with many of the characterization methods for composite forming is the inability to isolate the desired deformation mechanism. In practice, the measured response is caused by a combination of mechanisms, complicating subsequent constitutive modeling. As such, the validity of the imposed deformation mechanism will therefore receive additional attention in this research. To date, no standard is available for the in-plane shear characterization of UD materials, while the picture frame and bias extension methods are well established and accepted by the scientific community for fabric composite materials (Cao et al., 2008; Boisse et al., 2017). Early work by McGuinness and ÓBrádaigh (1998) shows that the picture frame method can be applied to thermoplastic UD prepreg, in both cross-ply and true unidirectional layups, but can be quite cumbersome. To the authors' knowledge, however, no reports have been published yet on the successful application of the bias extension method to thermoplastic UD prepregs. Interestingly, works by Potter (2002b) and Larberg et al. (2012) do show the applicability of bias extension on UD prepregs with a thermoset matrix in a cross-ply layup. The objective of this article is to demonstrate the application of the bias extension method on cross-ply UD thermoplastic composite specimens at forming conditions. The novelty of this work lies in the combination of the method and material. A second aspiration is to provide characterization datasets that can be used to develop constitutive models for input in composite forming simulations. The benefit of the bias extension method is the use of a representative specimen at forming conditions with loading that is representative of real forming processes.
However, the typical pin-jointed net behavior observed in woven fabrics with interlocking fibers is not guaranteed in the case of UD materials because of specific spurious deformations that can occur for these materials, such as flow transverse to the fiber direction and relative slip between plies. Therefore, specimen deformations have been analyzed in this research to see if the appropriate in-plane shear deformation mechanism was dominant. This article is an extension of the preliminary results presented at the ESAFORM 2021 conference (Brands et al., 2021) and expands them with larger characterization datasets that include effects of temperature, insight into repeatability, improved clamping, and analysis of the onset of localization.

MATERIALS AND METHODS

Bias Extension Method

A brief general description of the bias extension method is provided here. The method features a specimen with a rectangular gauge section and fibers in the 45° and −45° directions with respect to the loading direction, as is schematically illustrated in Figure 1A. The initial gauge length H needs to be at least twice the specimen width W to prevent fibers from making a direct connection between the clamps. The specimen is clamped at either end and extended in the length direction. Figure 1B shows that the typical deformation consists of a homogeneous shear deformation in the center (I), two undeformed triangular regions near the clamps (III), and four triangular transition zones (II) with half the amount of deformation compared to the center section. The deformation often closely resembles that of a pin-jointed net (PJN), which assumes inextensible fibers rotating at the crossover points. According to the pin-jointed net assumption, the shear angle γ and shear rate γ̇ can be calculated according to

$$\gamma = \frac{\pi}{2} - 2\cos^{-1}\left(\frac{H - W + u}{\sqrt{2}\,(H - W)}\right) \qquad (1)$$

$$\dot{\gamma} = \frac{2\dot{u}}{\sqrt{2(H - W)^2 - (H - W + u)^2}} \qquad (2)$$

respectively (Boisse et al., 2017), using gauge length H, specimen width W, displacement u, and test rate u̇. The ability to introduce in-plane shear deformation in a straightforward manner is clearly a benefit of this method. However, having two distinct deformations in zones I and II does complicate the analysis.
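As a quick numerical illustration of Eqs. (1) and (2), consider the sketch below; the function names are our own, and the example uses the nominal 170 × 80 mm gauge dimensions of the specimens described later, for which the 27.6 mm end displacement corresponds to the theoretical 45° shear angle.

```python
import math

def pjn_shear_angle(u, H, W):
    """Shear angle (rad) in the central zone I according to Eq. (1)."""
    D = H - W
    return math.pi / 2 - 2 * math.acos((D + u) / (math.sqrt(2) * D))

def pjn_shear_rate(u, u_dot, H, W):
    """Shear rate (rad/s) according to Eq. (2), at displacement u and clamp speed u_dot."""
    D = H - W
    return 2 * u_dot / math.sqrt(2 * D ** 2 - (D + u) ** 2)

# Nominal gauge dimensions H = 170 mm, W = 80 mm; end displacement 27.6 mm.
print(math.degrees(pjn_shear_angle(27.6, 170.0, 80.0)))   # ~45.0 deg
print(pjn_shear_rate(27.6, 1000.0 / 60.0, 170.0, 80.0))   # rad/s at 1,000 mm/min
```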
Materials

Two commercially available carbon fiber reinforced UD thermoplastic tapes were used in the experiments. Toray TC1225 contains T700 fibers in a low-melt PAEK matrix (Toray Advanced Composites, 2020), and Solvay APC has AS4D fibers in a PEKK-FC (fast crystallizing) matrix (Solvay Composite Materials HQ, 2017). Both materials were supplied in a 12" roll format. The relevant material characteristics are collected in Table 1.

Specimen Preparation

Laminates, comprising 8 plies in a [45/−45]_2s layup, were press consolidated inside a 12" picture frame mold. TC1225 was consolidated at 365°C and 15 bar for 30 min, while APC was consolidated at 376°C and 20 bar for 20 min. Three laminates, separated by caul sheets, could be processed in a single cycle. The Marbocoat 227CEE release agent was applied to all mold surfaces for easy release. After consolidation, the laminate edges were trimmed using a water-cooled diamond-coated saw to enable vacuum clamping of the flat laminate on a milling machine. The average laminate thickness after consolidation was 1.124 ± 0.015 mm and 1.130 ± 0.018 mm for TC1225 and APC, respectively. Three bias extension specimens were cut from each laminate using a CNC milling machine, with specimen dimensions according to Figure 2A. The specimen length direction is aligned with the edge of the laminate to maintain the [45/−45]_2s layup. A dog-bone shaped specimen was used to increase the reliability of clamping and improve load introduction.

A paint pattern was applied to enable deformation measurements during and after testing. Masking tape was applied over the entire specimen, and laser cutting was used to precisely cut a pattern for a paint mask. The laser cutting settings were tuned to cut the masking tape only. Then, heat-resistant spray paint was used to color the lines in silver and the dots in white; the result is shown in Figure 2B. The reflective silver paint was filtered out by the video extensometer algorithm, enabling improved detection of the white dots for strain measurements. Reliable clamping was promoted by applying metal foil, 25 μm thick, over the clamped areas with 10 mm of excess foil on the sides. The excess material was removed in the gauge section, and the remaining edges were folded over twice to prevent squeeze out during testing. Figure 2B shows the prepared specimen with metal foil clamping protection. The specimens were dried at 120°C overnight (>12 h) to minimize deconsolidation during testing, adopted based on the recommendation of Slange et al. (2018). Dried specimens were tested within 8 h after removing them from the oven. The authors note that it is possible to apply the method without drying but have observed minor delaminations, which could lead to inhomogeneous deformation and scatter in the results. Because drying could have a pronounced effect on the results, the current characterization is only valid for forming processes that employ dried laminates. The influence of material conditioning on forming behavior requires additional research.

Experimental Method

An Instron universal testing machine was fitted with clamps for bias extension, as shown in Figure 3. The clamps consist of two flat metal plates, 100 × 50 mm² in dimension, with one moving side to close the clamp manually using a bolt. A climate chamber surrounds the setup to enable heating, with a rod connecting the upper clamp to the 1 kN force transducer outside the chamber. An Instron AVE2 video extensometer is connected to the door of the climate chamber to record the deformation of the specimen through the looking glass. For each test, two thermocouples were attached to the surface of the specimen to verify the temperature at the top and bottom of the specimen; the locations of the thermocouples are indicated in Figure 2B. Then, the specimen with thermocouples was quickly placed between the clamps in the preheated climate chamber. The clamps were tightened by hand with medium force to prevent excessive squeeze out of the tape material. Figure 3 shows a clamped specimen (after testing). The specimen was heated for 4 min. Typically, the melting temperature was reached within the first minute, while the last minute is considered dwell time, as temperature changes were then minimal. The desired temperature was achieved within ±5°C on the thermocouples, with the bottom always having a slightly higher temperature than the top due to the location of the specimen in the convection oven. The oven air temperature was set 5-10°C higher than the test temperature to reach the desired temperature values on the thermocouples. A minor change was made to the heating procedure for APC at 345°C, since this temperature is just above the melting temperature of the polymer. For this condition, the specimens were first heated to 350°C before slowly settling towards 345°C while keeping the total heating time fixed at 4 min. This modification was employed to ensure that the specimen was fully molten before testing.
After the predefined heating time, a constant speed was imposed on the top clamp up to a displacement of 27.6 mm, applying a theoretical 45° shear deformation according to Eq. (1) under the PJN assumptions. Figure 2C and Figure 3 show a deformed specimen after testing. The extension force and the resulting shear deformation in the center of the specimen (see Section 2.5.1) are considered the main outcomes of this characterization test. Additionally, after reaching the final displacement, the force and deformation were measured for another 10 s to record the relaxation behavior. The test matrix employed in this study includes the two beforementioned materials, three distinct temperatures, namely 345, 365, and 385°C, and three test rates, namely 100, 400, and 1,000 mm/min. Three specimens were tested for each condition to verify repeatability.

Extensometer

The video extensometer tracked four dots on the specimen throughout the bias extension test, measuring the longitudinal and transverse strain, ε_L and ε_T, respectively, over an area in the center of the specimen. The spacing of the dots on the undeformed specimen is shown in Figures 2A,B. The shear angle γ, defined as the change in angle between the two fiber directions, is readily calculated from the strain values using γ = ε_L − ε_T in radians. The calculated shear angle is effectively the average shear angle over the area between the dots. The direct measurement of deformation and deformation rate on the specimen circumvents the use of the PJN assumption to relate the deformation to the displacement. Any inaccuracies from clamping or loading of the specimen that could lead to minor changes in the specimen deformation are thereby prevented implicitly.

Shear Angle Distribution

A second deformation measurement was included that quantifies the distribution of shear deformation over the gauge section of the specimen. This additional analysis step is not required for the characterization of the material but was added to the research in order to learn about the deformation behavior of cross-ply UD laminates. Any spurious deformations detected using this analysis could argue against the pin-jointed net assumptions that are typically used in modeling of the bias extension test. After testing, pictures were taken of the deformed specimens at 12× optical zoom to obtain orthographic images. A pattern of dots in the background was used to infer the scale of the image, whereafter a mesh was constructed between the cross sections of the silver lines. The deformation can be quantified by comparing the deformed mesh against the known locations of the silver lines before deformation. The same procedure was used and described in a previous publication (Brands et al., 2021). The shear angle can be quantified by calculating the deformation gradient for each triangular element in the mesh using

$$\mathbf{F} = \begin{bmatrix} \mathbf{v}_{12} & \mathbf{v}_{13} \end{bmatrix} \begin{bmatrix} \mathbf{V}_{12} & \mathbf{V}_{13} \end{bmatrix}^{-1}$$

where v_ij and V_ij are the column vectors representing the distance between nodes i and j in two dimensions for the deformed and undeformed configurations, respectively; the subscript indicates the nodes used to construct the vector. The deformation gradient relates both fiber directions in the deformed state, that is, a and b, to the undeformed fiber directions A and B using

$$\mathbf{a} = \mathbf{F}\mathbf{A}, \qquad \mathbf{b} = \mathbf{F}\mathbf{B}$$

Subsequently, the shear angle γ in radians is calculated using

$$\gamma = \frac{\pi}{2} - \cos^{-1}\left(\frac{\mathbf{a} \cdot \mathbf{b}}{\|\mathbf{a}\|\,\|\mathbf{b}\|}\right)$$
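The element-wise evaluation just described can be condensed into a few lines; the sketch below is our own helper, not the authors' code, and assumes ±45° undeformed fiber directions, as in the layup used here.

```python
import numpy as np

def element_shear_angle(X, x):
    """Shear angle (rad) of one triangular mesh element.

    X, x: (3, 2) arrays of node coordinates in the undeformed and deformed
    configurations. Two edge vectors from node 1 give the deformation
    gradient F; the +/-45 deg fiber directions are pushed forward with F,
    and the change of the angle between them is the shear angle."""
    V = np.column_stack((X[1] - X[0], X[2] - X[0]))   # undeformed edge vectors
    v = np.column_stack((x[1] - x[0], x[2] - x[0]))   # deformed edge vectors
    F = v @ np.linalg.inv(V)                          # deformation gradient
    A = np.array([1.0, 1.0]) / np.sqrt(2.0)           # +45 deg fiber direction
    B = np.array([-1.0, 1.0]) / np.sqrt(2.0)          # -45 deg fiber direction
    a, b = F @ A, F @ B
    cos_ab = float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.pi / 2 - np.arccos(np.clip(cos_ab, -1.0, 1.0))
```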
Onset of Localization

Finally, a third deformation analysis involves processing the saved footage from the video extensometer to examine the onset of localization. This inhomogeneous deformation is readily observed from the discontinuous silver lines on the deformed specimen after the test, as illustrated in Figure 2D. However, the previous two methods are unable to identify the onset and progression of localization because it occurs on a smaller scale than considered in those analyses. Still pictures from the video extensometer were saved approximately 10 times per second during testing. The silver lines are hardly visible in these pictures due to the built-in reflection filtering of the system but could be retrieved by contrast stretching based on a portion of the picture containing only lines. This allows observation of any discontinuities in the lines during testing. The accompanying shear angle was retrieved by calculating the longitudinal and transverse strain between the white points, followed by interpolation in the measurement data. The shear angle at which the onset of localization occurs can be estimated from the processed picture in which significant discontinuities in the silver lines become apparent.

Data Averaging

Three repetitions for each setting were included in the test matrix. Experimental data from the same settings were interpolated over 50 equidistant points in time to enable the calculation of a mean and standard deviation, simplifying visualization. The results for individual specimens are available in the dataset published alongside this article. The error bars represent one standard deviation around the mean value unless stated otherwise.
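This averaging procedure amounts to resampling each repeat onto a common time grid; a minimal sketch of such a helper follows (our own naming, under the assumption that the curves are aligned at t = 0).

```python
import numpy as np

def average_repeats(times, forces, n_points=50):
    """Resample each repeat onto a common grid of 50 equidistant points
    and return the grid, the mean curve, and the standard deviation.

    times, forces: lists of 1-D arrays, one pair per repeat specimen."""
    t_end = min(t[-1] for t in times)                 # common time span
    grid = np.linspace(0.0, t_end, n_points)
    stacked = np.vstack([np.interp(grid, t, f) for t, f in zip(times, forces)])
    return grid, stacked.mean(axis=0), stacked.std(axis=0)
```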
RESULTS

Figure 4 shows the average force per ply over the average shear angle, as measured by the video extensometer, for both materials at the three rates and temperatures. The standard deviation is indicated by the shaded area in the figure, suggesting good repeatability of the experiment for most settings. The results for both materials show a significant dependence of the measured force on the rates and temperatures applied, with higher forces measured at higher rates or lower temperatures. It is noted that the largest standard deviations are found for APC at 345°C, which also used a modified heating procedure because this temperature is close to the melting temperature of the material.

Bias Extension Characterization

The curves in Figure 4 all have a signature s-shape, starting with a steep slope from the origin with a smooth transition in the first 5-10° of shear towards a region of low to moderate slope, which extends up to 20-25°, whereafter the slope increases again. The smooth transition at small deformations is a bit quicker for the TC1225 material compared to the APC material, whereas the initial slope appears to be steeper. The slope at a 10° shear angle is positive for TC1225 at 345°C, near zero at 365°C, and even shows a minor decline at 385°C. For APC, all slopes at 10° of shear are positive but much higher at 345°C than at the other two temperatures. The increase in force observed after 25° of shear is most significant for lower temperatures and higher rates. The obtained characterization results demonstrate the applicability of the bias extension method for cross-ply UD laminates in molten condition. Repeatable results with acceptable variability can be obtained, which show a clear dependence of the material behavior on the temperature and test rate. Large in-plane shear deformation was introduced, which will be analyzed more in-depth in the next sections. Figure 5 shows the shear angle, as measured by the extensometer, related to the cross-head displacement. The black dashed line indicates the expected shear angle according to the pin-jointed net theory through Eq. (1). The shaded areas indicate one standard deviation around the mean based on three repetitions. Due to the variability, no clear correlation is found between the test rate and the deformation. However, the amount of shear deformation recorded by the extensometer does seem to decrease at higher temperatures, which is most pronounced for the APC material. In the case of TC1225, the observed deformation is consistently lower than predicted by the pin-jointed net assumptions, with the deviation becoming more significant at large deformations, which is also consistent with earlier findings (Potter, 2002b; Machado et al., 2016). The deformation in APC follows the same trend as TC1225 but is much more temperature dependent; at 345°C, the measured shear angle even exceeds the pin-jointed net prediction.

Extensometer

The deformation measured on the surface of the specimen is believed to be representative of the deformation throughout the thickness of the specimen, since the edges of the specimens did not show significant signs of ply slip. It is also noted that global out-of-plane buckling of the specimen, which could invalidate the measurement result, did not occur during any of the conducted experiments. Minor local out-of-plane buckling was observed on the final deformed specimens in a few locations but is regarded as insignificant in the analysis of the specimen deformations.

Shear Angle Distribution

An overview of all obtained shear angle distributions after the test is shown in Figure 6, with some enlarged examples shown in Figure 7. The typical deformation for bias extension, according to the pin-jointed net, is readily observed, with the triangles near the clamps (III) having hardly any deformation and the four intermittent areas (II) having half the amount of shear compared to the center (I). Figure 8A shows the average shear deformation in the central area I for all conditions tested. Interestingly, it is again observed that the final shear angle in region I depends strongly on temperature for APC and shows only a minor reduction at 385°C for TC1225. The same was concluded from the video extensometer results. In fact, the average shear angle in area I from the current analysis, which is performed after the test, differs little from the deformation measured at the end of the test. Figure 8B shows the standard deviation of the shear angle, as a measure of the inhomogeneity, in the central area I for all conditions tested. A standard deviation of approximately 1-3° was found for the shear deformation in the central area, with no clear influence of the test temperature or rate. The highest inhomogeneities are found for APC at 345°C, which all seem to show a noticeable pattern in Figure 6, with consistently higher shear angles near the bottom clamp. Presumably, this is related to the temperature distribution over the specimen, with the bottom being consistently 6-8°C warmer than the top. Apparently, this temperature difference played a significant role in combination with the close proximity to the melting temperature. Perhaps a more even temperature distribution could have resulted in more homogeneous deformation. The inhomogeneous deformation found for the other settings could not be related to the test conditions directly and might arise from variability in clamping and loading or from the material itself.
The distribution of shear deformation over the different areas of the specimen is shown in Figure 9 for the two materials and three temperatures. Results were averaged over the three rates and three repetitions to focus on studying the effect of temperature. The shear angle in area I was already discussed, but the same trend with temperature is also observed for area II. According to the PJN, the ratio between the shear angles in areas I and II should equal exactly two, but Figure 9B shows that this ratio was found to be slightly less than two in the measurements, indicating that the shear angle in area II is slightly more than half of the value in area I. The PJN also predicts zero deformation in area III, whereas Figure 9A shows that some minor deformation was still observed. A higher temperature seems to allow for more spurious deformation in area III, which could be a possible reason for a lower shear angle in area I. Also, the amount of deformation in area III seems to correlate with a lower ratio between the shear angles in areas I and II. Overall, the distribution of deformation over the bias extension specimens did not reveal any major deviations from the pin-jointed net theory, i.e., the typical deformation regimes are observed and show reasonably homogeneous deformation with a shear angle ratio of approximately two between the areas. The test temperature does seem to have a significant effect on the amount of deformation and its distribution, with spurious deformations becoming more predominant at higher temperatures.

Onset of Localization

The gridlines on the deformed specimen show discontinuities that indicate inhomogeneous shear deformation along the fiber direction of the outer ply. Figures 2C,D show one of the specimens as an example, where jagged lines indicate localization at higher shear angles. The localized deformation occurs on a scale smaller than the grid cell size and with frequent spacing, such that the macroscopic deformation appears homogeneous. However, on a micro-scale, the deformation will no longer be uniform, which should affect the way the material's in-plane shear resistance is perceived. All specimens tested had a comparable degree of localization, being most prominent in the central area but also present in the four adjacent areas. Footage saved by the video extensometer was post-processed to detect the lines otherwise filtered out by the system. An accompanying shear angle value was found by using the deformation between the dots and interpolation within the measured data. Figure 10 shows a selection of the resulting images from a single specimen as an example. The analysis provides a visual means to detect the onset of localization in the center area throughout the bias extension test. Unfortunately, developing a quantitative measure for the localization proved troublesome due to the low picture quality after contrast enhancement; therefore, the onset of localization could only be estimated manually from these results. Visual inspection of the obtained images reveals that the onset of localization, detected as pronounced jaggedness of the lines, happens approximately halfway through the measurement. The lines are generally still smooth and continuous in images up to 10° of shear, while images above 20° of shear show clear signs of localization. At 15° of shear, the images for APC do not yet indicate much localization, whereas the images for TC1225 already show minor signs of localization.
Therefore, 15° of shear would seem a reasonable estimate for the onset of localization, with the remark that it occurs at smaller deformation for TC1225 than for APC. No correlation was found between the onset or degree of localization and the test parameters.

Relaxation

Relaxation measurements were obtained by recording data for an additional 10 s after ending the bias extension test. Figure 11 shows how the pulling force diminishes over time as the clamps remain stationary. All curves in this figure have a steep initial drop and a gradual transition to a seemingly constant residual force, once more highlighting the viscoelastic material behavior. The bias extension method can thus capture more information about the material behavior by incorporating the relaxation measurement. However, the deformation history at the start of relaxation is different for each specimen because the shear deformation is not directly controlled. Moreover, different rates and temperatures also result in a different stress state at the start of relaxation, further complicating the analysis. For this reason, additional research is recommended on the incorporation of a relaxation measurement into the bias extension test method, with proper regard for the deformation history. However, the relaxation results can still be used in the design and development of advanced viscoelastic constitutive models for in-plane shear if the unique deformation history of each specimen is taken into account. The dataset published alongside this article can be used as a starting point for this purpose. The main focus of a constitutive model for in-plane shear should be on the start-up and steady deformation processes, such as those encountered during press forming. However, path dependencies as observed from the relaxation measurements may also be relevant for composite deformations during forming.

DISCUSSION

In the following, the applicability of the method, the implementation of heating and clamping, the obtained characterization results, and the observed deformations are discussed consecutively.

Application of the Bias Extension Method

This article is aimed at demonstrating the applicability of the bias extension method to characterize in-plane shear for thermoplastic UD materials. However, Haanappel and Akkerman (2014) have previously stated that their attempts to use the bias extension method on a similar type of material were unsuccessful due to the weak integrity of the laminate at high temperature, despite the use of a cross-ply layup. A comparison between this previous implementation and the current one has taught us that the measurement hardware, specimen design, and material conditioning all play an important role in the successful application of this methodology. Most critical appear to be the heating and clamping of the specimen, which will be discussed in more detail in the next section. The increased specimen thickness and the use of the drying treatment are also believed to have promoted laminate integrity but were not found to be the main cause of a successful application. The in-plane shear kinematics observed for the stack of UD plies in the presented bias extension results is similar to the kinematics of a woven fabric and closely resembles a pin-jointed net up to relatively large deformations. A singular UD ply in melt lacks the structural integrity in the transverse direction to undergo the trellis shear deformation; thus, an additional ply with fibers in the transverse direction is required to form a cross-ply layup.
The bonding between the adjacent plies ensures that the individual plies remain intact under shear loading and causes the deformations of the plies to be interconnected. The friction between UD plies therefore plays a similar role to the interlocking of tows in a woven fabric. The in-plane loading of the cross-ply specimen in the bias direction (in the middle between the two fiber directions) thereby only promotes the in-plane shear deformation, because the other in-plane directions are dominated by inextensible fibers. As mentioned in the introduction, a variety of other measurement methods have been used in past research to measure the in-plane shear resistance of thermoplastic UD composite materials (Harrison et al., 2005; Haanappel and Akkerman, 2014). None of these methods provide the means to validate to what extent the measured response is caused by the desired deformation mechanism. A key benefit of the proposed bias extension methodology is that the measured force is directly related to the desired in-plane shear deformation, which is verified using an extensometer and image analysis. The certainty of measuring the correct deformation and being able to quantify the amount of deformation in-situ provides reliable characterization data. Additionally, the bias extension method is conceptually simple to implement and well established for the in-plane shear characterization of composite materials with woven reinforcements.

Implementation of Heating and Clamping

Specimen conditioning and load introduction have proven difficult during the development of the presented bias extension method, and both will be briefly discussed in this section. The first challenge was to heat the specimen homogeneously to the desired temperature before the start of the test, while at the same time preventing degradation of the polymer by keeping the heating and dwell times to a minimum. Overall, the high heating rate and short dwell times used came with the risk of an inaccurate and inhomogeneous temperature distribution. In particular, the area in the vicinity of the clamps might heat slowly due to the large thermal mass of the clamps. In addition, the very design of the convection oven used already gave rise to an inherent temperature variation. Within the confines of the hardware, a heating time of 4 minutes was achieved with an approximate dwell time of 1 min and a temperature difference of roughly 8°C between the thermocouples on the top and bottom of the specimen. For future characterization work, it is advised to perform the experiments in a nitrogen environment to allow for longer dwell times that could potentially minimize the temperature variations. The second challenge was the actual clamping of the (molten) specimen to achieve proper load introduction. The molten thermoplastic polymer will easily adhere to the clamps, making swapping of specimens time-consuming. Hence, a disposable material was used in between the clamp and the specimen. The metal foil was found to work well; however, folding of the edges was required to prevent squeeze out of the polymer under the applied clamping force, making for a cumbersome procedure and adding a significant amount of time to the specimen preparation. Despite the folded edges, a low clamping force was still required to prevent squeeze out from the clamps into the gauge section. Load introduction was enhanced with the use of a dog-bone shaped specimen and a ratio of H/W close to 2:1, with the final specimen dimensions based on the available clamping hardware.
Finally, the specimens were dried in order to promote their structural integrity in melt by reducing deconsolidation and the formation of inter-ply voids. The drying step only makes for a representative specimen if the same procedure is also applied during manufacturing processes, which is not always the case in industrial settings. Different implementations of the clamping and loading of the specimen could lead to different results; hence, in order to work towards a standardized method, more research is required to realize reliable boundary conditions in bias extension. Alterations of the specimen dimensions, the number of plies, and the dog-bone shape have not been part of this research but could provide useful insights into the robustness of the methodology.

Characterization Results

The characteristic s-shaped curves observed in the characterization results of Figure 4 have also frequently been found for other materials with a viscoelastic matrix at forming conditions. For example, Wang et al. (2014) measured carbon fiber woven fabrics with PPS and PEEK matrices and found similarly shaped curves. Larberg et al. (2012) also found similarly shaped curves for bias extension tests on cross-ply specimens of UD carbon epoxy prepreg, both at room temperature and at 70°C. Picture frame experiments by McGuinness and ÓBrádaigh (1998) on the UD-C/PEEK material in molten condition showed a gradual increase of the force towards a steady state, which is reached after about 5° of shear. The current characterization results show comparable start-up behavior within 5-10° of shear. A direct comparison between results from these various characterization measurements is complicated by the use of different specimen dimensions, material thicknesses, and testing conditions. The bias extension results presented in this article were obtained in a similar way to previous results presented at the ESAFORM 2021 conference (Brands et al., 2021). Figure 12 shows a direct comparison of the results for the two materials in this work at 365°C against the Toray TC1320 UD/C/PEKK material at 375°C, with a slightly longer specimen, from the previous work. A significantly higher pulling force was required to deform the TC1320 material, an unexpected result given how similar the material characteristics are in the data sheets. Specifically, Solvay APC and Toray TC1320 are both carbon fiber reinforced materials using a PEKK matrix with the same fiber volume fraction; therefore, the differences must be sought in the exact constituents used, their interface, or their distribution. The influence of degradation, although unlikely within the short heating times applied, was not investigated in the current research. The datasets presented offer detailed responses for cross-ply laminates of the UD material at different rates and temperatures with predominantly trellis shear deformation. The material behavior under in-plane shear deformation is fundamental to this response but is not easily extracted in terms of stress as a function of strain and strain rate due to the presence of two distinct deformation zones having complex non-linear material behavior. The in-situ deformation data obtained using the video extensometer combined with the assumed distribution according to the PJN give a complete and reasonably accurate picture of the local strains and strain rates, but the resulting distribution of stresses is the great unknown.
Future work is aimed at understanding the material's in-plane shear behavior through a constitutive modeling study based on the bias extension results presented here. With the improved characterization quality of the bias extension method and appropriate constitutive models for the in-plane shear material behavior, we aim to improve the predictive quality of composite forming simulations for thermoplastic UD materials.

Deformation

The deformation measurement techniques applied in this research provide useful insight into the deformation behavior of thermoplastic UD composite material. Overall, the in-plane shear deformation observed on cross-ply laminates of the UD material closely resembles the typical deformation of woven fabrics in bias extension. The zones of homogeneous trellis shear deformation, with a near-exact factor-of-two difference in shear angle, exemplify this similarity. Also, the near-linear relation between displacement and shear angle in Figure 5 matches common findings for woven composites; likewise, comparable discrepancies with the PJN prediction were found. The distribution of shear deformation across the different shear areas was found to be homogeneous on a macroscopic scale and consistent with the 2:1 ratio in shear deformation from the PJN. Therefore, measurement of the central shear angle seems to suffice to accurately determine the deformation over the full specimen. Hence, any mismatch with the PJN is circumvented by the in-situ measurement of the shear angle using the video extensometer and the simple application of four dots on the specimen, a procedure that is recommended for future implementations of the bias extension experiment.

Any deviation from the PJN deformation must be related to spurious deformations that negate the underlying assumptions. Flow transverse to the fiber direction is a possible source of spurious deformation specific to UDs, as it could invalidate the bidirectional inextensibility assumption of the PJN. However, neighboring plies in the cross-ply laminate used in this research hinder transverse flow due to the difference in fiber direction and the inherent ply-ply friction. Slip between plies is another source of spurious deformation that is specific to cross-ply UD laminates and would invalidate the no-slip condition of the PJN. Moreover, commonly identified spurious deformations in bias extension, such as inaccuracies in clamping, inter-tow slippage, and non-discrete transitions between shearing zones due to the material's bending stiffness, are also expected to play a role for UD laminates. On the other hand, woven fabrics might suffer from extensibility due to fiber undulations, a spurious deformation that is likely not associated with the UD material.

[Figure 12: Measurement data from this article on Solvay APC and Toray TC1225 at 365°C and specimen dimensions 170 × 80 mm², compared to previous data from Brands et al. (2021) on Toray TC1320 UD/C/PEKK at 375°C and dimensions 180 × 80 mm².]

The in-situ shear angle measurements in Figure 5 showed clear deviations from the PJN, which increased with testing temperature. Apparently, the resistance of the spurious deformations changes relative to the in-plane shear resistance at different temperatures.
In-plane shear deformation and spurious deformations such as transverse flow, ply-ply slip, and in-plane bending are all dominated by the same matrix behavior. However, the fiber-matrix distribution and the fiber-matrix interface could cause the influence of temperature on matrix viscosity to affect these mechanisms differently. The relation between matrix viscosity and macroscopic properties is not yet well understood, but the spurious deformations appear to become more compliant at higher temperatures compared to in-plane shear. Inhomogeneity in the temperature distribution, as recognized in the presented results, could also cause deviations from the PJN. However, the results from this research were inconclusive on the origin of the mismatch with the PJN and could not isolate the contributions of individual spurious deformations. Further research into the relation between constituent properties and macroscopic behavior could contribute to a better understanding of forming mechanisms in composites and to improvements in material development.

Inhomogeneous shear deformation was also found on the mesoscale, in the form of localized shear deformation along the fiber direction in the outer ply. The localization was estimated to occur after approximately 15° of shear deformation in the center region. Similar localization was observed by Potter (2002b) and Larberg et al. (2012) in bias extension measurements on cross-ply thermoset UD laminates. The localization observed in this study had no pronounced effect on the macroscopic deformation and might be ignored for modeling purposes using continuum methods. However, it is important that the bias extension test performs the characterization under conditions representative of actual processing. As such, additional research is required to study the occurrence of localization during manufacturing in order to validate the characterization data at the large deformations presented. Nevertheless, shear deformation above 15° is not commonly found in industrial products made from these materials, making the data at smaller deformations more relevant for the application. Micro-mechanics approaches to model the material's in-plane shear resistance may be applied most easily up to the initiation of localization at 15° of shear, because homogeneous shear deformation can be assumed.

CONCLUSION

This research demonstrated the successful application of the bias extension method to cross-ply laminates of thermoplastic UD material to characterize its in-plane shear deformation resistance at forming conditions. This first application of the bias extension method to unidirectional thermoplastic composite material provides a conceptually simple experiment with representative conditions, in which the proper deformation mechanism is characterized. Characterization datasets are presented for two commercial thermoplastic UD tapes, with data at three test rates and three temperatures. In addition, the data have been made available for use by other researchers. The response curves show a typical s-shape that has also been observed for other composites with a viscous matrix. The observed material behavior is viscoelastic, being both rate and temperature dependent. The in-plane shear deformation introduced in the stack of unidirectional plies showed a great similarity to the typical deformations of woven composites.
The center shear angle was measured throughout the test using a video extensometer, revealing large shear deformations with the final shear angle depending on the test temperature. This direct measurement of deformation increased the accuracy of the method by implicitly accounting for the occurrence of spurious deformations. The relation between shear angle and displacement deviated only slightly from the pin-jointed net assumption, and the distribution of deformation over the specimen was found to be consistent. The localization of shear deformation along the fiber direction in the outer ply was found to become pronounced after 15° of shear in the center area. The localization did not significantly affect the homogeneity of the macroscopic shear deformation but should be taken into account for microscopic considerations.

The presented bias extension method provides reliable characterization results for the in-plane shear behavior of thermoplastic UD materials in molten condition. The key benefit compared to existing methods is that the measured force can be related directly to the measured deformation of the specimen, providing certainty that in-plane shear was measured. The improved characterization quality for the in-plane shear of thermoplastic UD composite tapes at processing conditions can be used to develop appropriate constitutive models that improve the prediction quality of composite forming simulations.

DATA AVAILABILITY STATEMENT

The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: 4TU.ResearchData repository, "Data underlying the article: In-plane shear characterization of unidirectional fiber reinforced thermoplastic tape using the bias extension method," https://doi.org/10.4121/18586019.

AUTHOR CONTRIBUTIONS

DB: Conceptualization, methodology, investigation, and writing-original draft. SW, WG, and RA: Conceptualization, supervision, project administration, and writing-review and editing. All authors contributed to the manuscript revision and have approved the submitted version.

FUNDING

This work is part of the research program "MaterialenNL" with project number 17880, which is
LncRNA MALAT1 promotes high glucose-induced inflammatory response of microglial cells via provoking MyD88/IRAK1/TRAF6 signaling

Although a large number of studies have confirmed at multiple levels that diabetes mellitus (DM) promotes cerebral ischemia/reperfusion (I/R) injury, the precise mechanism is still unclear. A cerebral I/R injury model in diabetic rats was established. The neurological deficit scores and brain edema were monitored at 24 and 72 hours after injury. The peri-infarct cortical tissues of the rats were isolated for molecular biology assays. Rat primary microglia and the microglial cell line HAPI were cultured to establish a cell model of DM-I/R using high glucose (HG) and hypoxia/reoxygenation (H/R). The endogenous expression of MALAT1 and MyD88 was regulated by transfection with pcDNA-MALAT1, si-MALAT1 and si-MyD88, respectively. Diabetic rats with cerebral I/R injury showed more severe neuronal injury, as indicated by significantly higher neurological deficit scores and markedly increased brain edema at 24 and 72 hours after injury. Moreover, microglia were activated and produced large amounts of the inflammatory cytokines TNF-α, IL-1β and IL-6 in the peri-infarct cortical tissues during cerebral I/R injury associated with DM. The expression of MALAT1 and of MyD88, IRAK1 and TRAF6 protein was significantly up-regulated by DM-I/R in vitro and in vivo. Furthermore, the HG-H/R-induced MALAT1 promoted the inflammatory response in microglia via MyD88/IRAK1/TRAF6 signaling. Our results suggested that MALAT1 mediated the exacerbation of cerebral I/R injury induced by DM through triggering the inflammatory response in microglia via MyD88 signaling.

Although it has been reported that MALAT1 is closely related to DM-induced complications, the involvement of MALAT1 in DM-associated cerebral I/R injury is not yet known. The inflammatory reaction is implicated in the occurrence of DM-associated cerebral I/R injury 10,11. The inflammatory response of the central nervous system (CNS) is mainly characterized by the activation of microglia and astrocytes 12. Excessive activation of microglia induces large amounts of neurotoxic substances and proinflammatory cytokines, further aggravating diabetic cerebral infarction injury 13,14. Thus, the regulation of microglial activation may be one of the effective interventions for ischemic brain injury. Myeloid differentiation factor-88 adaptor protein (MyD88) mediates the activation of the interleukin-1 receptor (IL-1R) signaling pathway and NF-κB to induce inflammatory cytokines such as TNF-α 15. IL-1R is expressed not only in immune cells but also in microglia 16. Moreover, MyD88 signaling promoted the inflammatory responses induced by cerebral I/R in murine models 17. In the present study, we hypothesized that lncRNA MALAT1 participates in the pathogenesis of the cerebral I/R injury induced by DM, and that the mechanism may be related to the MyD88-mediated inflammatory response in microglia.

Methods

Animals. Forty-eight healthy Sprague-Dawley (SD) rats (male, 210-230 g, 6-8 weeks old) were purchased from Shanghai Laboratory Animal Center (SLAC; Shanghai, China). The rats were housed in individual cages at a standard laboratory temperature of 23-25 °C, with free access to food and water, under a 12-h light/dark cycle. All rats were acclimatized for a week before treatment.
All animal experiments were approved by the experimental animal ethics committee of the First Affiliated Hospital of Zhejiang University. We confirm that all methods were performed in accordance with the relevant guidelines and regulations.

Establishment of animal models. The rats were randomly divided into four groups: Control-sham (n = 6), Control-I/R (n = 6), DM-sham (n = 6) and DM-I/R (n = 6). To establish the DM model, SD rats were injected intraperitoneally with streptozotocin (STZ; Sigma-Aldrich, MO, USA) dissolved in 0.1 mol/L citrate buffer at a dose of 60 mg/kg. After 72 h, a serum glucose level above 16.7 mmol/L indicated that the DM model had been established successfully. The rats in the Control-sham and Control-I/R groups received an injection of 0.1 mol/L citrate buffer. The rats of the DM-I/R and Control-I/R groups were anesthetized with chloral hydrate (350 mg/kg, i.p.), and the I/R model was then established using the middle cerebral artery occlusion (MCAO) method 18. After anesthesia, the left common carotid artery (CCA), internal carotid artery (ICA) and external carotid artery (ECA) were exposed. The distal end of the ECA was then ligated and its branch vessels were blocked. A monofilament nylon suture, 18-20 mm in length and 0.24-0.26 mm in diameter, was inserted into the ICA via the ECA until slight resistance was felt, while the CCA and ECA were blocked with clips. The suture was left in place for 2 h and then removed to allow reperfusion. The rats of the other two groups underwent the same procedure, but without insertion of the suture. At 24 h or 72 h after reperfusion, the neurological deficit score of each rat was evaluated according to the 5-point Longa scale 19: 0, normal walking without any neurological symptoms; 1, impaired extension of the contralateral forelimb; 2, circling toward the contralateral side; 3, falling toward the contralateral side; 4, inability to walk and unconsciousness (the most severe neurological deficit). The brain tissues were then isolated from the sacrificed rats for the subsequent experiments. In addition, the brain tissue was cut into consecutive 2-mm slices after being frozen at −20 °C for 15 min in a cryostat.

Volume of encephaledema. A 2% solution of 2,3,5-triphenyl-2H-tetrazolium chloride (TTC) was used to stain the brain slices at 37 °C for 30 min in a dark chamber, after which 4% paraformaldehyde was used to fix the slices for 24 h. AUTOCAD2000 (Autodesk) was used to analyze the images of the brain slices. The volumes of the contralateral and ipsilateral hemispheres of the ischemic brain were denoted V1 and V2, respectively, and the volume of cerebral edema was computed as V1 minus V2 (mm³).

Enzyme-linked immunosorbent assay (ELISA). The concentrations of IL-1β, IL-6 and TNF-α in the brain tissue were measured using specific ELISA kits following the manufacturer's instructions (ShengGong Biological Technology, Shanghai, China). For the detection of IL-1β, IL-6 and TNF-α in the ischemic brain tissue, the tissue was homogenized on ice and the supernatant collected by centrifugation at 2,500 × g for 20 min. The amounts of IL-1β, IL-6 and TNF-α were detected using the ELISA kits with an ELISA reader (Bio-Rad Laboratories, Richmond, CA) at 450 nm. Each experiment was repeated three times.
Measurement of MALAT1, Emr1, CD68, IL-1β, IL-6 and TNF-α. Trizol reagent (Invitrogen) was used to extract total RNA. Reverse transcription was then performed with the PrimeScript RT Enzyme Mix kit (Takara) to obtain cDNA. The synthesized cDNA was used with the Fast SYBR Green PCR kit (Applied Biosystems) for qRT-PCR on an ABI PRISM 7300 RT-PCR system (Applied Biosystems). GAPDH served as the endogenous control gene for normalization of the gene expression levels.

Western blot assay. The Western blot assay was performed as described in our previous study 20. Briefly, the protein extracts were heated with sample buffer for 10 min, separated on a 10% polyacrylamide gel, and then transferred onto a PVDF membrane. The membranes were blocked with 5% BSA and then incubated with primary antibodies against MyD88 (1:1000, Cell Signaling Technology), IRAK1 (1:2000, Cell Signaling Technology) and TRAF6 (1:1000, Santa Cruz Biotechnology). In addition, antibodies against IL-1β (1:2000, Santa Cruz Biotechnology), IL-6 (1:1000, Cell Signaling Technology) and TNF-α (1:2000, Santa Cruz Biotechnology) were used. The membranes were then rinsed in TBST and incubated with the corresponding secondary antibodies at room temperature for 1 h. β-actin served as a loading control. The LI-COR Odyssey System was used to visualize the protein bands on the membranes.

Cell culture and treatment. Primary microglia were isolated from the CNS tissue of neonatal rats as described in our previous study 20. The rat immortalized microglial cell line (HAPI cells) was re-suspended in Dulbecco's modified Eagle's medium (DMEM) containing 10% fetal bovine serum (FBS) at 37 °C in 5% CO₂. Medium with 30 mM glucose and 1% FBS was used to simulate a diabetic environment for the cells, and medium containing normal glucose (5.5 mM) was used as a negative control. To establish the hypoxia/reoxygenation (H/R) injury cell model, HAPI cells in glucose-free culture medium were cultured at 37 °C with 5% CO₂, 1% O₂ and 94% N₂ for 4 h, and then cultured at 37 °C with 5% CO₂, 21% O₂ and 74% N₂ for 2 h in full culture medium. In the subsequent experiments, HAPI cells were transfected with si-MALAT1 using an siRNA transfection reagent and then treated with high glucose and H/R. For the transfection experiments, HAPI cells (2 × 10⁵ cells/well) were cultured in 24-well plates overnight and then transfected with siRNA-MALAT1 or siRNA-MyD88 (1 μg) using the siRNA transfection reagent (4 μL).

Chromatin immunoprecipitation (ChIP) assay. Approximately 1 × 10⁶ HAPI cells were cross-linked with formaldehyde (Sigma-Aldrich) at a final concentration of 1% at 37 °C for 10 min. The ChIP assay was performed with a commercial kit (Beyotime, China) in accordance with the manufacturer's protocol. Antibodies against acetyl-histone H3/H4 and normal control IgG were purchased from Santa Cruz Biotechnology. ChIP-purified DNA was amplified by standard PCR using primers specific for the MyD88 promoter and the 2 × PCR Master Mix (Promega). After the PCR reaction, the PCR products were separated on 1.2% agarose gels and visualized with gel imaging system software (Tanon, Shanghai). Specific enrichment was calculated from the cycle threshold (Ct) values as 2^(Ct, control ChIP − Ct, control Input) / 2^(Ct, AcH3/H4 ChIP − Ct, AcH3/H4 Input).
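The two Ct-based calculations described above can be written out explicitly. The sketch below implements the ChIP enrichment ratio exactly as defined in the text, together with relative expression by the widely used 2^−ΔΔCt method; the latter is our assumption, since the text specifies GAPDH normalization but not the exact formula, and the function names and example Ct values are ours, for illustration only.

def ddct_expression(ct_gene, ct_gapdh, ct_gene_ctrl, ct_gapdh_ctrl):
    """Relative expression by the 2^-ddCt method, normalized to GAPDH
    and to the control group (assumed convention)."""
    ddct = (ct_gene - ct_gapdh) - (ct_gene_ctrl - ct_gapdh_ctrl)
    return 2.0 ** (-ddct)

def chip_enrichment(ct_igg_chip, ct_igg_input, ct_ab_chip, ct_ab_input):
    """Specific ChIP enrichment as defined in the text:
    2^(Ct_IgG_ChIP - Ct_IgG_Input) / 2^(Ct_AcH3/H4_ChIP - Ct_AcH3/H4_Input)."""
    return 2.0 ** (ct_igg_chip - ct_igg_input) / 2.0 ** (ct_ab_chip - ct_ab_input)

print(ddct_expression(24.1, 18.0, 26.3, 18.1))   # about 4.3-fold up-regulation
print(chip_enrichment(30.0, 25.0, 27.5, 25.2))   # about 6.5-fold enrichment over IgG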
Cell proliferation and apoptosis assays. To assess the survival of HAPI cells, an MTT assay was performed. HAPI cells (1 × 10⁴ cells/well) were cultured in 96-well plates under the corresponding conditions, and then MTT (5 mg/mL, 20 μL) was added to each well and incubated for 4 h at 37 °C. The supernatant was then aspirated, and dimethyl sulfoxide (DMSO, 150 μL) was added with agitation for 10 min to dissolve the crystals at room temperature. The absorbance values were measured using an ELISA reader (Bio-Rad Laboratories, Richmond, CA) at 490 nm. Cells to be assayed for apoptosis were collected by centrifugation at 1,500 × g for 3 min, washed twice with ice-cold phosphate-buffered saline (PBS), and then stained with Annexin V-FITC for flow cytometric analysis.

Statistical analysis. Comparisons between two groups were made using Student's t test, and one-way ANOVA with Bonferroni post-hoc test was used for multiple comparisons. P < 0.01 was considered statistically significant.

Results

DM exacerbated cerebral I/R injury. The neurological deficits in the rats with cerebral I/R injury and/or DM were assessed using the 5-point Longa scale. The rats (n = 6) with cerebral I/R injury and DM (DM-I/R) had significantly higher neurological deficit scores than the control (n = 6) and cerebral I/R injury rats (n = 6) at both 24 and 72 hours after injury (Fig. 1A). Moreover, the cerebral edema volume in DM-I/R rats (n = 6) was significantly larger than in the control (n = 6) and cerebral I/R injury rats (n = 6) (Fig. 1B). The HG-induced inflammatory response has been confirmed to be implicated in the pathogenesis of various complications of DM, such as diabetic retinopathy (DR) and diabetic nephropathy (DN). In the present study, the expression of the pro-inflammatory cytokines TNF-α, IL-1β and IL-6 was dramatically increased in the peri-infarct cortical tissues of DM-I/R rats (n = 6) compared with the control (n = 6) and cerebral I/R injury rats (n = 6) (Fig. 1C-F). These findings suggested that DM exacerbated cerebral I/R injury accompanied by an inflammatory response.

The expression profile of MALAT1, MyD88, IRAK1 and TRAF6 in DM-I/R. Over-activated microglial cells play critical roles in the pro-inflammatory response of the CNS via increased release of pro-inflammatory cytokines. The mRNA expression of CD68 and Emr1, markers of microglial cells, was dramatically increased in the peri-infarct cortical tissues of cerebral I/R injury rats (n = 6), DM rats (n = 6) and DM-I/R rats (n = 6) compared with the control (n = 6) (Fig. 2A,B). In addition, the relative expression of CD68 and Emr1 in the peri-infarct cortical tissues of DM-I/R rats was significantly higher than that of cerebral I/R injury rats, suggesting that DM enhanced the activation of microglial cells (Fig. 2A,B). Moreover, the expression of MALAT1 was significantly increased in the peri-infarct cortical tissues of cerebral I/R injury rats (n = 6), DM rats (n = 6) and DM-I/R rats (n = 6) compared with the control (n = 6), with MALAT1 increasing most dramatically in DM-I/R rats (Fig. 2C). The protein expression of MyD88, IRAK1 and TRAF6 was also significantly up-regulated in the peri-infarct cortical tissues of cerebral I/R injury rats (n = 6), DM rats (n = 6) and DM-I/R rats (n = 6) (Fig. 2D-G). This observation was further confirmed by the results of the in vitro models of DM-I/R. Rat primary microglia and the microglial cell line HAPI were cultured to establish the cell model of DM-I/R by treatment with HG and H/R.
Compared with the LG control, the expression of MALAT1 in microglia was significantly induced by H/R and/or HG, as were the MyD88, IRAK1 and TRAF6 proteins (Fig. 3A-E). Compared with LG-H/R, H/R combined with HG induced higher expression of MALAT1 and of MyD88, IRAK1 and TRAF6 protein in both primary microglia and HAPI cells (Fig. 3A-E).

MALAT1 promoted the HG-H/R-induced inflammatory response in microglia. To determine the functional role of MALAT1 in the HG-H/R-induced inflammatory response of microglia, MALAT1 was inhibited in HAPI cells by transfection of siRNA targeting MALAT1 (si-MALAT1). The results showed that the relative viability of HAPI cells was significantly reduced by pretreatment with H/R, HG or HG-H/R, accompanied by a substantial increase in apoptosis (Fig. 4A,B). We found that MALAT1 silencing enhanced cell viability and inhibited apoptosis in HAPI cells treated with HG-H/R (Fig. 4A,B). HG and H/R, alone or together, evidently induced the expression of the proinflammatory cytokines TNF-α, IL-1β and IL-6 at both the mRNA and protein levels in HAPI cells (Fig. 4C-G). As expected, MALAT1 silencing markedly attenuated the effects of HG-H/R on HAPI cells, as seen by a decrease in proinflammatory cytokine expression (Fig. 4C-G). These results suggested that MALAT1 played a vital role in the HG-H/R-induced inflammatory response of microglia.

MALAT1 positively regulated the expression of MyD88, IRAK1 and TRAF6 protein in microglia. To further study the mechanism of MALAT1, we explored the functional relationship between MALAT1 and MyD88. Our results showed that the level of H3 histone acetylation at the MyD88 promoter was down-regulated by si-MALAT1 but increased by pcDNA-MALAT1 (Fig. 5A,D). However, MALAT1 had no significant effect on H4 histone acetylation at the MyD88 promoter (Fig. 5A,D). Furthermore, the protein expression of MyD88, IRAK1 and TRAF6 was significantly reduced in HAPI cells with MALAT1 silencing (Fig. 5B,C), while MALAT1 overexpression obviously enhanced the protein expression of MyD88, IRAK1 and TRAF6 (Fig. 5E,F). These findings indicated that MALAT1 positively regulated the expression of MyD88 by increasing the level of H3 histone acetylation at the MyD88 promoter, thereby affecting the expression of the MyD88-regulated proteins IRAK1 and TRAF6.

MALAT1 induced the inflammatory response through MyD88 in HG-H/R treated microglia. MALAT1 overexpression induced growth inhibition of HAPI cells by decreasing cell viability and promoting apoptosis, but MyD88 silencing significantly attenuated the effects of MALAT1 on the growth of HAPI cells (Fig. 6A,B). MALAT1 overexpression dramatically accelerated the expression of the proinflammatory cytokines TNF-α, IL-1β and IL-6 at both the mRNA and protein levels in HG-H/R treated HAPI cells, while this up-regulation was considerably attenuated by MyD88 knockdown (Fig. 6C-G). Collectively, our data suggested that MyD88 is crucial for the MALAT1-induced inflammatory response in microglia.

Discussion

It has been shown that diabetes-associated hyperglycemia increases ischemic infarct volumes and is closely correlated with a poor prognosis of stroke 21. Excessive inflammatory responses facilitate cranial nerve injuries following cerebral ischemia/reperfusion (I/R) 22. In the current study, the cerebral I/R injury model in diabetic rats showed more severe neuronal injury at both 24 and 72 hours after I/R.
Moreover, the levels of the proinflammatory cytokines TNF-α, IL-1β and IL-6 were dramatically increased in the peri-infarct cortical tissues of diabetic rats with cerebral I/R injury. Consistent with previous studies, we showed that DM exacerbated cerebral I/R injury, accompanied by an increased inflammatory response of microglia in vivo and in vitro.

Microglial activation and the release of inflammatory factors are central events in the inflammatory response 23. Microglia are the main inflammatory cells participating in the pathological environment of cerebral I/R injury; they can secrete large amounts of inflammatory cytokines, resulting in a serious inflammatory reaction 14. Hyperglycemia has been found to aggravate neuronal degeneration, apoptosis and inflammation in ischemic regions 24. It has also been shown that hyperglycemia promotes microglial activation, thereby further worsening ischemic brain injury 25. The present study identified activated microglia and large amounts of inflammatory cytokines in the peri-infarct cortical tissues during cerebral I/R injury associated with DM. We also observed that the expression of MALAT1 was significantly increased in DM-I/R models compared with I/R models, suggesting that MALAT1 plays a critical role in the pathogenesis of DM-associated cerebral I/R injury. MALAT1 is closely related to the inflammatory reaction in a variety of pathological and physiological circumstances, including DR and diabetes-induced vascular complications. Our results confirmed that MALAT1 promoted the HG-H/R-induced inflammatory response in microglia, suggesting that MALAT1 might play an important role in the progression of DM-associated cerebral I/R injury.

After confirming the important role of MALAT1 in the inflammatory response during cerebral I/R injury associated with DM, we also identified the downstream signaling pathway of MALAT1 in this mechanism. The protein expression of MyD88, IRAK1 and TRAF6 was found to be up-regulated in the peri-infarct cortical tissue of the I/R and DM-I/R models, and this observation was further confirmed by the findings of the in vitro model of DM-I/R. MyD88-dependent signaling is an essential pathway for provoking the systemic inflammatory reaction and NF-κB activation in the process of cerebral I/R injury 26. The MyD88 adaptor proteins TRAF6 and IRAK1 can assemble into a complex that induces the activation of the NF-κB cascade 27. In this study, we identified that MALAT1 up-regulated the expression of MyD88 through increasing the H3 histone acetylation of the MyD88 promoter, thereby increasing IRAK1 and TRAF6 protein. Additional studies are necessary to delineate how MALAT1 affects the H3 histone acetylation of the MyD88 promoter.

In the present study, cerebral I/R injury was aggravated by DM in the rat models. We further demonstrated that MALAT1 triggered the inflammatory response in microglia via MyD88 signaling, which mediated the DM-induced exacerbation of cerebral I/R injury (Fig. 7). MALAT1 is apparently an important regulatory factor in DM-associated cerebral I/R injury, and it may also be an effective therapeutic target for preventing and combating DM-associated cerebral I/R injury.
A Bayesian approach to disease clustering using restricted Chinese restaurant processes

Identifying disease clusters (areas with an unusually high incidence of a particular disease) is a common problem in epidemiology and public health. We describe a Bayesian nonparametric mixture model for disease clustering that constrains clusters to be made of adjacent areal units. This is achieved by modifying the exchangeable partition probability function associated with the Ewens sampling distribution. We call the resulting prior the Restricted Chinese Restaurant Process, as the associated full conditional distributions resemble those associated with the standard Chinese Restaurant Process. The model is illustrated using synthetic data sets and in an application to oral cancer mortality in Germany.

Introduction

A disease cluster is a higher-than-expected incidence of a particular disease or disorder occurring in close proximity in terms of both time and geography. Although communicable diseases (those that can be spread from one person to another, such as the flu or HIV) often occur in clusters, clusters of noncommunicable disease are rare and their presence might indicate the presence of a harmful environmental factor or other hazard. Therefore, identification of cancer clusters is a key task in epidemiology and public health.

A strand of the statistics literature on disease clustering focuses on methods for confirmatory cluster analysis. Sometimes called focused tests, these methods are concerned with determining whether the rate of disease in a pre-specified area (which usually contains some putative health hazard) is higher than expected (e.g., see Stone, 1988, Besag & Newell, 1991, Tango, 1995, Morton-Jones et al., 1999). In contrast, the focus of this paper is on methods for de novo identification of disease clusters in datasets in which the presence of such clusters is not known. Methods based on scan statistics (e.g., see Weinstock, 1981, Kulldorff, 1997, Tango & Takahashi, 2005) are well-known examples of this type of approach. Implementations of classical approaches to disease cluster analysis are widely available in a number of platforms. One example is the R package DCluster (Gómez-Rubio et al., 2005).

Methods for disease clustering can also be classified according to whether they are designed to work with point-referenced or with spatially aggregated (areal) data. In the case of point-referenced data, it is common to distinguish between distance-based methods (Whittemore et al., 1987, Besag & Newell, 1991, and Tango, 1995), which derive tests based on the distribution of the time/distance between locations at which events occurred, and quadrat-based methods (e.g., Openshaw et al., 1987, Kulldorff & Nagarwalla, 1995), which study the variability of case counts in certain subsets of the region of interest (called quadrats). In the case of areal data, frequency tests similar to those used in quadrat-based methods are frequently used (e.g., see Whittinghill, 1966a and Whittinghill, 1966b). Bayesian methods for disease clustering in spatially aggregated data have been proposed by Knorr-Held & Raßer (2000), Gangnon & Clayton (2000), Green & Richardson (2002), Gómez-Rubio et al. (2018), Wakefield & Kim (2013), and Anderson et al. (2014). Other recent contributions to the field include the work of Moraga & Montes (2011), Charras-Garrido et al. (2012), Heinzl & Tutz (2014), and Wang & Rodríguez (2014). Kulldorff et al. (2003), Waller et al. (2006), and Goujon-Bellec et al.
(2011) present detailed comparisons of various methods for disease clustering.

It is worth noting that the main goals of disease clustering methods are similar to, but distinct from, those of disease mapping. Typically, disease mapping applications deal with the estimation of smooth covariate-adjusted risk measures, but do not aim at identifying discontinuities in the risk function. On the other hand, the whole point of methods for de novo identification of cancer clusters is to pinpoint such discontinuities. Of course, these two objectives are not necessarily opposed (e.g., see Knorr-Held & Raßer, 2000, Green & Richardson, 2002, and Anderson et al., 2014), but most techniques designed for disease mapping are not directly applicable in the context of disease clustering. The literature on disease clustering is also related to, but distinct from, the literature on boundary analysis in areal data (sometimes referred to as "areal wombling"; e.g., see Lu & Carlin, 2005; Lu et al., 2007; Fitzpatrick et al., 2010; Li et al., 2015b; Guhaniyogi, 2017).

In this paper we develop a Bayesian approach for de novo identification of disease clusters in areal data. Our approach uses a restricted version of the Exchangeable Partition Probability Function (EPPF) associated with a species sampling model (SSM) (Pitman, 1995, 1996) as a prior on the partition of areal units. To simplify our exposition, we focus here on the SSM associated with the Dirichlet process (Ferguson, 1973; Blackwell & MacQueen, 1973; Antoniak, 1974; Lee et al., 2013; Rodríguez & Quintana, 2015), which is sometimes referred to as the Chinese restaurant process (CRP). However, the formulation is more general, and our key results (particularly those around the form of the full conditional distributions associated with the prior) extend to other SSMs such as the generalized CRP induced by the two-parameter Poisson-Dirichlet process (Pitman & Yor, 1997). The restricted prior we introduce in this paper is specifically designed to enforce clusters made of adjacent spatial units (which we call admissible).

The approach we develop in this paper is related to those developed in Fuentes-García et al. (2010) and Martínez et al. (2014) in the context of time series data. Fuentes-García et al. (2010) consider a special case of our model that assumes ordered observations and uses reversible jump Markov chain Monte Carlo algorithms for inference. More recently, Martínez et al. (2014) proposed a change-point model constructed by restricting a generalized CRP (Pitman, 1995; Gnedin & Pitman, 2006), but their proposal differs from ours in the way the probability associated with inadmissible partitions is redistributed. Our model can be seen as generalizing the ideas in Fuentes-García et al. (2010) and Martínez et al. (2014) to situations in which the EPPF is restricted to partitions driven by general neighborhood graphs. We also show that, for our construction, the full conditional distributions associated with the restricted prior take a simple and appealing form, making the use of reversible jump algorithms unnecessary.

The model we introduce here is also related to the literature on spatially dependent mixture models. Fernández & Green (2002) consider the use of a Potts model as the joint prior on the cluster indicators of a finite mixture model, leaving open the question of how the number of clusters is to be selected. Loschi & Cruz (2005), Müller et al. (2011), and Page et al.
(2016) consider extensions of Hartigan's product partition model (PPM) (Hartigan, 1990) in which so-called cohesion functions account for temporal and/or spatial dependence. These models cannot be easily generalized to SSMs beyond the CRP, where the prior on the partition cannot be written as a product of cohesion functions. Dahl (2008), Blei & Frazier (2011), Ghosh et al. (2011), and Dahl et al. (2017) consider Chinese restaurant processes in which co-clustering probabilities are functions of the distance between observations. In a similar spirit, Li (2015), Li et al. (2013), Li et al. (2014), Li et al. (2015a), Li et al. (2016a), Li et al. (2016b), and Li et al. (2016c) generalize the approach of Blei & Frazier (2011) so that the clustering probabilities depend on side information. A common feature of all these approaches is their focus on soft constraints that encourage nearby areas to cluster together but still allow clusters to be disconnected. In contrast, our focus is on ensuring that clusters are fully connected. This type of constraint is the most natural one in the context of our application to disease clustering, and cannot be easily enforced with any of the models discussed above. For example, Page et al. (2016) note that computational challenges arise when a restricted cohesion function in a PPM is considered in order to assign zero probability to "non-desirable" cluster configurations. Our proposal overcomes these challenges.

The remainder of the paper is organized as follows: Section 2 presents our model and discusses its properties. Section 3 describes our computational approach. Sections 4 and 5 present the analysis of two simulated data sets and an application to oral cancer mortality in Germany, respectively. Finally, Section 6 discusses the limitations of our model as well as future research directions.

The model

Suppose that areal data in the form of pairs (y_i, h_i) are available, where y_i records the observed number of cases in region i, and h_i represents the expected number of cases in region i, obtained by internal or external standardization, for i = 1, . . . , n. As is common in the literature, we assume that the counts in region i are independently Poisson distributed and model their rates as a function of h_i and the log-scale relative risk, η_i (log-RR for short). More specifically,

y_i | h_i, η_i ~ Pois(h_i e^{η_i}),   (1)

where η_i = x_i^t θ_i, x_i^t is the transpose of a p-dimensional vector of covariates associated with region i, θ_i ∈ R^p is the random effect associated with region i, and Pois(λ) denotes the Poisson distribution with rate λ > 0. When no covariates are available, i.e., x_{i,j} = 1 for all i, j, the log-RR reduces to η_i = θ_i, with θ_i ∈ R.

Our approach assumes that the geographic information associated with the data set is encoded in a known n × n binary adjacency matrix, W = [w_{i,i′}]. This adjacency matrix can be interpreted as defining an unweighted, undirected graph G whose nodes correspond to the different geographical regions under study. In our illustrations we focus on first-order neighborhood matrices in which w_{i,i′} is equal to 1 if regions i and i′ share a common boundary, and equal to 0 otherwise. However, our methodology applies more generally to higher-order neighborhoods, or to other ways of defining the (binary) adjacency relationship, e.g., by means of distances between centroids or distances based on the length of the shared boundary.
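As a concrete illustration of this encoding, the following sketch (function and variable names are ours) builds W from a list of region pairs that share a boundary; the line and star graphs constructed here are the ones used in the prior comparisons below.

import numpy as np

def adjacency_matrix(n, shared_boundaries):
    """Binary first-order adjacency matrix W from pairs of neighboring regions."""
    W = np.zeros((n, n), dtype=int)
    for i, j in shared_boundaries:
        W[i, j] = W[j, i] = 1      # unweighted, undirected graph G
    return W

# Five regions in a row (a "linear graph")
W_line = adjacency_matrix(5, [(0, 1), (1, 2), (2, 3), (3, 4)])
# A "star graph": region 0 touches all the others
W_star = adjacency_matrix(5, [(0, k) for k in range(1, 5)])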
Recall that our interest is to identify clusters of spatially connected regions that share the same relative log-risk. Therefore, regions i and j can belong to the same cluster k only if G contains a path connecting them for which all the nodes in the path also belong to cluster k. In what follows we propose a spatially restricted prior distribution for the cluster membership random variables and illustrate how specific adjacency matrices impact the probability mass function of the number of clusters.

To construct our prior on the random effects we borrow ideas from the model-based clustering literature. More specifically, we augment the model in (1) with a vector of cluster membership indicators c = (c_1, . . . , c_n), where c_i = k indicates that region i belongs to cluster k, so that c induces a partition A_1, . . . , A_{K(c)} of the regions, with K(c) denoting the number of distinct labels in c (see Table 1).

[Table 1: All possible partitions and cluster configurations for a sample of size n = 3.]

If no spatial information were available (or, alternatively, if W corresponds to the complete graph), it would be convenient to assign c a Chinese restaurant process prior (CRP) (Pitman, 1995),

p(c | α) = [Γ(α) / Γ(α + n)] α^{K(c)} ∏_{k=1}^{K(c)} Γ(n_k(c)),   (2)

where n_k(c) = |{i : c_i = k}| is the number of labels having value k or, equivalently, the number of observations in cluster k, and Γ(z) = ∫_0^∞ t^{z−1} e^{−t} dt denotes the Gamma function. The corresponding full conditionals are given by

P(c_i = k | c_{−i}, α) ∝ n_k(c_{−i}) for k = 1, . . . , K(c_{−i}), and ∝ α for k = K(c_{−i}) + 1,   (3)

where c_{−i} = (c_1, . . . , c_{i−1}, c_{i+1}, . . . , c_n). Therefore, each c_i is either an already existing label, with probability proportional to n_k(c_{−i}), or a new label, with probability proportional to α. Clearly, the concentration parameter α controls the number of clusters, with larger values of α favoring larger numbers of clusters a priori.

In order to define a spatially restricted prior distribution that enforces connected clusters, we propose to modify (2) by giving zero probability to configurations that involve clusters with non-connected components. More specifically, let G_{A_k} be the subgraph of G involving only the nodes that belong to the set A_k. We call a partition A_1, . . . , A_K admissible if G_{A_k} is a connected subgraph (but not necessarily complete) for every k = 1, . . . , K. If we define the function Q(c, W) as being equal to 1 whenever c is an admissible cluster configuration under W, and equal to 0 otherwise, our prior takes the form

p(c | α, W) = Q(c, W) α^{K(c)} ∏_{k=1}^{K(c)} Γ(n_k(c)) / C(α, W),   (4)

where the normalizing constant C(α, W) is given by

C(α, W) = Σ_c Q(c, W) α^{K(c)} ∏_{k=1}^{K(c)} Γ(n_k(c)).   (5)

We call this prior the Restricted Chinese Restaurant Process with parameters α and W, denoted c | α, W ~ RCRP(α, W). In general, there is no closed-form expression for C(α, W); two exceptions are provided in Appendix A. However, we can still make some general statements. For example, we note that C(α, W) is a polynomial function of degree n in α, i.e., we can write

C(α, W) = Σ_{l=1}^{n} f_l(W) α^l,   (6)

where the coefficient f_l(W) is a weighted sum over the admissible partitions involving l clusters,

f_l(W) = Σ_{c : Q(c, W) = 1, K(c) = l} ∏_{k=1}^{l} Γ(n_k(c)).

In particular, f_n(W) = 1 for any W, f_1(W) = Γ(n) for any W that implies a connected graph G (and f_1(W) = 0 otherwise), and f_l(W) = |s_{n,l}|, the unsigned Stirling number of the first kind, when W implies the complete graph.

Figure 1 presents the probability associated with the number of clusters K(c) for the unrestricted CRP, as well as for the neighborhood structures associated with star and linear graphs (see Appendix A), when n = 6. Recall that the prior on partitions implied by the model of Fuentes-García et al. (2010) corresponds to the linear-graph setting, while the model in Martínez et al. (2014) is constructed to ensure that the prior on K(c) matches the one for the unrestricted CRP. It is clear from the graph that the probability of the partitions depends on the underlying adjacency matrix; the brute-force sketch below illustrates how these prior probabilities can be computed for small graphs.
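For small n, the quantities in (4)-(6) can be computed exactly by enumeration. This sketch (function names are ours, and it assumes a W built as in the earlier sketch) lists all set partitions, keeps the admissible ones, and returns the implied prior on K(c); applied to the line and star graphs with n = 6 it should reproduce the qualitative patterns in Figure 1.

from math import factorial
import numpy as np

def partitions(elems):
    """Yield all set partitions of a list (Bell-number enumeration)."""
    if not elems:
        yield []
        return
    head, rest = elems[0], elems[1:]
    for part in partitions(rest):
        for idx in range(len(part)):
            yield part[:idx] + [part[idx] + [head]] + part[idx + 1:]
        yield [[head]] + part

def connected(block, W):
    """Check that the subgraph of G induced by the regions in `block` is connected."""
    block = list(block)
    seen, stack = {block[0]}, [block[0]]
    while stack:
        i = stack.pop()
        for j in block:
            if j not in seen and W[i][j]:
                seen.add(j)
                stack.append(j)
    return len(seen) == len(block)

def rcrp_prior_on_K(W, alpha):
    """Exact P(K(c) = k) under RCRP(alpha, W) via brute force (small n only)."""
    n = len(W)
    weights = {}
    for part in partitions(list(range(n))):
        if all(connected(b, W) for b in part):           # Q(c, W) = 1
            w = alpha ** len(part) * np.prod([factorial(len(b) - 1) for b in part])
            weights[len(part)] = weights.get(len(part), 0.0) + w
    C = sum(weights.values())                             # C(alpha, W), eq. (5)
    return {k: v / C for k, v in sorted(weights.items())}

With a complete-graph W, every partition is admissible and the function returns the prior on K implied by the unrestricted CRP, which is a useful sanity check.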
For both restricted cases, the probability of K = 1 and K = n is higher than in the non-restricted case. For smaller values of α (α = 0.1 and α = 1), the linear graph seems to favor a smaller number of clusters than the star graph. This pattern is reversed for the larger values of α (α = 3 and α = 10).

While an explicit expression for C(α, W) is generally not available, the full conditional distributions associated with (4) take a particularly simple, appealing, and computationally convenient form (see Appendix B):

P(c_i = k | c_{−i}, α, W) ∝ n_k(c_{−i}) Q(c, W) for k = 1, . . . , K(c_{−i}), and ∝ α Q(c, W) for k = K(c_{−i}) + 1,   (7)

where Q(c, W) is evaluated at the configuration obtained by setting c_i = k. Hence, when the assignment of a region to a particular cluster leads to an admissible configuration, (7) and (3) agree. On the other hand, for assignments that lead to inadmissible configurations, the full conditional is zeroed out. As before, α controls the number of clusters. The CRP prior has sometimes been criticized in the context of clustering applications because of its tendency to create clusters of unbalanced size. In disease clustering applications, where disease clusters can usually be expected to be rare and small a priori, this behavior (which carries over to the restricted model) is an appealing feature of the model.

Having defined the spatially restricted prior distribution for the labeling random variables, the rest of the model is specified by θ̃_k | μ, σ² iid~ N(μ, σ²), where μ ∈ R and σ² ∈ R⁺ are the mean and variance of the normal distribution. Additionally, we consider hyperprior distributions for α, μ, and σ². Our model can finally be written as

y_i | h_i, c_i, θ̃ ~ Pois(h_i e^{θ̃_{c_i}}),   c | α, W ~ RCRP(α, W),   θ̃_k | μ, σ² iid~ N(μ, σ²),   (8)

with one more level for the hyperpriors,

μ ~ N(κ, φ²),   σ² ~ IG(a, b),   α ~ π,   (9)

where the parameters κ, φ², a, and b are fixed, IG(a, b) denotes the inverse gamma distribution with shape and scale parameters a > 0 and b > 0, respectively, and π is a distribution on R⁺.

Computational aspects

We use Markov chain Monte Carlo (MCMC) algorithms (Smith & Roberts, 1993; Robert & Casella, 2005) to generate samples from the posterior distributions associated with the proposed model. This section describes the full conditional distributions involved. The full conditional distribution for the components of the indicator vector c takes the relatively simple form:

P(c_i = k | c_{−i}, y_i, θ̃, μ, σ², α, W) ∝ n_k(c_{−i}) Q(c, W) f(y_i | θ̃_k) for k = 1, . . . , K(c_{−i}), and ∝ α Q(c, W) f(y_i | θ̃_{K(c_{−i})+1}) for k = K(c_{−i}) + 1,   (10)

where θ̃_{K(c_{−i})+1} ~ N(μ, σ²) and f(y_i | θ̃_k) denotes the probability mass function of a Poisson distribution with rate parameter h_i e^{θ̃_k}. This resembles algorithm 8 from Neal (2000). Note that there is an implicit and very standard relabeling of the vector c every time a cluster becomes empty (i.e., n_k = 0 for some k), as in the "no gaps" algorithm of MacEachern & Müller (1998).

The Markov chain that results from cycling through these full conditional distributions is irreducible: we can move between any two admissible configurations by first breaking each cluster (one region at a time, starting with the "periphery" to ensure that admissibility is preserved), and then reassembling the new clusters. While the need to repeatedly check the admissibility of configurations might suggest that the computational cost of implementing this algorithm is high, that is not the case. Using the fact that the current configuration must be admissible, a careful implementation of the algorithm only requires that, for each i, we check that the cluster currently containing observation i remains fully connected if that observation is removed (which, in the worst case, can be done in quadratic time in the size of that cluster). Then c_i is updated with the label of any of its neighbours (which can be identified directly from W in linear time) or with a new label, according to the probabilities given by (10). A minimal sketch of this update logic follows.
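This sketch reuses the connected() helper from the earlier enumeration code (names are ours); it returns the unnormalized prior weights of (7), which the likelihood factors f(y_i | θ̃_k) from (10) would multiply in a full implementation.

def gibbs_candidates(i, c, W, alpha):
    """Unnormalized RCRP full-conditional prior weights for c_i (cf. eq. (7)).
    Assumes the current configuration c is admissible."""
    others = [j for j in range(len(c)) if j != i]
    old_cluster = [j for j in others if c[j] == c[i]]
    # If removing i would disconnect its current cluster, i cannot move:
    # every other assignment would leave an inadmissible configuration.
    if old_cluster and not connected(old_cluster, W):
        return {c[i]: len(old_cluster)}
    # Otherwise the admissible existing labels are exactly those of i's
    # graph neighbours (joining any other cluster would disconnect it from i)
    labels = {c[j] for j in others if W[i][j]}
    weights = {k: sum(c[j] == k for j in others) for k in labels}
    weights["new"] = alpha            # open a new singleton cluster
    return weights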
As with a standard mixture model, the posterior distributions for the log-RR parameters θ̃_1, . . . , θ̃_{K(c)} are conditionally independent and take the form

p(θ̃_k | c, y, μ, σ²) ∝ exp{θ̃_k y_k⁺ − h_k⁺ e^{θ̃_k}} N(θ̃_k | μ, σ²),   (11)

where y_k⁺ = Σ_{i : c_i = k} y_i and h_k⁺ = Σ_{i : c_i = k} h_i. Since these posterior distributions do not belong to any tractable family of distributions, one must resort to algorithms such as random walk Metropolis-Hastings (M-H) or Hamiltonian Monte Carlo to sample from them. However, these algorithms require the user to select a number of tuning parameters (such as the variance of the random walk in random walk M-H algorithms, or the size and number of steps in Hamiltonian Monte Carlo algorithms). Instead, we resort to slice sampling algorithms (Damien et al., 1999), which do not require the selection of any tuning parameters and therefore facilitate the use of our approach by practitioners. More specifically, we introduce unit-rate exponentially distributed auxiliary random variables u_k, leading to truncated exponential and truncated normal conditional distributions for u_k and θ̃_k, respectively,

u_k | θ̃_k ~ Exp(u_k | 1) 1(u_k > h_k⁺ e^{θ̃_k}),   θ̃_k | u_k ~ N(θ̃_k | μ + σ² y_k⁺, σ²) 1(θ̃_k < log(u_k / h_k⁺)),

where Exp(· | λ) denotes the exponential distribution with rate λ. On the other hand, the hyperparameters μ and σ² are sampled from their conjugate posterior distributions,

μ | θ̃, σ² ~ N( (κ/φ² + Σ_k θ̃_k/σ²) / (1/φ² + K(c)/σ²), (1/φ² + K(c)/σ²)^{−1} ),   σ² | θ̃, μ ~ IG( a + K(c)/2, b + Σ_k (θ̃_k − μ)²/2 ).

Finally, we discuss the process of generating samples from the full conditional distribution of α,

p(α | c, W) ∝ π(α) α^{K(c)} / C(α, W).   (12)

This step is particularly difficult because the full conditional is doubly intractable: in addition to the posterior not belonging to any known family, it involves the computation of the intractable normalizing constant C(α, W). In this paper we use the noisy exchange algorithm (NEA), proposed by Alquier et al. (2016), for sampling from (12). The NEA updates α using an M-H step, replacing the ratio of normalizing constants C(α, W)/C(α*, W) with an unbiased importance sampling estimator. Note that, as a consequence of the importance sampling identity,

C(α*, W) / C(α, W) = E_{c | α, W}[ (α*/α)^{K(c)} ],

where E_{c | α, W} denotes the expectation under c | α, W. Therefore, the ratio of normalizing constants is approximated by

C(α*, W) / C(α, W) ≈ (1/N) Σ_{j=1}^{N} (α*/α)^{K(c_j)},   (13)

where c_1, . . . , c_N are samples from (4), obtained by running a second MCMC algorithm based on the full conditionals given by (7). In order to speed up computations, we considered a discrete prior distribution for α with support on a relatively small number of point masses, and computed the ratios of normalizing constants in advance.
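The estimator (13) is just a sample average. A sketch (names are ours): given the cluster counts K(c_1), . . . , K(c_N) recorded from the auxiliary RCRP chain run at the current value α, the ratio for a proposed α* is

import numpy as np

def ratio_of_constants(K_samples, alpha_cur, alpha_prop):
    """Importance-sampling estimate of C(alpha_prop, W) / C(alpha_cur, W),
    using K(c_j) from samples c_j ~ RCRP(alpha_cur, W); cf. eq. (13)."""
    K = np.asarray(K_samples)
    return float(np.mean((alpha_prop / alpha_cur) ** K))

With a discrete prior on α, these ratios can be precomputed for every pair of support points, which is exactly the speed-up described above.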
Simulated data

We conduct two simulation studies to ascertain the performance of our model. Both scenarios assume that the spatial association is given by the first-order neighborhood structure of the counties in the U.S. state of Ohio, and that h_i = 100 for every i = 1, . . . , 88. Scenario I involves eight snake-shaped clusters with log-RR θ_i ∈ {−2, 0, 2} (see Figure 2, top row). Note that in this case some clusters share the same disease risk. Scenario II is formed by four round-shaped connected clusters with log-RR θ_i ∈ {−1.5, −0.5, 0.5, 1.5}, so each cluster has a different disease risk (see Figure 2, bottom row).

A comparison model

We compare the performance of our model against the boundary distance (BD) model proposed by Knorr-Held & Raßer (2000). The BD model uses a Poisson likelihood similar to ours, but assigns a different prior distribution on the partition indicators c_1, . . . , c_n that is inspired by K-means clustering. More specifically, their prior is specified hierarchically through a prior distribution on the number of clusters, K, a uniform prior distribution on the set of cluster centers, denoted (g_1, . . . , g_K), with g_k ∈ {1, . . . , n}, and a measure of distance between regions, which in their case is given by the minimum number of boundaries that need to be crossed to move between two regions. Given K and (g_1, . . . , g_K), each region i is assigned to the cluster whose center is closest. If a region is equally distant from two cluster centers, it is assigned to the cluster with the smallest index position. As we will show in our simulations, this prior strongly favors round-shaped cluster configurations. In terms of the prior for the log-RR, Knorr-Held & Raßer (2000) make a choice similar to ours. In particular, they set θ_i = log β̃_{c_i}, with log β̃_k ~ N(μ, σ²). Their posterior sampling scheme is based on a reversible jump MCMC iterating over birth, death, shift, switch, height and hyper steps. This algorithm tends to mix very slowly and requires a large number of iterations (on the order of millions of samples) to produce approximations with a reasonably small Monte Carlo error.

Prior specification and comparison criteria

In our simulation studies, we compare the performance of the RCRP model and the BD model under different specifications of the prior distributions. For the RCRP model, we fix α = 4 (resulting in a prior distribution on the number of clusters centered roughly around K = 5), and a product of independent priors for (μ, σ²),

π₁(μ, σ²) = N(μ | q_{0.5}, s_n²/2) IG(σ² | 2, s_n²/2),   (14)

where q_{0.5} and s_n² denote the median and unbiased sample variance of log(y_i/h_i). The hyperparameters κ = q_{0.5} and a = 2 were chosen such that the prior means of the log-RRs are centered at q_{0.5} and the prior variance of σ² is infinite. A sensitivity analysis involving three more combinations of hyperparameters for φ² and b is included in the supplementary material (Wehrhahn et al., 2020). Results appeared to be robust to the different prior specifications.

To ensure that the comparison between models is fair, we slightly modify the hyperpriors for the BD model from those originally used in Knorr-Held & Raßer (2000). In particular, we assign (μ, σ²) the same hyperpriors as the RCRP model. Additionally, rather than the original geometric prior used by Knorr-Held & Raßer (2000), we consider two slightly different prior distributions for K that more closely resemble the prior implied by our model. These two priors correspond to a truncated Poisson and a truncated negative binomial, respectively.

For each of the two models, we report point estimates for the cluster configuration, ĉ, heat maps of the posterior probability of two regions belonging to the same cluster, π(c_i = c_j | y), i ≠ j (which provide a measure of the uncertainty associated with the point estimates), and a comparison between the prior and posterior distributions over the number of clusters K. The point estimate ĉ is obtained by minimizing (using iterative componentwise optimization) a slightly modified version of the expected loss function discussed in Lau & Green (2007),

L(ĉ) = Σ_{i<j} [ w₁ 1(ĉ_i = ĉ_j)(1 − π(c_i = c_j | y)) + w₂ 1(ĉ_i ≠ ĉ_j) π(c_i = c_j | y) ],   (15)

minimized over configurations with Q(ĉ, W) = 1. Note that the ratio w₁/w₂ controls the relative loss of incorrectly clustering or separating a pair of regions, and the multiplier Q(ĉ, W) ensures that our point estimate corresponds to an admissible partition. In our illustrations we set w₁/w₂ = 1.
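A sketch of how the restricted loss can be evaluated for a candidate partition (names are ours, and the handling of the Q(ĉ, W) multiplier as a hard exclusion is our reading of the description above; connected() is the helper from the earlier sketch):

import numpy as np

def restricted_binder_loss(c_hat, pi, W, w1=1.0, w2=1.0):
    """Expected pairwise loss of candidate partition c_hat, given posterior
    co-clustering probabilities pi[i][j] = P(c_i = c_j | y); candidates with
    a disconnected cluster are ruled out, mirroring Q(c_hat, W)."""
    for k in set(c_hat):
        block = [i for i, ci in enumerate(c_hat) if ci == k]
        if not connected(block, W):
            return np.inf                       # inadmissible partition
    n, loss = len(c_hat), 0.0
    for i in range(n):
        for j in range(i + 1, n):
            if c_hat[i] == c_hat[j]:
                loss += w1 * (1.0 - pi[i][j])   # cost of incorrectly clustering
            else:
                loss += w2 * pi[i][j]           # cost of incorrectly separating
    return loss

Iterative componentwise optimization then amounts to sweeping over regions and, for each, picking the admissible label that minimizes this loss.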
We evaluate the ability of the models to identify clusters using the adjusted Rand index (ARI; Hubert & Arabie, 1985) of the posterior cluster configurations. The ARI evaluates the agreement in cluster assignment between two cluster configurations. It ranges between −1 and 1, with larger values indicating stronger agreement between cluster configurations. On the other hand, estimates of the log-RR are evaluated through the mean squared error (MSE) of the posterior mean of the log-RR.

Results

For each model, a single Markov chain was generated. In all cases, the inferences presented below are based on 10,000 samples obtained after burn-in and thinning. The amount of burn-in varied; we discarded 40,000 samples in both instances of the RCRP model, 200,000 for the BD model in Scenario I, and 300,000 for the BD model in Scenario II. Convergence was evaluated by standard convergence tests, as implemented in the CODA R package (Plummer et al., 2009), and by examining the trace plots of the log-posterior distribution, the number of clusters K(c), and the hyperparameters μ and σ².

Figure 3 displays point estimates of the cluster configurations for Scenario I and Scenario II. Under Scenario I, the BD model struggles even though the rates associated with the clusters are quite well differentiated. In particular, note that the BD model breaks down the 8 snake-like clusters into a large number of very small, round clusters. Under Scenario II, the BD model improves its performance substantially, but still tends to slightly overestimate the number of clusters. In particular, note that the upper-left and bottom-right clusters are broken down by the BD model into two subclusters each. In contrast, the RCRP model is able to recover the true cluster structure in both scenarios.

Figure 4 provides further insight into the estimates of the cluster structure by displaying heat maps of the posterior probability of two regions belonging to the same cluster. To facilitate visualization, regions are ordered according to the true cluster configuration. In general, there is very little uncertainty associated with the point estimates presented in Figure 3, particularly for the RCRP model. Along similar lines, Figure 5 displays boxplots of the posterior distribution of the ARI. We can see that, while the posterior distribution of the ARI for the RCRP model is concentrated around 1 (further confirming that the model places high probability on the true clustering configuration), the values for the BD model tend to be much smaller, particularly in Scenario I. It is also worth noting that, for the BD model, π₂(K) leads to much higher variability in the quality of the cluster estimates.

Finally, we compute the MSE of the log-RR with respect to the true log-RR. Under Scenario I, the MSEs for the RCRP model with α = 4, the BD model with π₁(K), and the BD model with π₂(K) were 0.00145, 0.00579, and 0.00731, respectively. Under Scenario II, the respective MSEs were 0.00057, 0.00073 and 0.00074. For both scenarios the RCRP model has the best performance; even in its best case, the MSE of the BD model was 3.99 and 1.28 times larger than the MSE of the RCRP model in Scenario I and Scenario II, respectively. Also note that, under the BD model, prior π₁(K) shows the best performance. Further results comparing the performance of the models regarding the number of clusters and the log-RRs can be found in the online supplementary material (Wehrhahn et al., 2020).
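The two evaluation metrics just described are straightforward to compute; a sketch (names are ours, using scikit-learn's adjusted_rand_score):

import numpy as np
from sklearn.metrics import adjusted_rand_score

def ari_posterior(c_true, c_samples):
    """ARI of each sampled cluster configuration against the true labels."""
    return [adjusted_rand_score(c_true, c) for c in c_samples]

def log_rr_mse(theta_true, theta_post_mean):
    """MSE of the posterior mean log-RR against the true log-RR."""
    t, m = np.asarray(theta_true), np.asarray(theta_post_mean)
    return float(np.mean((t - m) ** 2))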
An application to oral cancer in Germany

As a second illustration, this section reports our analysis of the oral cancer mortality data discussed in Knorr-Held & Raßer (2000) (see Figure 6). The data set records the observed and expected number of deaths during the period 1986-1990 across 544 administrative districts in Germany. As in the simulation studies, this analysis emphasizes a comparison between the RCRP and BD models.

Prior specification

Under the RCRP model, we employ a discrete uniform prior distribution for α, with support on the set {16, 20, 24, 28, 32}, denoted π_1(α). This choice results in a prior distribution for the number of clusters centered around 16. As we discussed before, the use of a discrete prior enables additional flexibility while containing the computational cost of the MCMC algorithm (by allowing us to pre-compute approximations to the ratios of intractable normalizing constants used in the noisy exchange algorithm). As in the case of the simulated data, the hyperprior on (μ, σ^2) is given by (14). For the BD model, two prior specifications for K were considered:

π_3(K) = NB(K | p = 0.65, r = 92.84) 1(K ≥ 1),
π_4(K) = NB(K | p = 0.01, r = 1) 1(K ≥ 1).

Results

As before, we generate a single Markov chain for each model specification. For the RCRP model, one sample of size 10,000 was generated by saving 1 out of every 10 iterations after a burn-in period of 20,000 samples. In order to approximate the intractable ratios of normalizing constants, one sample of size 10,000 was generated for each pair of values of α in the support of the discrete prior, after a burn-in period of 15,000 iterations. Based on these samples, the ratios of normalizing constants used in the Metropolis-Hastings updating step for α were estimated as in (13). For the BD model, we also generated posterior samples of size 10,000. However, following Knorr-Held & Raßer (2000), in this case the samples were obtained after a burn-in period of 1,000,000 iterations by saving 1 out of every 10,000 iterations. As in the case of the simulated data, convergence of these chains was evaluated by standard convergence tests, as implemented in the CODA R package (Plummer et al., 2009), and by examining the trace plots of the log-posterior distribution, the number of clusters in the data, and the hyperparameters μ and σ^2.

Figure 7 presents our point estimate of the partition structure under each model. The differences are again striking. The RCRP model identifies five clusters: a main one, formed by the vast majority of regions, two smaller ones (formed by 110 and 9 regions, respectively), and a couple of singleton clusters. In contrast, the BD model identifies 84 and 98 clusters under π_4(K) and π_3(K), respectively. The shape of the clusters suggests that we might be in a situation similar to that of our first simulated data set, where the BD model artificially splits large, non-circular clusters into a large number of smaller, roughly circular ones. To further emphasize the difference in the reported partition structures, we present in Figure 8 heat maps of the posterior probability of two regions belonging to the same cluster. These graphs show that, while the level of uncertainty in the point estimates is somewhat larger here than in the simulation studies, all models are quite certain about the main features of their reported partition estimates.
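The truncated Negative Binomial priors π_3(K) and π_4(K) used above are easy to evaluate numerically. The following is a small Python sketch (ours), assuming the NB(p, r) density matches scipy's nbinom(n = r, p = p) convention, which supports non-integer r.

```python
import numpy as np
from scipy.stats import nbinom

def truncated_nb_pmf(k, p, r):
    """pmf of a Negative Binomial NB(p, r) truncated to k >= 1."""
    k = np.asarray(k)
    base = nbinom.pmf(k, r, p)
    norm = 1.0 - nbinom.pmf(0, r, p)  # renormalize after removing mass at 0
    return np.where(k >= 1, base / norm, 0.0)

ks = np.arange(1, 200)
pi3 = truncated_nb_pmf(ks, p=0.65, r=92.84)
pi4 = truncated_nb_pmf(ks, p=0.01, r=1.0)  # reduces to a truncated geometric
```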
Finally, Figure 9 displays the posterior mean log-RR estimates under both models. There are some clear similarities in the estimated rates. For example, both the RCRP and the BD models estimate a higher incidence of the disease in the southwest corner of Germany, and a lower incidence in the east of the country. However, it is clear that the RCRP model smooths the rates much more than the BD model. This is not surprising given the very different estimates of the partition structures induced by these models. Further results comparing the performance of the models with respect to the number of clusters can be found in the supplementary material (Wehrhahn et al., 2020).

Discussion

We have proposed a restricted mixture model for detecting clusters of non-communicable diseases. The restriction is imposed on the CRP prior for the cluster membership vector, constraining the space of possible configurations to those resulting in connected clusters, i.e., those where any pair of regions in a cluster is joined by a path that includes only regions belonging to that cluster. We show that the model is very flexible and less computationally demanding than the alternative BD model.

A number of extensions of this model are possible. For example, while we have focused here on restrictions of a CRP prior, the basic approach could be used to restrict any other exchangeable SSM. Our key result on the structure of the full conditional distribution of the restricted model should extend in a straightforward fashion. Similarly, the model could easily accommodate different likelihood functions and other, more general definitions of the binary neighborhood matrix W. Furthermore, while this paper has focused on applications to disease clustering, our approach can clearly be extended to applications in time series and image segmentation.

Another extension would involve using dependent priors for the cluster-specific parameters θ_1, ..., θ_K. Indeed, we might expect nearby clusters to have more similar rates than clusters located far apart. For example, we could consider a (proper) conditionally autoregressive prior for θ_1, ..., θ_K. However, such an approach introduces identifiability issues that make prior elicitation difficult. In particular, note that a model with a single cluster is equivalent to the (limit) model in which we allow for any number of clusters but let the spatial correlation in the prior go to 1. This means that the prior distribution would have an even more critical effect on the model, one that would be hard to measure a priori. In that sense, the use of independent priors can be seen as a kind of "maximum separation" prior that maximizes the ability of the model to identify a disease cluster. Finally, an anonymous referee suggested the use of a more general form for Q(c, W) that would allow for continuous values on [0, 1].

Our model works by restricting an EPPF, which leads to a model that is partially exchangeable. That is, the probability distribution implied by the model is invariant to simultaneous permutations of the observations and the rows and columns of the neighborhood matrix W. A referee pointed out that using an exchangeable PPF as the starting point is not required. While this is true as a general modeling strategy, the appropriateness of such an approach will depend on the application at hand.
For example, in applications to time series data (where there is a natural ordering to the observations), the use of such a nonexchangeable PPF as a starting point might make sense. However, in spatial applications such as the ones considered in this paper, partial exchangeability would seem to be the right assumption: why would one want a different model when counties are listed alphabetically in the database versus when they are listed by, say, population size?

One shortcoming of our model is that it uses a single set of random effects to capture both overdispersion in the Poisson model and spatial dependence. One way to address this limitation is to use two sets of random effects: one in which they are independent (meant to capture overdispersion), and another in which they are dependent and modeled using our restricted CRP (therefore capturing the spatial effects in the data). See, for example, Banerjee et al. (2014). Such an approach could be easily incorporated into our disease clustering model.

Appendix A: Two special restrictions

In what follows we consider two special adjacency graphs for which the normalizing constant C(α, W) can be computed in closed form: (1) when the underlying graph G is the star graph, and (2) when the underlying graph G is linear. The former is of interest for its simplicity, and the latter because of its potential application to modeling ordered/time series data. The analysis of these two special graphs is also useful for highlighting the differences between our model and that of Martínez et al. (2014). Indeed, if the approach in Martínez et al. (2014) were extended beyond time series models to accommodate general adjacency graphs, it would lead to exactly the same prior distribution on partitions under either graph, because the number of admissible partitions of any given size happens to be the same under both graphs. In contrast, our model leads to quite different specifications for these two graphs because our prior weights partitions according to the size of the clusters.

A.1. Restriction under a star graph

Under this adjacency structure, any cluster configuration with l clusters contains exactly l − 1 singleton clusters and one cluster with n − l + 1 observations (which must include the root node). This is because any cluster of size greater than one must necessarily include the root node in order to be formed of adjacent regions. See the top row of Table 2 for an example with n = 4. Hence,

f_l(W_star) = C(n−1, l−1) Γ(n − l + 1) = (n − 1)!/(l − 1)!,

where C(n−1, l−1) denotes the binomial coefficient, and

C(α, W_star) = Σ_{l=1}^{n} [(n − 1)!/(l − 1)!] α^l.

A.2. Restriction under a linear graph

In a linear graph there are also a total of C(n−1, l−1) configurations involving exactly l clusters. Indeed, choosing l adjacent clusters is equivalent to picking l − 1 breakpoints out of the n − 1 possible positions for them. However, unlike the star case, each of these configurations involves clusters of different sizes (see the bottom row of Table 2).

If setting c_i = l leads to an admissible configuration, then we must consider two cases. For l ≤ K(c_{−i}), the expression in (16) follows from the fact that

Γ(n_k(c_{−i}) + 1(k = l)) = Γ(n_k(c_{−i})) if k ≠ l, and n_k(c_{−i}) Γ(n_k(c_{−i})) if k = l.

On the other hand, if l = K(c_{−i}) + 1, the corresponding expression is given in (17). The simplified expressions in (7) are obtained by combining (16) and (17). The previous derivation assumes that there is at least one value of c_i in the set {1, ..., K(c_{−i}) + 1} that leads to an admissible configuration.
Otherwise the normalizing constant is zero and the full conditional is not well defined. Since our MCMC algorithm must start in an admissible configuration, and the full conditionals maintain admissibility, this is not an issue for our purposes.
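As a quick numerical check of the closed-form expression in A.1, here is a short Python sketch (ours) that evaluates C(α, W_star) both from the formula and by brute-force enumeration of admissible partitions of the star graph for small n.

```python
from itertools import product
from math import factorial, prod

def C_star_closed(alpha, n):
    """C(alpha, W_star) = sum_{l=1}^{n} (n-1)!/(l-1)! * alpha**l."""
    return sum(factorial(n - 1) // factorial(l - 1) * alpha**l
               for l in range(1, n + 1))

def C_star_brute(alpha, n):
    """Sum alpha**K * prod_k Gamma(n_k) over admissible partitions of the
    star graph (region 0 is the hub): a partition is admissible iff every
    cluster of size > 1 contains region 0."""
    total, seen = 0, set()
    for labels in product(range(n), repeat=n):
        # canonicalize labels so each set partition is counted exactly once
        relabel = {}
        canon = tuple(relabel.setdefault(c, len(relabel)) for c in labels)
        if canon in seen:
            continue
        seen.add(canon)
        clusters = [[i for i in range(n) if canon[i] == k]
                    for k in range(max(canon) + 1)]
        if all(len(cl) == 1 or 0 in cl for cl in clusters):
            # Gamma(n_k) = (n_k - 1)! for integer cluster sizes
            total += alpha ** len(clusters) * prod(
                factorial(len(cl) - 1) for cl in clusters)
    return total

assert C_star_closed(1, 4) == C_star_brute(1, 4) == 16
```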
Investigating Transferability in Pretrained Language Models

How does language model pretraining help transfer learning? We consider a simple ablation technique for determining the impact of each pretrained layer on transfer task performance. This method, partial reinitialization, involves replacing different layers of a pretrained model with random weights, then finetuning the entire model on the transfer task and observing the change in performance. This technique reveals that in BERT, layers with high probing performance on downstream GLUE tasks are neither necessary nor sufficient for high accuracy on those tasks. Furthermore, the benefit of using pretrained parameters for a layer varies dramatically with finetuning dataset size: parameters that provide tremendous performance improvement when data is plentiful may provide negligible benefits in data-scarce settings. These results reveal the complexity of the transfer learning process, highlighting the limitations of methods that operate on frozen models or single data samples.

Introduction

Despite the striking success of transfer learning in NLP, remarkably little is understood about how these pretrained models improve downstream task performance. Recent work on understanding deep NLP models has centered on probing, a methodology that involves training classifiers for different tasks on model representations (Alain and Bengio, 2016; Conneau et al., 2018; Hupkes et al., 2018; Liu et al., 2019; Tenney et al., 2019a,b; Goldberg, 2019; Hewitt and Manning, 2019). While probing aims to uncover what a network has already learned, a major goal of machine learning is transfer: systems that build upon what they have learned to expand what they can learn.

Figure 1: The three experiments we explore. Lighter shades indicate randomly reinitialized layers, while darker shades indicate layers with BERT parameters. For layer permutations, all layers hold BERT parameters; what changes between trials is their order. In all three experiments, the entire model is finetuned end-to-end on the GLUE task.

Given that most recent models are updated end-to-end during finetuning (e.g., Devlin et al., 2019; Howard and Ruder, 2018; Radford et al., 2019), it is unclear how, or even whether, the knowledge uncovered by probing contributes to these models' transfer learning success. In a sense, probing can be seen as quantifying the transferability of representations from one task to another, as it measures how well a simple model (e.g., a softmax classifier) can perform the second task using only features from a model trained on the first. However, when pretrained models are finetuned end-to-end on a downstream task, what is transferred is not the features from each layer of the pretrained model, but its parameters, which define a sequence of functions for processing representations. Critically, these functions and their interactions may shift considerably during training, potentially enabling higher performance despite not initially extracting features correlated with this task. We refer to this phenomenon of how layer parameters from one task can help transfer learning on another task as transferability of parameters.

Figure 2: The benefit of using BERT parameters instead of random parameters at a particular layer varies dramatically depending on the size of the finetuning dataset. However, as finetuning dataset size decreases, the curves align more closely with probing performance at each layer. Solid lines show finetuning results after reinitializing all layers past layer k in BERT-Base;
k = 12 shows the full BERT model, while k = 0 shows a model with all layers reinitialized. Line darkness indicates subsampled dataset size. The dashed lines show probing performance at each layer. Error bars are 95% CIs.

In this work, we investigate a methodology for measuring the transferability of different layer parameters in a pretrained language model to different transfer tasks, using BERT (Devlin et al., 2019) as our subject of analysis. Our methods, described more fully in Section 2 and Figure 1, involve partially reinitializing BERT: replacing different layers with random weights and then observing the change in task performance after finetuning the entire model end-to-end. Compared to possible alternatives like freezing parts of the network or removing layers, partial reinitialization enables fairer comparisons by keeping the network's architecture and capacity constant between trials, changing only the parameters at initialization. Through experiments across different layers, tasks, and dataset sizes, this approach enables us to shed light on multiple dimensions of the transfer learning process: Are the early layers of the network more important than later ones for transfer learning? Do individual layers become more or less critical depending on the task or amount of finetuning data? Does the position of a particular layer within the network matter, or do its parameters aid optimization regardless of where they are in the network?

We find that when finetuning on a new task:

1. Transferability of BERT layers varies dramatically depending on the amount of finetuning data available. Thus, claims that certain layers are universally responsible or important for learning certain linguistic tasks should be treated with caution. (Figure 2)

2. Transferability of BERT layers is not in general predicted by a layer's probing performance for that task. However, as finetuning dataset size decreases, the two quantities exhibit a greater correspondence. (Figure 2, dashed lines)

3. Even holding dataset size constant, the most transferable BERT layers differ by task: for some tasks, only the early layers are important, while for others the benefits are more distributed across layers. (Figure 3)

4. Reordering the pretrained BERT layers before finetuning decreases downstream accuracy significantly, confirming that pretraining does not simply provide better-initialized individual layers; instead, transferability through learned interactions across layers is crucial to the success of finetuning. (Figure 4)

2 How many pretrained layers are necessary for finetuning?

Our first set of experiments aims to uncover how many pretrained layers are sufficient for accurate learning of a downstream task. To do this, we perform a series of incremental reinitialization experiments, where we reinitialize all layers after the kth layer of BERT-Base, for values k ∈ {0, 1, . . . , 12}, replacing them with random weights. We then finetune the entire model end-to-end on the target task. Note that k = 0 corresponds to a BERT model with all layers reinitialized, while k = 12 is the original BERT model. We do not reinitialize the BERT word embeddings. As BERT uses residual connections (He et al., 2016) around layers, the model can simply learn to ignore any of the reinitialized layers if they are not helpful during finetuning. We use the BERT-Base uncased model, implemented in PyTorch (Paszke et al., 2019) via the Transformers library (Wolf et al., 2019).
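As a concrete illustration, the following is a minimal PyTorch sketch (ours, not the authors' released code) of the incremental reinitialization step, assuming the Transformers BertForSequenceClassification interface. The initialization scheme (truncated normal with σ = 0.02 truncated at ±0.04, layer norm reset to γ = 1, β = 0) follows Appendix B; zeroing the biases is our assumption.

```python
import torch
from transformers import BertForSequenceClassification

def reinit_layers_after(model, k):
    """Replace BERT-Base encoder layers k..11 (0-indexed) with random
    weights, keeping the first k pretrained layers and the embeddings."""
    for layer in model.bert.encoder.layer[k:]:
        for module in layer.modules():
            if isinstance(module, torch.nn.LayerNorm):
                torch.nn.init.ones_(module.weight)   # gamma = 1
                torch.nn.init.zeros_(module.bias)    # beta = 0
            elif isinstance(module, torch.nn.Linear):
                torch.nn.init.trunc_normal_(
                    module.weight, mean=0.0, std=0.02, a=-0.04, b=0.04)
                if module.bias is not None:
                    torch.nn.init.zeros_(module.bias)  # our assumption
    return model

model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)
model = reinit_layers_after(model, k=6)  # keep the first 6 pretrained layers
# ...then finetune the whole model end-to-end on the GLUE task as usual.
```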
We finetune the network using Adam (Kingma and Ba, 2015), with a batch size of 8, a learning rate of 2e-5, and default parameters otherwise. More details about reinitialization, training, statistical significance, and other methodological choices can be found in the Appendix.

We conduct our experiments on three English language tasks from the GLUE benchmark, spanning the domains of sentiment, reasoning, and syntax (Wang et al., 2018):

SST-2 The Stanford Sentiment Treebank involves binary classification of a single sentence from a movie review as positive or negative (Socher et al., 2013).

QNLI Question Natural Language Inference is a binary classification task derived from SQuAD (Rajpurkar et al., 2016; Wang et al., 2018). The task requires determining whether, for a given (QUESTION, ANSWER) pair, the QUESTION is answered by the ANSWER.

CoLA The Corpus of Linguistic Acceptability is a binary classification task that requires determining whether a single sentence is linguistically acceptable (Warstadt et al., 2019).

Because pretraining appears to be especially helpful in the small-data regime (Peters et al., 2018), it is crucial to isolate task-specific effects from data quantity effects by controlling for finetuning dataset size. To do this, we perform our incremental reinitializations on randomly sampled subsets of the data: 500, 5k, and 50k examples (excluding 50k for CoLA, which contains only 8.5k examples). The 5k subset size is then used as the default for our other experiments. To ensure that an unrepresentative sample is not chosen by chance, we run multiple trials with different subsamples. Confidence intervals produced through multiple trials also demonstrate that trends hold regardless of intrinsic task variability. While similar reinitialization schemes have been explored by Yosinski et al. (2019) in an NLP context, none investigate these data quantity- and task-specific effects.

Figure 2 shows the results of our incremental reinitialization experiments. These results show that the transferability of a BERT layer varies dramatically based on the finetuning dataset size. Across all but the 500-example trials of SST-2, a more specific trend holds: earlier layers provide more of an improvement in finetuning performance when the finetuning dataset is large. This trend suggests that larger finetuning datasets may enable the network to learn a substitute for the parameters in the middle and later layers. In contrast, smaller datasets may leave the network reliant on existing feature processing in those layers. However, across all tasks and dataset sizes, it is clear that the pretrained parameters by themselves do not determine the impact they will have on finetuning performance: instead, a more complex interaction occurs between the parameters, optimizer, and the available data.

3 Does probing predict layer transferability?

What is the relationship between transferability of representations, measured by probing, and transferability of parameters, measured by partial reinitialization? To compare, we conduct probing experiments for our finetuning tasks on each layer of the pretrained BERT model. Our probing model averages each layer's hidden states, then passes the pooled representation through a linear layer and softmax to produce probabilities for each class. These task-specific components are identical to those in our reinitialization experiments; however, we keep the BERT model's parameters frozen when training our probes.
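A minimal sketch of such a probe (ours; masking padded positions in the mean pooling is our assumption, the paper says only that hidden states are averaged) might look as follows:

```python
import torch
import torch.nn as nn
from transformers import BertModel

class LinearProbe(nn.Module):
    """Mean-pools one layer's hidden states and applies a linear classifier;
    class probabilities come from the softmax inside the usual loss."""
    def __init__(self, hidden_size=768, num_classes=2):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, hidden_states, attention_mask):
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (hidden_states * mask).sum(1) / mask.sum(1)  # masked mean
        return self.classifier(pooled)

bert = BertModel.from_pretrained("bert-base-uncased")
bert.requires_grad_(False)  # frozen: only the probe is trained
probe = LinearProbe()

def probe_logits(input_ids, attention_mask, layer=6):
    out = bert(input_ids, attention_mask=attention_mask,
               output_hidden_states=True)
    # hidden_states[0] is the embedding output; [layer] is layer `layer`
    return probe(out.hidden_states[layer], attention_mask)
```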
Our results, presented in Figure 2 (dashed lines), show a significant difference between the layers with the highest probing performance and the reinitialization curves for the data-rich settings (darkest solid lines). For example, the probing accuracy on all tasks is near chance for the first six layers. Despite this, these early layer parameters exhibit significant transferability to the finetuning tasks: preserving them while reinitializing all other layers enables large gains in finetuning accuracy across tasks. Interestingly, however, we observe that the smallest-data regime's curves are much more similar to the probing curves across all tasks than the larger-data regimes'. Smaller finetuning datasets enable fewer updates to the network before overfitting occurs; thus, it may be that finetuning interpolates between the extremes of probing (no data) and fully supervised learning (enough data to completely overwrite the pretrained parameters). We leave a more in-depth exploration of this connection to future work.

4 Which layers are most useful for finetuning?

While the incremental reinitializations measure each BERT layer's incremental effect on transfer learning, they do not assess each layer's contribution in isolation, relative to either the full BERT model or an entirely reinitialized model. Measuring this requires eliminating the number of pretrained layers as a possible confounder. To do so, we conduct a series of localized reinitialization experiments, where we take all blocks of three consecutive layers and either 1) reinitialize those layers or 2) preserve those layers while reinitializing the others in the network. (See the Appendix for more discussion and for experiments where only one layer is reinitialized.) These localized reinitializations help determine the extent to which BERT's different layers are either necessary (performance decreases when they are removed) or sufficient (performance is higher than random initialization when they are kept) for a specific level of performance. Again, BERT's residual connections permit the model to ignore reinitialized layers' outputs if they harm finetuning performance.

Figure 3: Early layers provide the most QNLI gains, but middle ones yield an added boost for CoLA and SST-2. Finetuning results for 1) reinitializing a consecutive three-layer block ("block reinitialized") and 2) reinitializing all other layers ("block preserved").

These results, shown in Figure 3, demonstrate that the earlier layers appear to be generally more helpful for finetuning relative to the later layers, even when controlling for the amount of finetuning data. However, there are strong task-specific effects: SST-2 appears to be particularly damaged by removing middle layers, while the effects on CoLA are distributed more uniformly. The effects on QNLI appear to be concentrated almost entirely in the first four layers of BERT, suggesting opportunities for future work on whether sparsity of this sort indicates the presence of easy-to-extract features correlated with the task label. These results support the hypothesis that different kinds of feature processing learned during BERT pretraining are helpful for different finetuning tasks, and provide a new way to gauge similarity between different tasks.

5 How vital is the ordering of pretrained layers?
We also investigate whether the success of BERT depends mostly on learned inter-layer phenomena, such as learned feature processing pipelines (Tenney et al., 2019a), or intra-layer phenomena, such as a learned feature-agnostic initialization scheme that aids optimization (e.g., Glorot and Bengio, 2010). To approach this question, we perform several layer permutation experiments, where we randomly shuffle the order of BERT's layers before finetuning. The degree to which finetuning performance is degraded in these runs indicates the extent to which BERT's finetuning success depends on a learned composition of feature processors, as opposed to better-initialized individual layers that would help optimization anywhere in the network.

These results, plotted in Figure 4, show that scrambling BERT's layers reduces their finetuning ability to not much above that of a randomly initialized network, on average. This decrease suggests that BERT's transfer abilities are highly dependent on the inter-layer interactions learned during pretraining. We also test for correlation of performance between tasks by comparing task pairs for each permutation, as we use the same permutation for the nth run of each task. The high correlation coefficients for most pairs, shown in Table 1, suggest that BERT finetuning relies on similar inter-layer structures across tasks.

Conclusion

We present a set of experiments to better understand how the different pretrained layers in BERT influence its transfer learning ability. Our results reveal the unique importance of transferability of parameters to successful transfer learning, distinct from the transferability of fixed representations assessed by probing. We also disentangle important factors affecting the role of layers in transfer learning: task vs. quantity of finetuning data, number vs. location of pretrained layers, and presence vs. order of layers. While probing continues to advance our understanding of linguistic structures in pretrained models, these results indicate that new techniques are needed to connect these findings to their potential impacts on finetuning. The insights and methods presented here are one contribution toward this goal, and we hope they enable more work on understanding why and how these models work.

B Reinitialization

We reinitialize all parameters in each layer, except those for layer normalization (Ba et al., 2016), by sampling from a truncated normal distribution with µ = 0, σ = 0.02 and truncation range (−0.04, 0.04). For the layer norm parameters, we set β = 0, γ = 1. This matches how BERT was initialized (see the original BERT code on GitHub and the corresponding TensorFlow documentation).

C Subsampling, number of trials, and error bars

The particular datapoints subsampled can have a large impact on downstream performance, especially when data is scarce. To capture the full range of outcomes due to subsampling, we randomly sample a different dataset for each trial index. Because of this larger variation when data is scarce, we perform 50 trials for the experiments with 500 examples, and three trials for the other incremental reinitialization experiments. A scatterplot of the 500-example trials is shown in Figure 5. For the localized reinitialization experiments, we perform ten trials each. Error bars shown on all graphs in the main text are 95% confidence intervals calculated with a t-distribution.
D Localized reinitializations of single layers

We also experiment with performing our localized reinitialization experiments at the level of a single layer. To do so, we perform three trials of reinitializing each layer k ∈ {1, . . . , 12} and then finetuning on each of the three GLUE tasks. Our results are plotted in Figure 6. Interestingly, we observe little effect on finetuning performance from reinitializing any single layer (except for the effect of reinitializing the first layer on CoLA performance). This lack of effect suggests either redundant information between layers or that the "interface" exposed by the two neighboring layers somehow beneficially constrains optimization.

E Number of finetuning epochs

He et al. (2019) found that much or all of the performance gap between an ImageNet-pretrained model and a model trained from random initialization could be closed when the latter model was trained for longer. To evaluate this, we track validation losses for up to ten epochs in our incremental experiments, for k ∈ {0, 6, 12}, across all tasks and for 500 and 5k examples. We find minimal effects of training longer than three epochs for the 5k subsamples, but improvements of several percentage points from training for five epochs in the trials with 500 examples. Thus, for the trials with 500 examples in Figure 2, we train for five epochs, while training for three epochs in all other trials. We train our probing experiments (8 trials per layer) with early stopping for a maximum of 40 epochs on the full dataset.

F Higher learning rate for reinitialized layers

In their reinitialization experiments on a convolutional neural network for medical images, Raghu et al. (2019) found that a 5x larger learning rate on the reinitialized layers enabled their model to achieve higher finetuning accuracy. To evaluate this possibility in our setting, we increase the learning rate by a factor of five for the reinitialized layers. The results for our incremental reinitializations are plotted in Figure 7. A higher learning rate appears to increase the variance of the evaluation metrics while not improving performance. Thus, we keep the learning rate the same across layers.

G Layer norm

Because the residual connections around each sublayer in BERT are of the form LayerNorm(x + Sublayer(x)), reinitializing a particular layer neutralizes the effect of the last layer norm application from the previous layer in a way that cannot be circumvented through the residual connections. However, for brevity we simply refer to "reinitializing a layer" in this paper. We also assessed whether preserving the layer norm parameters in each layer might aid optimization. To do so, we preserved these parameters in our incremental trials with 5k examples. These trials are plotted in Figure 8, and demonstrate that preserving layer norm does not aid (and may even harm) finetuning of reinitialized layers.

H Dataset descriptions and statistics

We display more information about the finetuning datasets, including the full size of the datasets.

I.2 Computing infrastructure

All experiments were run on single Titan XP GPUs.

I.4 Average runtime

Average runtime for each approach:

I.5 Evaluation method

To evaluate the performance of our method, we compute accuracy for SST-2 and QNLI and the Matthews Correlation Coefficient (Matthews, 1975) for CoLA. We compute these metrics on the official validation sets, which are never seen by the model during training. Accuracy measures the ratio of correctly predicted labels over the size of the test set.
Formally:

accuracy = (TP + TN) / (TP + TN + FP + FN)

Since CoLA presents class imbalances, MCC is used, which is better suited for unbalanced binary classifiers (Warstadt et al., 2019). It measures the correlation of two Boolean distributions, giving a value between −1 and 1. A value of 0 means that the two distributions are uncorrelated, regardless of any class imbalance.

MCC = (TP · TN − FP · FN) / sqrt((TP + FP)(TP + FN)(FP + TN)(TN + FN))

I.6 Hyperparameters

We performed one experiment with a 5x learning rate and implemented early stopping to choose the number of epochs for the probing experiments. For batch size and learning rate, we kept the default parameters for all tasks:

• Learning rate: 2e-5
• Batch size: 8
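The two metrics above are straightforward to compute from confusion-matrix counts; here is a small Python sketch (ours) for reference. scikit-learn's matthews_corrcoef offers an equivalent, label-based implementation.

```python
import math

def accuracy(tp, tn, fp, fn):
    """Fraction of correctly predicted labels."""
    return (tp + tn) / (tp + tn + fp + fn)

def mcc(tp, tn, fp, fn):
    """Matthews Correlation Coefficient; returns 0.0 when a marginal is
    empty (the conventional value for an undefined denominator)."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0
```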
Research on an Online Monitoring Device for the Powder Laying Process of Laser Powder Bed Fusion

Improving the quality of metal additive manufacturing parts requires online monitoring of the powder bed laying procedure during laser powder bed fusion (LPBF) production. In this article, a visual online monitoring tool for flaws in the powder laying process is examined, and machine vision technology is applied to LPBF manufacture. A multiscale improvement and model channel pruning optimization method based on convolutional neural networks is proposed, which makes up for the deficiencies of small-scale powder laying defect recognition methods, reduces the redundant parameters of the model, and increases the processing speed of the model while preserving its accuracy. Finally, we developed a powder laying defect recognition algorithm for the LPBF manufacturing process. Test experiments show the performance of the method: the minimum size of the detected defects is 0.54 mm, the accuracy rate of the feedback results is 98.63%, and the single-layer powder laying detection time is 3.516 s. The device can realize effective detection and control of common powder laying defects in the additive manufacturing process, avoids breakage of the scraper, and ensures the safe operation of the LPBF equipment.

Introduction

The LPBF manufacturing process is a complex and dynamic interaction between a high-energy laser and metal powder. It involves interactions between gas, liquid, and solid phases [1]. The quality of parts manufacturing is influenced by many factors, including material properties [2], optical path systems [3], scanning characteristics [4], geometry [5], and mechanical structure [6]. The powder laying process is one of many contributing elements, and as a significant factor affected by material characteristics and mechanical structure, it is a critical stage in the manufacturing of LPBF components. The efficacy of powder laying has a direct impact on component production quality [7]. If the powder layer is uneven, i.e., if there are faults in the powder layer such as missing powder, streaks, or piles of powder, the surface of the component, which is cooled and solidified after high-energy laser melting of this layer, will be uneven [8]. The accumulation of powder laying defects over time will also result in metallurgical flaws like spheroidization, porosity, cracks, and unmelted powder, which ultimately affect the part's manufacturing quality and, in extreme cases, can harm the powder spreading scraper and the LPBF molding machinery. In order to avoid powder laying errors that result in part molding failure or damage to LPBF equipment, it is crucial to ensure that the quality of each powder layer meets the manufacturing needs of the part throughout the manufacturing process.
In the LPBF parts manufacturing process, a squeegee performs the powder spreading step, spreading the metal powder on the molded substrate in preparation for the laser scanning step. The powder laying process is the first step in ensuring that parts are manufactured correctly and is a critical step affecting the quality of metal LPBF manufacturing. At present, research on the detection of powder laying quality mainly covers three stages of the molding process: detection of scraper state information during powder laying, detection of the powder layer state after the powder laying action is completed, and detection of the powder layer state after laser scanning and processing. During parts manufacturing, when process parameters or factors such as the substrate heating temperature are not set reasonably, parts will warp and exhibit surface spheroidization and non-fused defects, which interfere with the scraper spreading powder and cause serious damage to the scraper. To monitor the movement state of the scraper, B. Reinarz et al. [9] installed piezoelectric accelerometers on the powder spreading scraper for real-time monitoring of the scraper's acceleration during powder spreading: when a part protrudes above the powder bed, it interferes with the scraper to some degree, producing vibrations of varying intensity, and the protrusion of the molten cladding layer can be analyzed from the resulting speed changes. S. Kleszczynski et al. [10] improved on this work by using acceleration sensors to monitor changes in the scraper's speed during powder laying, with limit sensors at both ends of the scraper's travel recording the start and end positions, thereby achieving accurate acquisition of acceleration information at different positions. Warpage, surface spheroidization, super-high melted cladding layers, and other defects that occur during part molding, however, are the result of continuous accumulation. If these defects can be identified early on and appropriate action taken to prevent their accumulation, interference between the scraper and super-high regions of the part can be avoided, ensuring the safe and stable operation of the LPBF manufacturing process. As a result, machine learning, machine vision, and deep learning technologies are widely used to monitor the powder layer status during LPBF molding as a way to improve part molding quality and avoid damage to the LPBF molding equipment. Many studies [11-17] have extracted defects in the powder laying process using devices such as industrial cameras and infrared cameras coupled with image processing algorithms. For example, M. Abdelrahman et al. [18] utilized a high-resolution optical imaging monitoring system to photograph the powder bed before and after laser scanning, using multiple light sources from different directions to construct the image; they then created a binary template from a sliced 3D model of the part, which was used to index the optical image data to the part geometry, ultimately allowing the detection of defects in the area of the part. B. Shi et al.
[19] proposed building a powder bed inspection system using an industrial camera and multiple illumination sources, and proposed an improved illumination strategy by investigating the expression of defective features under different illumination. They also utilized image feature enhancement and an adaptive threshold segmentation algorithm based on the grayscale features of the powder bed image to separate defective regions, and experimentally compared three convolutional neural network algorithms, namely AlexNet, ResNet50, and VGG16, on three types of defective regions present in the powder layer: stripes, super-high regions, and incomplete powder laying. The results showed that the three kinds of defect data are prone to overfitting under complex models. Other scholars have identified and detected defects in the powder laying process by using industrial cameras, infrared cameras, thermal cameras, and other devices combined with deep learning algorithms [20-24]. The above research has realized the acquisition of scraper motion signals and powder bed images during the powder spreading process by installing piezoelectric accelerometers on the scraper, installing industrial cameras, and so on, and, combined with deep learning algorithms, has achieved recognition of defects in the powder spreading process. However, the following problems remain: (1) most studies only focus on the detection and identification of a single powder laying defect, whereas the defects generated during the LPBF molding process are more complex, and multiple defects are prone to occur in a single powder layer; (2) most studies obtain the feature information of the powder bed and scraper during the powder laying process and then use image processing algorithms to identify, detect, and analyze it offline, so they cannot realize real-time detection of powder laying defects during the LPBF molding process, nor control the LPBF equipment to pause printing or perform other operations when powder laying defects are serious. A previous experimental study of the laser powder bed fusion process found that powder laying defects occurred during the printing process and staff did not detect the problem in time to deal with it. On the one hand, this leads to the thickness of each metal powder layer not meeting the theoretical thickness and to over-melting or non-melting of the metal powder during manufacturing, causing damage to the squeegee rubber strip. It also wastes raw metal powder and time. As shown in Figure 1, in one JSJ100 equipment printing run, powder laying defects that were not dealt with in a timely manner continued to accumulate, ultimately leading to manufacturing failure.
Therefore, it is imperative to develop an online powder laying quality monitoring device with automatic real-time detection and result feedback functions, which realizes real-time detection and evaluation of the quality of each powder layer by visual detection technology, identifies a variety of powder laying defects, and intervenes in the manufacturing process of the LPBF equipment according to the classification results, so as to improve part manufacturing quality and efficiency. At the same time, recording and saving the inspection results for each powder layer during manufacturing provides data support for subsequent part quality tracing and process research, which is of great significance for the future development of metal additive manufacturing. In this paper, based on machine vision, deep learning, image processing, and other methods, an online monitoring device for LPBF powder laying quality is developed and
deployed on the JSJ100 LPBF metal additive manufacturing equipment developed by our team. Ultimately, accurate identification and judgment of powder laying defects is realized, and online identification and monitoring of powder laying defects is completed, which greatly saves material, time, and personnel costs while ensuring the safe operation of the LPBF equipment.

Program Design of the Online Visual Monitoring Device for the LPBF Powder Laying Process

The online visual monitoring device for the LPBF powder laying process is designed around the in-house JSJ100 equipment, which has a maximum molding size of 250 mm × 250 mm × 300 mm and a single molding time of between a few hours and dozens of hours; the JSJ100 process condition parameters are shown in Table 1. Preliminary experiments found that the LPBF powder laying process mainly exhibits six powder laying states: normal, a super-high fused cladding layer, a striped powder pile, a lumpy powder pile, squeegee stripes, and insufficient powder laying, as shown in Figure 2. The program has the following requirements: (1) it must accurately identify the above six powder laying defect states; (2) since the common size parameters of the above powder laying defects are more than 1 mm, the online monitoring system must resolve defect details of at least 1 mm; (3) since the equipment takes about 2-3 min to process a single layer of a medium-sized part, of which 15-20 s is single-layer powder laying time, and since the single-layer powder quality detection should take less than 5 percent of the single-layer molding time in order to preserve molding efficiency, the detection time must be kept below 6 s.

For the hardware imaging system, with the overall system parameters otherwise the same, coaxial installation of the camera gives the best imaging results; however, considering the difficulty of modifying the JSJ100 optical path and the small space in the JSJ100 molding chamber, an off-axis mounting scheme is used for image acquisition during the powder laying process. A schematic diagram of the off-axis camera mounting scheme is shown in Figure 3.

The software system utilizes the OPC UA protocol for communication. The controller of the JSJ100 device uses the AMCP control platform with an OPC UA server, which supports the latest OPC UA protocol; the nodes in the server can be accessed and operated through an OPC UA client using IDs and digital certificates. The main functions of the software design include: (1) a software communication function, mainly involving the development of the OPC UA client and the secondary development of the industrial camera, in which the OPC UA client is implemented in C# using OPC UA components, and communication between the monitoring software and the camera is achieved by embedding a camera control program, developed through the camera's secondary development kit, into the software, covering the camera's opening and closing, triggering, exposure time, image width and height parameters, camera mode, etc.,
so as to achieve software automation; (2) an automatic image acquisition and real-time correction function: after the powder spreading action completes, an identification variable is set and the printing process pauses; the online monitoring software monitors this identification variable through the OPC UA subscription function and triggers the off-axis camera to collect the powder spreading image, which is then corrected in real time; (3) a powder spreading defect detection and feedback control function, which detects and identifies defects within the identification area of the corrected powder spreading image and compiles statistics on the results; the statistical results are fed back to the controller of the JSJ100 equipment through the OPC UA protocol, and the equipment executes the relevant commands to continue printing, alarm, or pause according to the set control logic; (4) process data storage and database functions: the online monitoring system records and saves the name of the print job, the start time, the total number of layers, the image of each powder layer, and the identification results for the operator to query.
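The paper implements the OPC UA client in C#; purely as an illustration of the subscription pattern described in function (2), here is a hedged Python sketch using the open-source python-opcua package. The endpoint URL, node ID, and the capture_and_check_layer helper are placeholders of ours, not the JSJ100's actual configuration.

```python
from opcua import Client  # python-opcua package

def capture_and_check_layer():
    """Placeholder: grab a frame, correct it, run defect recognition,
    and write the verdict back to the controller."""
    pass

class LayerDoneHandler:
    """Called by the subscription whenever the monitored node changes."""
    def datachange_notification(self, node, val, data):
        if val:  # powder-spreading-complete flag raised by the controller
            capture_and_check_layer()

client = Client("opc.tcp://192.168.0.10:4840")   # placeholder endpoint
client.connect()
flag_node = client.get_node("ns=2;s=PowderLayerDone")  # placeholder node ID
sub = client.create_subscription(100, LayerDoneHandler())  # 100 ms period
sub.subscribe_data_change(flag_node)
```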
Construction of the LPBF Online Monitoring Device for the Powder Laying Process

Following the online visual monitoring program design described in Section 2, the system was built according to the camera resolution requirements of the monitoring system. We chose Daheng Group's industrial camera, model MER-630-16GM/C-P, whose maximum resolution is 3088 (H) × 2064 (V) at a frame rate of 16 fps. The lens selection is based on the chosen industrial camera, which has a 1/1.8-inch sensor with a target size of 7.18 mm × 5.32 mm. Due to the off-axis shooting arrangement, which is constrained by the size of the window glass, the actual distance between the camera and the surface of the metal substrate is about 420-470 mm; taking the object distance l = 450 mm, the formula for calculating the focal length of the objective lens gives a focal length of 9.567 mm. At the same time, taking into account the camera interface type, the final choice is an 8 mm focal length lens from the Japanese company Computar, model M0828-MPW2.
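The paper does not spell out the focal length formula; a plausible reconstruction (ours) uses the thin-lens magnification needed to image the 250 mm build area onto the 5.32 mm short side of the sensor at l = 450 mm, which reproduces a value close to the reported 9.567 mm:

```python
# Back-of-the-envelope objective focal length (our reconstruction; the
# paper's exact formula is not shown).
sensor_h = 5.32    # mm, short side of the 1/1.8" sensor target
field_h = 250.0    # mm, build-platform width to be imaged (assumption)
l = 450.0          # mm, object distance

m = sensor_h / field_h      # required magnification, about 0.0213
f = l * m / (1 + m)         # thin-lens relation: about 9.38 mm
print(round(f, 3))          # the simpler f = l * m gives about 9.58 mm
```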
Since the monitoring system camera is installed outside the molding chamber glass, reflections can easily affect image quality when acquiring the powder laying image; therefore, a polarizer is mounted on the lens, and the light source consists of two perpendicular ordinary white LED lamps. The off-axis monitoring system hardware platform is shown in Figure 4. The software system implements the human-computer interaction interface, shown in Figure 5, and realizes the communication between the camera and the AMCP control platform.

Tilted Image Correction

As the camera adopts the off-axis mounting scheme, the acquired powder spreading image suffers from lens aberration and perspective distortion, so camera calibration and perspective correction are necessary. Camera calibration uses the pinhole imaging principle to find the mathematical relationship between points in the world coordinate system and the pixel coordinate system and completes the data conversion between the two. In this paper, we use the Halcon calibration assistant and a 7 × 7 dot calibration board to solve for the parameters from 20 acquired calibration images; the calibration results are shown in Table 2, and the image distortion correction is completed according to the obtained camera parameters.

Perspective Correction

The perspective transformation correction of the original metal powder laying image maps the value of each pixel of the original image onto a new plane in turn, according to the principle equation

(x', y', w') = (u, v, w) H,

where (u, v) are the coordinates of each pixel point in the tilted image of the original metal powder laying image, the parameter w = 1, and H is the transformation matrix required for calibration, which in homogeneous coordinates takes the matrix form

H = [h_11 h_12 h_13; h_21 h_22 h_23; h_31 h_32 h_33].

In the transformation matrix H, (h_11, h_12, h_21, h_22) denote linear transformations, (h_31, h_32) denote translation transformations, and (h_13, h_23) denote perspective transformations. We obtain the H matrix using the hom_vector_to_proj_hom_mat2d(::Px, Py, Pw, Qx, Qy, Qw, method : H) operator.
The key parameters (Px, Py) are the coordinates of each point in the image before correction, and (Qx, Qy) are the coordinates of the corresponding points after correction. Setting the coordinates of a corrected point to (x, y) and normalizing by making h33 = 1 gives:

$$x = \frac{x'}{w'} = \frac{h_{11}u + h_{21}v + h_{31}}{h_{13}u + h_{23}v + 1} \quad (3)$$

$$y = \frac{y'}{w'} = \frac{h_{12}u + h_{22}v + h_{32}}{h_{13}u + h_{23}v + 1} \quad (4)$$

According to Formulas (3) and (4), there are a total of eight unknown parameters, and each pair of corresponding points yields two equations, one each for the x and y coordinates, so four point pairs suffice to solve for the H matrix. To obtain the perspective transformation matrix, this paper uses the four mounting holes in the manufactured substrate as the reference points whose center coordinates are solved. As shown in Figure 6, the process of extracting the center coordinates of the substrate mounting holes is as follows: (a) draw an ROI region containing the image of a substrate mounting hole; (b) process the drawn ROI image with an adaptive segmentation algorithm to obtain the mounting hole region; (c) extract the contour of the mounting hole region; and (d) fit a circle to the contour and take the circle's center as the center coordinate of the mounting hole position. The perspective transformation matrix is obtained by substituting the coordinates of the four points extracted by the above method into (3) and (4). Meanwhile, since the bilinear interpolation algorithm is characterized by high quality and strong continuity of pixel values, bilinear interpolation is used to fill in the points missing from the image after the perspective transformation. To facilitate further processing of defects in the powder laying process, the region of interest, the image of the manufacturing region of the part, is obtained through image cropping. The original image and the final image of the manufacturing region are shown in Figure 7.
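To make the correction pipeline concrete, the following Python/OpenCV sketch estimates the same eight-parameter homography from four point correspondences and warps the image with bilinear interpolation. It is an illustrative stand-in for the Halcon operators used in the paper; the hole coordinates are hypothetical and the image is a synthetic placeholder:

```python
# Illustrative sketch (not the authors' Halcon pipeline): estimate the
# perspective matrix from the four substrate mounting-hole centers and
# warp the tilted powder image with bilinear interpolation.
import cv2
import numpy as np

img = np.zeros((2064, 3088, 3), np.uint8)  # stand-in for the tilted camera frame

# Four hole centers in the tilted image (Px, Py) and their desired
# positions in the corrected, fronto-parallel image (Qx, Qy); values are
# hypothetical placeholders.
src = np.float32([[412, 310], [2671, 295], [2702, 1843], [388, 1861]])
dst = np.float32([[0, 0], [2250, 0], [2250, 1550], [0, 1550]])

# OpenCV solves the same 8-parameter homography (it uses the column-vector
# convention dst ~ H @ src, the transpose of a row-vector formulation).
H = cv2.getPerspectiveTransform(src, dst)

# INTER_LINEAR performs bilinear interpolation on the resampled pixels,
# mirroring the paper's choice for supplementing missing points.
corrected = cv2.warpPerspective(img, H, (2250, 1550), flags=cv2.INTER_LINEAR)
cv2.imwrite("powder_layer_corrected.png", corrected)
```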
Image Processing and Data Enhancement

Due to the complex lighting environment in the LPBF manufacturing silo and the long duration of manufacturing experiments, and in order to effectively enhance the diversity of the powder laying data, powder laying images under different lighting environments and luminance levels were collected by deliberately interfering with the LPBF manufacturing process. Cumulatively, 1794 powder laying images were collected through these manufacturing experiments, including 221 effective images with common defects; examples of the different powder laying defect images are shown in Figure 8.
At the same time, considering the size characteristics of the different paving defects, the recognition characteristics of classification networks, and the small-scale region division recognition strategy adopted in this paper, the acquired paving images were cropped. The processed powder laying defect images are 50 pixel × 50 pixel, and the cropped defect images were annotated to produce a small-scale powder laying defect dataset; the annotation principle is shown in Figure 9. By cropping the 221 valid images collected from several manufacturing experiments, a total of more than 170,000 images of 50 pixel × 50 pixel were obtained, but normal images predominated among them. To enhance the diversity of the dataset and avoid class imbalance, data augmentation was used to expand the data. Considering that two of the defect types, the strip powder stack and the scraper stripe, have a fixed texture direction, three augmentation methods, contrast enhancement, rotation, and image mirroring, were applied to expand the dataset. After the cropping process and data expansion, the collected image data meets the training requirements of deep learning: the dataset includes 6 categories, with 2050 images after cropping and 5150 images added by data expansion, totaling 7200 images. For the normal powder layer, ultra-high sintered layer, strip powder stack, insufficient powder laying, and scraper stripe states, there are 200 pictures of each type of metal powder laying state.
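The three augmentation operations can be sketched as follows in Python with Pillow; this is illustrative rather than the paper's Halcon implementation, and the input is a synthetic stand-in. A 180-degree rotation is used here on the assumption that it preserves the fixed texture direction of the stripe-like defects:

```python
# Minimal augmentation sketch: contrast enhancement, rotation, and
# mirroring applied to a 50x50 powder-layer crop.
from PIL import Image, ImageEnhance

crop = Image.new("L", (50, 50), color=128)  # stand-in for a real defect crop

augmented = [
    ImageEnhance.Contrast(crop).enhance(1.5),       # contrast enhancement
    crop.rotate(180),                               # rotation; 180 deg keeps
                                                    # stripe texture direction
    crop.transpose(Image.Transpose.FLIP_LEFT_RIGHT) # horizontal mirroring
]
for i, im in enumerate(augmented):
    im.save(f"defect_crop_aug_{i}.png")
```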
Identification of Powder Laying Defects by Small-Scale Area Division

In order to verify the effect of models of different classes on the extraction and recognition of features of the various types of metal powder laying images, three models, AlexNet, ResNet50, and SqueezeNet, were trained and analyzed using transfer learning. The environment used in this experiment was a 64-bit Windows 10 system with an Intel(R) Core(TM) i5-10400F CPU @ 2.90 GHz processor and an NVIDIA GeForce RTX 3060 (12 GB) graphics card, loaded with the Halcon Image Processing Algorithm Library and the Halcon 21.11 Deep Learning Framework, which provides thousands of image processing operators and commonly used pre-trained neural network models for image processing, training, and recognition, as well as support for building customized models for image classification, target detection, and image segmentation.
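A transfer learning setup equivalent in spirit to the Halcon workflow can be sketched in PyTorch; the six-class head replacement below is an assumption about how a pretrained SqueezeNet would be adapted, not the paper's exact configuration:

```python
# Transfer-learning sketch in PyTorch (the paper itself uses Halcon's
# pretrained models): adapt an ImageNet-pretrained SqueezeNet to the six
# powder-layer classes by replacing its classification head.
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 6  # normal + five defect categories

model = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.DEFAULT)
# SqueezeNet classifies with a final 1x1 convolution; swap it for one
# with six output channels and fine-tune on the defect dataset.
model.classifier[1] = nn.Conv2d(512, NUM_CLASSES, kernel_size=1)
model.num_classes = NUM_CLASSES
```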
Model Training

This experiment contains a total of 6 categories of data covering normal and the different defective images, with a total of 7200 known training images, which were randomly divided into training, validation, and test sets in the ratio of 6:2:2; 4320 images were used for model training, and 1440 each were used for validation and evaluation of the models. The CNN models were trained using the momentum-based SGD optimization algorithm with hyperparameters set as follows: the momentum was 0.9, the learning rate was 0.001, the number of epochs was 50, the training batch size was 32, and the random number seed was fixed. The training results are shown in Figure 10. The training loss curves show that the loss values of the three models stabilize after 30 rounds of training, with the SqueezeNet model converging faster and the ResNet50 model more slowly; the accuracy curves show that the AlexNet model had lower accuracy while the ResNet50 and SqueezeNet models had higher accuracies. The final sizes of the AlexNet, ResNet50, and SqueezeNet models were 837 MB, 180 MB, and 5.63 MB, respectively.

Evaluation of the Model

In order to judge the accuracy of the trained models, they need to be evaluated. According to the actual needs of the classification scenario in this paper, accuracy, precision, recall, and F1 score were used as the evaluation indexes of each model's effectiveness in recognizing powder laying defects. The 20% of the data reserved as the test set was used to evaluate the models, and the results are shown in Figure 11a. The evaluation results showed that all three models achieved more than 80% recognition accuracy on the test data set, with performance similar to their accuracy on the validation set. The AlexNet model performed poorly, while the ResNet50 and SqueezeNet models performed better and were close to each other. The recognition results for each category of metal powder laying defect image under the AlexNet, ResNet50, and SqueezeNet models were also exported to analyze the recognition ability of the different models for each category of defect images. As shown in Figure 11b, all three models achieved more than a 70% recognition rate on each category of the defect data, and the analysis shows that the ResNet50 and SqueezeNet models had better recognition results on the validation set, with a smaller overall difference in recognition rate.
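The four evaluation indexes can be computed as in the following sketch (assuming scikit-learn; the label arrays are dummy stand-ins, not the paper's data):

```python
# Metric sketch: accuracy, precision, recall, and F1, the four evaluation
# indexes used for the defect classifiers.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = [0, 1, 2, 3, 4, 5, 0, 1]  # ground-truth class indices (dummy)
y_pred = [0, 1, 2, 3, 4, 0, 0, 1]  # model predictions (dummy)

acc = accuracy_score(y_true, y_pred)
prec, rec, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0
)
print(f"accuracy={acc:.3f} precision={prec:.3f} recall={rec:.3f} f1={f1:.3f}")
```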
Heat Map Visualization and Analysis

In order to more intuitively understand and analyze how the above three models identify the various types of powder laying defects, this paper uses the Gradient-weighted Class Activation Mapping (Grad-CAM) technique to obtain, through gradient-based localization, a visual heat map interpretation of some of the images from the deep networks, in order to analyze the ability of the three models to distinguish the image features of different powder laying defects, as well as the image regions they mainly attend to when performing inference. Figure 12 shows the heat map visualization results for different powder laying images under the AlexNet, ResNet50, and SqueezeNet models. The red and yellow regions indicate the regions of interest used for inference on the different categories of powder images under the different models. It can be observed that, among the three trained models, the SqueezeNet model focused on the region of interest more accurately when performing image inference, which is beneficial for accurately inferring the powder laying defects.
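A minimal Grad-CAM sketch in PyTorch is shown below; it is illustrative of the gradient-based localization described above rather than the paper's Halcon implementation, and the input tensor is a stand-in for a powder layer image:

```python
# Grad-CAM sketch: hooks capture the activations and gradients of the last
# convolutional block, which are combined into a class-activation map.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.squeezenet1_1(weights=None, num_classes=6).eval()
target_layer = model.features[-1]  # last Fire module

acts, grads = {}, {}
target_layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

x = torch.randn(1, 3, 224, 224)   # stand-in for a powder-layer image
score = model(x)[0].max()         # score of the predicted class
score.backward()

weights = grads["v"].mean(dim=(2, 3), keepdim=True)  # global-average gradients
cam = F.relu((weights * acts["v"]).sum(dim=1))       # weighted activation map
cam = cam / (cam.max() + 1e-8)                       # normalize to [0, 1]
```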
SqueezeNet Model-Based Multiscale Improved Method for Identifying Powder Laying Defects

The comparative experimental analysis of the three models in Section 4.2 shows that the SqueezeNet model had the best overall performance. In order to further improve the accuracy and recognition efficiency of paving defect detection in a comprehensive way, this section focuses on improving and optimizing the small-scale paving defect recognition method based on the SqueezeNet model. A multiscale method is proposed to address the deficiencies in the recognition of defects in the small-scale pavement images, a multiscale pavement dataset is constructed for the method, and model training, evaluation, and feature map visualization are then carried out.

Multiscale Powder Laying Defect Identification Methods

The incorrectly identified powder laying images from Section 4.2 were analyzed, and it was found that the main cause of error was that part of the images contained only the edge information of the defects after the 50 pixel × 50 pixel division, which left the defect class represented in such an image unclear. Considering that the small-scale pavement defect dataset is consciously labeled according to the pavement state of the surrounding area during its production, introducing the influence of the surrounding pavement state into the training and recognition of the original 50 pixel × 50 pixel defect images can remedy this shortcoming of small-scale pavement defect recognition. In view of these analysis results and the CNN's requirements on the input data, a multiscale powder laying defect recognition method is proposed: the original 50 pixel × 50 pixel region and the 100 pixel × 100 pixel and 224 pixel × 224 pixel powder laying images centered on that region are combined into a three-channel image, keeping the original defect labels unchanged, to form the new model training set. This adds the ability to perceive the powdering state of the surrounding area to the model training and classification process, improving on recognition that uses only the small-scale powdering image; the principle is shown in Figure 13.
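The channel-merging step can be sketched as follows in Python with Pillow and NumPy; the block center coordinates and the 224-pixel common resize target are assumptions, since the text does not state the size to which the three crops are unified:

```python
# Sketch of the multiscale three-channel construction: the 50x50 crop and
# the 100x100 and 224x224 regions centered on it are resized to a common
# size and stacked as the three channels of one image, keeping the label.
import numpy as np
from PIL import Image

layer = Image.new("L", (2250, 1550), color=128)  # stand-in corrected layer
cx, cy = 375, 625                                # block center (hypothetical)

def centered_crop(img, cx, cy, size):
    half = size // 2
    return img.crop((cx - half, cy - half, cx + half, cy + half))

channels = []
for size in (50, 100, 224):
    crop = centered_crop(layer, cx, cy, size).resize((224, 224))
    channels.append(np.asarray(crop, dtype=np.uint8))

multiscale = np.stack(channels, axis=-1)  # H x W x 3 input for the CNN
Image.fromarray(multiscale).save("multiscale_block.png")
```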
Data Set Construction

The powder laying defect dataset was constructed according to the proposed multiscale powder laying defect identification method. The construction of the multiscale dataset was based on the whole-layer powder laying defect images of the LPBF molding process corrected in Section 4.1; the original dataset regions and their surrounding areas were cropped and channel-merged. The production principle is shown in Figure 14. Then the image processing methods of contrast enhancement, rotation, and mirroring, with the same parameters as discussed in Section 4.1, were used to augment the powder laying image data.
Model Training

The CNN model was trained using the momentum-based SGD optimization algorithm, with the hyperparameters set as follows: the momentum was 0.9, the learning rate was 0.001, the number of epochs was 50, the training batch size was 32, and the random number seed was fixed. The data of the training process is shown in Figure 15, which shows that the multiscale SqueezeNet model performs well on the multiscale powder image dataset: the loss function curve converges to about 0.5 after 30 rounds of iteration and remains stable in the subsequent iterations, and, similar to the loss curve, the accuracy curve of the model also stabilizes after 30 rounds of iteration.

Model Evaluation

The model needs to be evaluated at the end of training to determine its training quality. The model was evaluated using the reserved 20% test set, a total of 1440 images in six categories: normal, squeegee stripe, bar powder accumulation, block powder accumulation, underlayment, and ultra-high fused cladding layer. The results are shown in Figure 16. From the figure, it can be seen that the accuracy performance of the multiscale-improved SqueezeNet model was better, and all evaluation indexes were improved: its accuracy, precision, recall, and F1 score increased compared to before the improvement, as shown in Figure 16a, and the recognition rate of each powder laying defect category in the test set also increased, as shown in Figure 16b.
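The stated training configuration maps directly onto a momentum SGD setup; the PyTorch sketch below stubs the dataset with random tensors and assumes a seed value, since neither is given in the text:

```python
# Training-configuration sketch (the paper trains in Halcon): momentum-based
# SGD with the stated hyperparameters: momentum 0.9, learning rate 0.001,
# 50 epochs, batch size 32, fixed random seed.
import torch
import torch.nn as nn
from torchvision import models

torch.manual_seed(0)  # fixed random seed (value assumed)
model = models.squeezenet1_1(weights=None, num_classes=6)
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
criterion = nn.CrossEntropyLoss()

for epoch in range(50):
    inputs = torch.randn(32, 3, 224, 224)  # stand-in multiscale batch
    labels = torch.randint(0, 6, (32,))
    optimizer.zero_grad()
    loss = criterion(model(inputs), labels)
    loss.backward()
    optimizer.step()
```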
Feature Map Visualization

In order to better observe what the improved multiscale SqueezeNet model learns from the powder defect images, the feature channels of the first convolutional layer, "Conv1", and of the output layer of the fifth Fire module, "Fire5_concat", were selected and visualized during the recognition of the powder defect images [25]. The first convolutional layer has a total of 64 feature channels, some of which are visualized in Figure 17a. The fifth Fire module has a merged output of 256 feature channels, and again some of the channels were selected for visualization, as shown in Figure 17b. Different convolutional kernels characterized the input differently during forward propagation of the multiscale SqueezeNet model, and some deep convolutional kernels were useless for the normal powdered areas in the defective images; this is considered to be because normal powdered image areas cannot serve as effective feature extraction regions for distinguishing between the six classes of images. As the convolutional layers deepen, the features learned by the model become more abstract and visually uninterpretable, because deeper layers carry less visual information and more abstract information related to the image categories. At the same time, the sparsity of feature activation increases with the depth of the convolutional layer: for example, the visualization results of the feature images with red borders indicate that the corresponding channels are not activated, because the feature pattern encoded by those channels was not found in the input image, while the image features extracted from the feature channels with blue borders had little influence on the final decision for this pavement image. It is therefore concluded that some of the channels in the SqueezeNet model trained on the multiscale pavement image dataset are not activated when performing image inference, or extract features that have little influence on the final decision, so there is a certain amount of parameter redundancy, which can be removed by pruning techniques to obtain a compact, less complex, and more targeted multiscale pavement defect recognition model.
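Capturing intermediate feature maps of the kind visualized in Figure 17 can be done with forward hooks, as in this PyTorch sketch; the layer indices are chosen so that the channel counts (64 and 256) match the paper's Conv1 and Fire5_concat layers, but the torchvision layout is only an approximate analogue of the Halcon model:

```python
# Feature-map extraction sketch: a forward hook captures the channel
# activations of an intermediate layer for rendering as images.
import torch
from torchvision import models

model = models.squeezenet1_1(weights=None, num_classes=6).eval()

feature_maps = {}
def grab(name):
    def hook(module, inputs, output):
        feature_maps[name] = output.detach()
    return hook

model.features[0].register_forward_hook(grab("conv1"))  # first conv, 64 channels
model.features[7].register_forward_hook(grab("fire5"))  # Fire block, 256 channels

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))  # stand-in powder image

print(feature_maps["conv1"].shape, feature_maps["fire5"].shape)
# each channel feature_maps[name][0, c] can then be rendered as an image
```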
Channel Pruning Model Optimization Method

Through the visual analysis of the feature channels of the multiscale SqueezeNet model, it can be seen that some of the channels of the model trained on the multiscale pavement image dataset are not activated when performing image inference, or extract features that have little influence on the final decision, leaving a certain amount of parameter redundancy. A channel pruning model optimization method is therefore proposed to remove them, reducing the redundant parameters of the model and increasing its speed under the premise of guaranteeing its accuracy. In order to prevent excessive pruning from degrading model performance, this paper adopts an iterative pruning strategy, i.e., removing a proportion of the convolution kernels each time and reaching the pruning goal through multiple iterations, while recording the relationship between the pruning rate and the various measures of model performance.

Analysis of Pruning Results

(1) Changing patterns of model accuracy and storage space at different levels of pruning.

The variation of model accuracy and storage space with different degrees of pruning is shown in Figure 18, from which it can be seen that as the pruning percentage increases, both model accuracy and model size show an overall decreasing trend. The model size decreases steadily, in an approximately linear relationship. The model accuracy changes little when the pruning percentage is 40% or less, with an accuracy loss within 1%, and shows an accelerating decline when the pruning percentage exceeds 40%. From this analysis it can be concluded that, at pruning percentages of 40% or less, pruning the multiscale SqueezeNet model based on the Oracle pruning criterion reduces channel redundancy while preserving model accuracy, so the multiscale SqueezeNet model with a pruning percentage of 40% is taken as the optimal model for monitoring metal powder laying defects in this paper, MC-SqueezeNet.
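An iterative channel-pruning loop can be sketched as follows in PyTorch. Note that the paper ranks channels with the Oracle criterion, whereas this sketch substitutes the simpler L1-norm criterion, and that torch pruning zeroes kernels rather than physically removing them, so the speedups reported in the paper additionally require rebuilding the slimmed layers:

```python
# Iterative channel-pruning sketch with an L1-norm criterion (a stand-in
# for the paper's Oracle criterion).
import torch.nn as nn
import torch.nn.utils.prune as prune
from torchvision import models

model = models.squeezenet1_1(weights=None, num_classes=6)

def prune_to(model, amount):
    """Zero the `amount` fraction of lowest-L1-norm output channels in each
    feature-extractor conv layer (the 6-channel classifier conv is skipped)."""
    for module in model.features.modules():
        if isinstance(module, nn.Conv2d):
            prune.ln_structured(module, name="weight", amount=amount, n=1, dim=0)
            prune.remove(module, "weight")  # bake the zeros into the weights

# Iterative schedule: already-zeroed channels have the lowest norm, so each
# pass with a larger cumulative target grows the pruned set by about 10%.
for amount in (0.1, 0.2, 0.3, 0.4):
    prune_to(model, amount)
    # ...re-evaluate validation accuracy here and stop if the loss exceeds
    # the allowed budget (about 1% in the paper).
```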
(2) Changes in inference speed before and after model pruning.

Inference experiments on the model after each stage of pruning show that channel pruning can improve the inference speed of a traditional convolutional neural network to a certain extent. Figure 19 shows the relationship between the inference speed of the multiscale SqueezeNet model and the pruning percentage, where the inference time is the average, over five runs, of the time required for the model to perform inference on 5000 multiscale images of powder laying defects. From the figure, it can be seen that the initial network took 12.27 s to predict the 5000 multiscale paving powder defect images; as the total pruning percentage of the model increases, the inference time of the recognition model shows a generally decreasing trend, with a significant decrease once the pruning percentage exceeds 10%. The image inference time of the MC-SqueezeNet model was 11.31 s, a 7.8% reduction compared to before pruning.
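The timing protocol (five runs over 5000 multiscale images, averaged) can be reproduced in outline as follows; the model and images are stand-ins, so the absolute numbers will not match the paper's:

```python
# Inference-timing sketch: average wall-clock time over five runs of
# roughly 5000 image classifications, following the paper's protocol.
import time
import torch
from torchvision import models

model = models.squeezenet1_1(weights=None, num_classes=6).eval()
batch = torch.randn(32, 3, 224, 224)  # one stand-in batch, reused
n_batches = 5000 // 32 + 1            # ~5000 images per run

runs = []
with torch.no_grad():
    for _ in range(5):                # five timed runs, then averaged
        start = time.perf_counter()
        for _ in range(n_batches):
            model(batch)
        runs.append(time.perf_counter() - start)

print(f"mean inference time for ~5000 images: {sum(runs) / len(runs):.2f} s")
```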
Manufacturing Experiment and Analysis

For these experiments, Renishaw 316L stainless steel powder was used as the raw material, with a particle size between 15-45 μm. Before conducting the experiments, impurities were first screened out using a sieving machine and the powder was dried at 200 °C for 2 h. At the same time, during the molding experiments the molding bin provided an argon gas environment to prevent high-temperature oxidation during the part manufacturing process.

The LPBF metal powder laying quality online monitoring device of this paper was verified through manufacturing experiments on a specific part, for which a total of two printing experiments were carried out. To observe the experimental effect, the monitoring system's feedback control function was set to pop-up window prompts during the experiments, with manual judgment of whether to implement feedback control. The detection threshold for each type of defect was set to 3; that is, a defect type is recorded as a valid defect category in the detection results only when 3 or more regions of that type are identified.

(1) The first experiment.

In order to obtain powder laying defects and to verify that the defect detection and feedback functions of this paper's online monitoring system meet the set requirements, we conducted the first manufacturing experiment, with the results shown in Figure 21a. It can be clearly seen that the part exhibited a serious ultra-high phenomenon; at the 59th powder laying layer the ultra-high region interfered with the squeegee, and the experiment was therefore stopped. The powder image of the 58th layer is shown in Figure 21b. The ultra-high area of the manufactured part damaged the adhesive strip of the scraper, so that during powder spreading, scraper stripe defects were produced along the direction of the scraper's movement, while in the seriously ultra-high areas collision jitter of the scraper body produced collision stripe defects perpendicular to its direction of movement.
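The per-layer thresholding rule described above can be expressed compactly; the class names in this sketch are hypothetical labels, not identifiers from the paper's software:

```python
# Sketch of the per-layer feedback rule: a defect category counts as valid
# for a layer only if it is detected in 3 or more of the layer's 50x50
# blocks, which suppresses isolated misclassifications.
from collections import Counter

DEFECT_THRESHOLD = 3

def layer_feedback(block_predictions):
    """block_predictions: list of predicted class names, one per block."""
    counts = Counter(p for p in block_predictions if p != "normal")
    return {defect: n for defect, n in counts.items() if n >= DEFECT_THRESHOLD}

preds = ["normal"] * 500 + ["scraper_stripe"] * 4 + ["strip_powder_stack"]
print(layer_feedback(preds))  # {'scraper_stripe': 4} -> trigger prompt
```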
(2) The second experiment.

In order to verify the stability of the online monitoring system over a long period of time, the original 316L process parameter package was invoked during the second molding experiment to complete the part fabrication, with a total duration of more than 20 h. Insufficient powder during the manufacturing process led to insufficient powder spreading defects in the right-side region from the 648th to the 657th layer. The part molding results are shown in Figure 22a, where a gap in the red box area can be clearly seen, corresponding to the powder image state of the 654th layer shown in Figure 22b. Figure 22c shows the recognition effect for the 654th layer powder state, and Figure 22d shows the detection results for that layer.

(3) Online detection accuracy and detection time.
In order to verify whether the detection accuracy and detection speed of the online monitoring device for single-layer powder laying quality meet the requirements of the system, the feedback results of the identification of each layer of powder laying and the time consumed in defect detection during the experiments were statistically analyzed. Table 4 compares the feedback results for the powder laying quality of each layer in the above experiments against the real powder laying situation. As can be seen from the table, across the two experiments, for a total of 1460 layers of powder laying quality detection, the system's feedback accuracy was 98.63%. Analysis of the wrongly identified layers showed that the errors arose mainly because various types of defects were not obvious at their initial stage, so the number of detections of certain defect types did not exceed the set threshold value of 3 and they were not recorded as valid defects. Table 5 shows the time consumed in three different stages of the online inspection of the powder laying quality over 50 layers. The average times consumed in the three stages of image acquisition, tilt correction and storage, and partitioning and identification were 0.795 s, 0.159 s, and 2.562 s, respectively, totaling 3.516 s, meeting the detection time requirements set in Section 2. From the above two manufacturing experiments and the analysis of the recognition results, it can be seen that setting thresholds for the various defect types effectively avoids the influence of misdetections in part of the powder laying area on the recognition result for the whole layer. With the threshold for all defect types set to 3, the feedback accuracy over the cumulative 1460 layers of paving powder in the two molding experiments was 98.63%, and the average time consumed in the online detection of the paving quality of each layer (measured over 50 layers) was 3.516 s. At the same time, the monitoring system showed no abnormality while working continuously for more than 20 h, recognizing 1402 layers of paving powder images and performing more than 1,090,000 multiscale paving powder block recognitions, demonstrating good recognition stability.

Conclusions

Through the study of the metal powder laying state in the LPBF molding process, an online monitoring device for the powder laying process was built, an algorithm for identifying powder laying defects was developed, and experimental validation was finally carried out, with good results. The results show that this online powder laying monitoring system is suitable for both simple and complex parts. The effective identification of powder laying defects by this monitoring device is significant in two respects: on the one hand, it reduces manufacturing defects in the parts and improves their manufacturing quality and mechanical properties; on the other hand, it avoids damage to the scraper and ensures the safe operation of the LPBF equipment, greatly saving time and labor costs. The specific conclusions of this paper are as follows.
We propose a small-scale regional division recognition method for powder laying defects, with a division image size of 50 pixel × 50 pixel; a small-scale powder laying defect dataset was constructed for the method, and experiments and analyses of three models of different complexity, namely AlexNet, ResNet50, and SqueezeNet, were completed. The results show that the method can be used for the detection of common powder laying defects, with the SqueezeNet model giving the best performance.

Aiming at the shortcomings of the small-scale powder laying defect detection method, a multiscale improvement method based on the SqueezeNet model is proposed. The original small-scale region and the 100 pixel × 100 pixel and 224 pixel × 224 pixel powder laying images centered on that region were combined into a three-channel image, which was used as the multiscale dataset for model training in order to increase the model's ability to perceive the powder laying state around the original small-scale region. The results show that the method improved the recognition accuracy for three types of defects, namely, lumpy powder stacks, insufficient powder laying, and ultra-high fusion cladding layers.

For the parameter redundancy problem of the multiscale SqueezeNet model, an iterative pruning method is proposed to prune the model channels under the premise of guaranteeing model accuracy, with good results.

The deployment of the MC-SqueezeNet model and the development of the online monitoring device system software were completed using OPC UA development components and the .NET Framework platform, and experimental verification was conducted. The results show that the minimum defect size the system can recognize is 0.54 mm, the accuracy of the feedback results is 98.63%, the per-layer recognition time is 3.516 s, and the system worked online for more than 20 h; all indexes meet the design requirements.

Figure 3. The off-axis camera mounting scheme.
Figure 4. The online monitoring device for the powder spreading process.
Figure 5. The human-computer interaction interface: (a) connection test interface; (b) monitoring system interface.
Figure 6. The process for extracting the coordinates of the center point of the mounting holes for manufacturing substrates.
Figure 8. The partial metal powder spreading image.
Figure 9. The principle of making image data of small-scale powder laying defects.
Figure 11. (a) Evaluation results of various models; (b) recognition accuracy of different models for each defect category in the test set.
Figure 12. The heat map visualization of different models.
Figure 13. The principle of recognizing multiscale powder laying defects.
Figure 14. Principle of creating a multiscale image dataset of powder laying defects.
Figure 17. (a) Partial feature map visualization of the Conv1 layer; (b) feature map visualization of the output layer of the fifth Fire module.
Figure 18. Relationship between the model accuracy and model storage space and percentage of pruning.
Figure 21. (a) Part fabrication results; (b) the 58th layer of metal powder laying image; (c) the 58th layer of laying image defect recognition effect; (d) the 58th layer of laying image inspection results.
Figure 22. (a) Part fabrication results; (b) the 654th layer of metal powder laying image; (c) the 654th layer of laying image defect recognition effect; (d) the 654th layer of laying image inspection results.
Table 1. The JSJ100 equipment process condition parameters.
Table 4. Online inspection results of metal powder laying quality.
Table 5. Time-consuming data for different detection stages.
18,282.6
2024-01-01T00:00:00.000
[ "Engineering", "Materials Science", "Computer Science" ]
Precision annotation of digital samples in NCBI's gene expression omnibus The Gene Expression Omnibus (GEO) contains more than two million digital samples from functional genomics experiments amassed over almost two decades. However, individual sample meta-data remains poorly described by unstructured free-text attributes, preventing its large-scale reanalysis. We introduce the Search Tag Analyze Resource for GEO as a web application (http://STARGEO.org) to curate better annotations of sample phenotypes uniformly across different studies, and to use these sample annotations to define robust genomic signatures of disease pathology by meta-analysis. In this paper, we target a small group of biomedical graduate students to show rapid crowd-curation of precise sample annotations across all phenotypes, and we demonstrate the biological validity of these crowd-curated annotations for breast cancer. STARGEO.org makes GEO data findable, accessible, interoperable and reusable (i.e., FAIR) to ultimately facilitate knowledge discovery. Our work demonstrates the utility of crowd-curation and interpretation of open 'big data' under FAIR principles as a first step towards realizing an ideal paradigm of precision medicine.
Introduction
The paradigm of precision medicine 1-6 is based largely on first understanding the genomic features of disease and then designing biomarkers and drugs that identify and rescue these genomic defects, respectively. Thus far, precision medicine has gained the most traction in cancer 7 where, for both non-small cell lung cancer and breast cancer, for instance, the standard of care now includes sequencing genes such as EGFR or quantitating panels of RNA such as those included in Oncotype DX, respectively, to drive therapeutic decisions for new subtypes of patients 7 . Moreover, clinical trials are ongoing to develop a precision medicine approach to other diseases, such as those that affect the cardiovascular [8][9][10][11] and neuropsychiatric 12,13 systems, among others. In fact, the National Research Council recently affirmed that realizing the practice of precision medicine requires building a molecular taxonomy, or nosology, from functional gene targets defined across many different diseases 14 . However, the dearth of machine-readable public genomics data appropriately curated over a great number of diseases has largely precluded such efforts. Meanwhile, the National Center for Biotechnology Information (NCBI) Gene Expression Omnibus (GEO) is perhaps the largest of a number of open functional genomics data repositories [15][16][17], and GEO data is rich and complete over a great many diseases and phenotypes. There are currently gene expression measurements openly available on over 2 million samples drawn from experiments amassed since the year 2000 [18][19][20]. Funding agencies, such as the National Institutes of Health (NIH), mandate the public sharing of data from functional genomics experiments, and GEO doubles in size every two years on average. To date, over 21,000 PubMed publications have been derived from over 1,000,000 digital samples (see http://STARGEO.org/stats) measured by microarrays, the single largest type of genomic data within GEO 19 .
While this data can openly be used to define a precision medicine ideal, GEO itself and almost all other attempts at crowd-curation of sample-level annotations have largely failed to embrace the guiding principles that make generated data more findable, accessible, interoperable and reusable (i.e., FAIR principles) 21 and that ultimately enhance the ability of machines and individuals to leverage GEO data for downstream scientific inquiry. For instance, while GEO itself elicits a basic level of sample curation for contrasts of phenotypes in GEO DataSets, this curation is largely used to visualize gene expression in a given study (or Series), and most importantly, DataSet annotations disregard FAIR principles and are not standardized across studies. Similarly, while the Gene Expression Atlas by ArrayExpress 17 employs a combination of small-scale manual curation by biomedical experts with sophisticated text-mining tools to annotate samples with a structured bio-ontology across studies 22 , their approach is neither open nor embraces FAIR principles and thus cannot scale to GEO's millions of individual samples. As of October 2015, ArrayExpress had annotated 2,330 datasets studying samples in 6,345 differential comparisons across 25 different organisms (http://www.ebi.ac.uk/gxa/release-notes.html). Moreover, the few other crowd-curation attempts 23,24 , including some with an interactive meta-analysis portal (http://metasignature.stanford.edu/), have either failed in their scale to annotate GEO and/or failed to embrace FAIR principles that encourage sustained and ever more useful crowd-curation. With this immediate and increasing need to better mine large open data stores to foster new knowledge discovery, the NIH launched the Big Data to Knowledge (BD2K) initiative to maximize the use of biomedical big data for individual investigators and the overall research community. Towards this end, we introduce the Search Tag Analyze Resource for GEO (STARGEO.org) as an NIH/BD2K-funded online platform to share open crowd-curation of digital samples. Currently, no large-scale central repository of annotations exists that the biomedical community can leverage to characterize the molecular genomic pathology of disease. STARGEO.org fills that gap by providing a convenient web-based annotation interface to facilitate precise curation of digital samples, as well as an analysis portal to easily generate robust genomic signatures from meta-analysis of the genomics data and crowd-curated annotations. Towards that end, we recruited a small group of up to 10 biomedical students to develop STARGEO.org into a structured functional genomics database of digital samples curated for relevant biological features. Specifically, we targeted graduate biomedical students as a scientific crowd enriched with disease knowledge whose members are most incentivized to learn about disease genomics in preparation for their careers. Indeed, studies have shown that levels of intrinsic motivation far outweigh extrinsic motivation in inducing crowd participation and in maintaining precision of task performance 25 . Therefore, we hypothesize that leveraging a small crowd of biomedical graduate students for the curation of biological features with STARGEO.org will result in a precisely annotated dataset of samples that may be used for large-scale translational discovery.
In this work, we demonstrate rapid crowd-curation of sample annotations across all phenotypes, we report a high precision of annotation among curators and characterize common annotation mistakes, and we demonstrate high and significant biological validity of crowd-curated annotations on open data for characterizing the genomic pathology of breast cancer.
STARGEO.org genomic discovery process
The general STARGEO.org workflow for using GEO data to define genomic signatures is shown in Fig. 1. In GEO, a Series provides a focal point and description of the whole study by linking together a group of related Samples. STARGEO.org continually downloads raw data from GEO for all the unstructured free-text Sample and Series attributes (defined in the original data deposition by the study authors) for genome-wide human expression profiling by microarray experiments. We deposit the free-text attributes into a database that is indexed to facilitate full-text searches of both Sample and Series attributes. This search functionality is built into STARGEO.org, allowing curators to efficiently find specific samples of interest described by specific keywords and modifiers, thereby immediately facilitating Findability and Accessibility of raw GEO sample attributes under FAIR principles. Furthermore, we keep the STARGEO.org data in sync with GEO data, which is continually being updated. Once appropriate studies to curate are found, STARGEO.org curators make annotations on Tags that represent knowledge about digital samples. Specifically, we define Tags and annotations as key:value bindings where Tags are the keys that hold annotation values. We allow users to map Tags to formal ontologies sourced from the National Center for Biomedical Ontology's BioPortal 26 to immediately make their crowd-curation data Interoperable and machine readable. Also, we provide a snapshot interface for users to quickly assemble and ultimately freeze snapshots of annotations and digitally publish their snapshots to Zenodo (https://zenodo.org) to promote Reusability. In addition, we automatically map all probe sets or Platforms deposited in GEO to the National Center for Biotechnology Information's Entrez gene IDs 27 to allow users to perform robust meta-analyses across Series to define differentially expressed genes. The results we describe here are based on raw GEO data downloaded for 465,770 digital Samples from 11,903 Series (experiments) across 1,682 different Platforms (chipsets) as of December 19, 2013, and we report on 490,110 total sample annotations made on 5,798 series across 278 independent Tags made on that data through December 31, 2015.
The STARGEO.org curation process
We implemented the STARGEO.org annotation process (Fig. 2) to allow for manual curation through a simple Tagging interface based on interactive regular expressions (RegExs).
Figure 2. STARGEO.org curation process. The figure shows a STARGEO.org screenshot annotating the experimental study (GSE10780) entitled 'Proliferative genes dominate malignancy-risk gene signature in histologically-normal breast tissue'. The Tag has been mapped a priori to the Disease Ontology and represents a generalized class of breast cancer (DOID:1612). To annotate samples matching the breast_cancer Tag, the curator selected the sample_characteristics column and applied the 'IDC' RegEx to the GEO sample descriptors. STARGEO.org automatically highlights matching samples in real time based on the curator's RegEx and annotates those samples with the selected Tag. This process is repeated across many different studies and different Tags to explicitly capture all relevant information that is subsequently used to perform meta-analyses.
Tags define a standardized nomenclature across experiments to represent biological phenotypes such as age, gender, survival, or the case or control status of a disease. Specifically, we define Tags as curator-assigned key:value bindings for digital samples where the names of Tags are reusable keys that are bound to sample annotation values (for example Age:50, Gender:Female, Cancer:True, etc.). When data is deposited in GEO, the submitter uses specific words or phrases in the raw data attributes to describe contrasts in sample phenotypes. RegExs have long been a standardized syntax in computer science to efficiently match and extract text 28 , and they allow curators to select subsets of Samples in a given Series for mass annotation. With STARGEO.org, curators design RegExs at the Series level to match and thus discriminate linked Samples in order to assign appropriate Tags. Therefore, a Series with thousands of Samples is Tagged with the same effort as a Series with only ten Samples once an appropriate RegEx is used to discriminate Sample-level annotations. The web application features real-time highlighting of the annotations being applied to samples, making clear the result of any RegEx being applied to Tag samples (Fig. 2). Most of our curators have designed RegExs to match Tags that hold Boolean annotations (such as case/control status), although more RegEx-savvy users can use parentheses to directly 'capture' matching categorical annotations (such as cancer subtype) or quantitative annotations (such as age). In our analysis of the precision of making RegExs below, we find capturing RegEx annotations to be more error-prone than simply matching (Boolean) RegEx annotations, and we suggest that users explicitly enumerate Boolean matches for any given set of categorical annotations and only capture quantitative phenotypes; a minimal sketch of both RegEx styles follows at the end of this section.
Crowd-curation of STARGEO.org annotations
To instantiate the database with high-quality annotations, we recruited ten biomedical graduate students from across the country to curate samples for disease and other biological phenotypes, and we designed a reimbursement scheme to reward their precision in making annotations. We used word of mouth and social media to reach out to potential curators. Our sole criterion was that curators had at least some graduate-level training in the biomedical sciences. We used Twitter to strategically recruit curators who would be interested in learning about disease and defining genomic disease signatures. Specifically, we sent direct messages to Twitter users with keywords like 'biomedicine', 'translational medicine' or 'research' in their profile descriptions, as well as keywords like 'student' and 'MD' and/or 'PhD' to capture their educational exposure. In all, we recruited three biomedical graduate students from the local Bay Area (Stanford and UCSF), and we recruited an additional six biomedical graduate students across the United States with our Twitter outreach. With this small crowd of curators, from 12/1/2014 through 12/31/2015 we made 490,110 total sample annotations using 278 Tags across 149,380 distinct Samples drawn from 11,903 distinct Series. This represents 32% of the 465,770 digital Samples we downloaded from GEO annotated with at least one Tag, or 14% of the 1,639 series we downloaded from GEO (Fig. 3).
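As promised above, here is a minimal Python sketch of the two RegEx styles over hypothetical GEO-style sample descriptors (the strings, Tag names and values are invented for illustration): a plain RegEx match yields a Boolean annotation, while parentheses capture a quantitative value such as age.

```python
import re

# Hypothetical GEO-style sample descriptors (not real GSE10780 rows).
samples = [
    "tissue: IDC breast tumor; age: 54",
    "tissue: histologically normal breast; age: 47",
]

# Matching (Boolean) RegEx: a sample gets breast_cancer=True iff it matches.
case_regex = re.compile(r"IDC")
breast_cancer = {s: bool(case_regex.search(s)) for s in samples}

# Capturing RegEx: parentheses extract a quantitative value (here, age).
age_regex = re.compile(r"age:\s*(\d+)")
ages = {s: int(m.group(1)) for s in samples if (m := age_regex.search(s))}

print(breast_cancer)
print(ages)
```

The annotation failure modes reported below (capturing the wrong field, or matching an unintended descriptor) correspond to mistakes in exactly these two patterns.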
We found that our Series-level approach to annotating Samples scaled very quickly; in about six weeks we were able to amass over 360,000 individual sample annotations among ten biomedical graduate students. To achieve this rate of coverage, we reimbursed curators to exhaust a total budget of $10,000 during the initial six-week period. We found that this initial reimbursement drove the initial rate of coverage and validation, and once we exhausted our budget, the rate of validation plateaued. Nonetheless, without any reimbursement, some students continued to annotate new samples to define differentially expressed genes and learn about the molecular pathology of disease for their own purposes. Interestingly, the figure also shows a spike in annotations over the summer months, independent of any reimbursement, as the students had the time and interest to contribute. Strategically, curators were allowed the freedom to define new Tags in order to represent any phenotype of interest. We employed a text-based system to define new Tags to allow complete flexibility in describing any biological or experimental feature. In the initial six-week period of reimbursement, virtually all Tags represented disease states, except for demographics such as Age, Gender, etc. We manually controlled the vocabulary of Tags by collapsing obvious duplicates (such as 'Breast Cancer' and 'BRCA') where appropriate. For disease-related phenotypes, such as case or control status, we mapped curator-supplied Tags to the Disease Ontology 29,30 post hoc to further standardize the semantic consistency of Tags across studies and to facilitate cohort selection of contrasts for meta-analyses.
Precision of STARGEO.org annotations
To test the precision of the 490,110 sample annotations we acquired, we implemented a validation interface for blinded cross-annotation among the curators; i.e., different curators made independent annotations to check annotation concordance as a measure of precision for already Tagged Samples. We reimbursed pairs of curators 5 cents for every concordant sample annotation to drive precision, and curators were only reimbursed for 100% concordant Sample annotations per Series. To minimize the potential for abuse of our reimbursement scheme and to ensure the highest reliability of our measured cross-annotation precision, we sought to facilitate true independence of the cross-annotations among different curators. Specifically, we hid all GEO identifier fields to completely blind the cross-annotation interface such that curators could not easily duplicate concordant Sample annotations for a given Series. Similarly, we strategically hid RegExs submitted by users to discourage automated cross-annotation without manual review. The resulting cross-annotation results are summarized in Table 1. As multiple Tags can annotate a given Series, we made 2,084 original annotations at the Series level that were blindly cross-annotated by an independent curator (Supplementary File 2). Cohen's Kappa coefficient of agreement is a more statistically robust measure of precision than concordance [31][32][33], and we estimated Cohen's Kappa coefficient for 1,827 pairs of Series containing Samples blindly cross-annotated for the same Tag (Fig. 4a). While Samples from the remaining 257 pairs of comparisons at the Series level were highly concordant, Kappa remained undefined because the annotations were uniform for each Study without any variability. We found that the mean Kappa estimate was 0.86, and 81% (1,487/1,827) of the comparisons had perfect Kappa coefficients of 1.0.
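For readers who want to reproduce this style of agreement analysis, a minimal sketch using scikit-learn's cohen_kappa_score is shown below; the two curators' Boolean annotations are hypothetical, not data from the study.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical Boolean annotations by two independent curators for one Series.
curator_a = [True, True, False, False, True, False]
curator_b = [True, True, False, True,  True, False]

# Kappa = 1.0 means perfect agreement; 0 means chance-level agreement.
kappa = cohen_kappa_score(curator_a, curator_b)
print(f"Cohen's kappa: {kappa:.2f}")
```

Note that, as the text observes, Kappa is undefined when both curators assign a single uniform label to every Sample in a Series, since there is no variability against which to measure chance agreement.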
Besides these pairs of annotations sharing perfect agreement, the next most common pattern of agreement centered on Kappa = 0, which represents random agreement, in 156 comparisons (−0.25 ≤ Kappa ≤ 0.25). We found that this random pattern of agreement between pairs of expert annotators involved mistakes in defining RegExs, such as when capturing Age, the most frequent phenotype annotated initially and subsequently validated. However, other examples of random agreement involved poorly designed Tags that asked for ambiguous annotations. For instance, one example was the MB_Histology Tag for the GSE21140 Series, which is supposed to represent a histological annotation for medulloblastoma. The original RegEx captured categorical annotations of medulloblastoma histology (RegEx = '(Classic|Desmoplastic|Large cell anaplastic|MBEN)'). However, the validation RegEx matched on whether the patient had primary medulloblastoma (RegEx = 'Primary medulloblastoma'). When grouped by Tags across multiple Series and Samples, the most discordant Tags (Fig. 4b) all derived from either curator mistakes in defining a RegEx to capture a quantitative value (Onset_age and pH) or poorly designed or ambiguous Tags (MB_Histology, MB_Gender). Additionally, there was a distinct subset of 10 pairs of sets of Sample annotations with perfect disagreement, where Kappa = −1. Almost inevitably, these were mistakes made in matching the RegEx for case or control status. For instance, the largest Series with Kappa = −1 on cross-annotation of 144 Samples was for the RCC_control Tag of the GSE53757 Series, which represents control samples for renal cell carcinoma. The original annotation matched samples with normal kidney (RegEx = 'normal kidney') while the validation annotation matched renal cell carcinoma patients (RegEx = 'clear cell renal cell carcinoma').
Validation of STARGEO.org annotations
To validate the biological accuracy of STARGEO.org annotations, we used The Cancer Genome Atlas (TCGA) 34 as a gold standard for a well-annotated set of functional genomics samples, and we compared the rank correlation of the summary statistics for tumor-normal differential expression between STARGEO.org and TCGA samples. In particular, breast cancer is the best-represented disease among TCGA samples, and we performed differential gene analysis on RNA-Seq data from 1,119 breast cancer tumors relative to 113 normal breast tissue samples as controls. We generated a comparable STARGEO.org measure of differential gene expression for breast cancer with meta-analysis (http://STARGEO.org/analysis/249/) using our crowd-curated annotations. In all, we used 1,234 tumors (cases) versus 535 normal (control) samples of breast tissue over 27 different GEO studies from STARGEO.org. Overall, we found a significant (P ≤ 0.01) Spearman rank correlation of 0.77 (Fig. 5a) across all 19,725 gene effects estimated for the STARGEO.org and TCGA data, and we found 3,168 genes significant at a false discovery rate of 0.1 in both TCGA and STARGEO.org after correcting for multiple tests. Moreover, among the top 200 genes (1%), we found an overlap of 92 most down-regulated and 98 most up-regulated genes (Fig. 5b) shared by both the STARGEO.org and TCGA analyses. This result is highly significant, as an overlap of only two genes is expected by chance.
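The rank-correlation and top-gene-overlap comparison can be sketched in a few lines of Python with SciPy; the effect scores below are simulated stand-ins for the per-gene statistics, not the actual STARGEO.org or TCGA values.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_genes = 19725  # number of gene effects compared in the paper

# Simulated per-gene effect scores for the two analyses (stand-ins only).
stargeo_effects = rng.normal(size=n_genes)
tcga_effects = 0.8 * stargeo_effects + rng.normal(scale=0.5, size=n_genes)

rho, pval = spearmanr(stargeo_effects, tcga_effects)
print(f"Spearman rho = {rho:.2f}, P = {pval:.2g}")

# Overlap among the top 1% (200 genes) ranked by effect score.
top = 200
top_a = set(np.argsort(stargeo_effects)[-top:])
top_b = set(np.argsort(tcga_effects)[-top:])
overlap = len(top_a & top_b)
expected = top * top / n_genes  # about 2 genes expected by chance
print(f"overlap = {overlap}, expected by chance = {expected:.1f}")
```

The chance-overlap arithmetic matches the claim in the text: 200 × 200 / 19,725 ≈ 2 genes.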
Discussion
Robust gene signatures discovered through public disease-related datasets have had tremendous translational impact for biomarker and drug discovery 35 across transplant rejection 36 , lung cancer 37 , pancreatic cancer 38 , chronic renal disease 39 , preeclampsia 40,41 , and sepsis 42 , among others. However, defining robust gene signatures from public data involves a laborious process requiring substantial technical expertise to download, curate, and analyze digital samples across different datasets. While physicians and scientists are the disease experts most incentivized to annotate and subsequently interpret GEO data, the significant bioinformatics burden of doing so precludes their efforts. STARGEO.org immediately solves this problem for individual researchers by providing robust meta-analyzed genomic signatures to users based on their curated annotations of digital samples through a convenient web application. Moreover, STARGEO.org provides a natural mechanism to check those curations for precision and consistency by embracing FAIR principles for crowd-curation. This stands in stark contrast to other attempts to annotate GEO, including GEO itself, that disregard FAIR principles, thereby handicapping the sustainability of such efforts and the development of any robust digital curation community. In this work, we introduce STARGEO.org as a novel web-based application to gain better descriptions of GEO sample phenotypes uniformly across different studies and to define robust differentially expressed gene signatures of disease by meta-analysis of gene expression. Most importantly, STARGEO.org specifically makes every free-text attribute we source from GEO, as well as all curation and analysis data we generate, immediately FAIR. Moreover, by targeting and reimbursing a specialized crowd of biomedical graduate students, we were able to leverage STARGEO.org to curate biological features with high precision. We found that, without any bioinformatics training or experience, the students we recruited were able to dynamically conduct sophisticated meta-analyses to define robust signatures of disease and ultimately discover the molecular pathology of different diseases. As a proof of principle, we demonstrate the biological accuracy of these crowd-curated annotations by significantly recapitulating the differentially expressed genes that define breast cancer relative to a TCGA gold standard for a well-annotated functional genomics dataset. We acknowledge that we cannot estimate the performance of Tags in accurately capturing crowd-curation of sample annotations for lack of an appropriate gold standard of annotations from the original data depositor. In fact, the gold standard of open data curation is manual review by human curators, as we perform here twice with high precision. The high inter-rater reliability we observed among curators suggests that Tags can reproducibly capture the features of biological samples that the original data depositor intended to share. In the absence of an appropriate gold standard, however, it is reasonable to assess curation performance by consensus theory or majority opinion, because aggregation of independent responses across curators is more accurate than any individual curator's response [43][44][45], and this relationship is robust and independent of any explicit bias among curators 46 .
Therefore, while for lack of a gold standard it remains unclear how sensitive or specific the crowd-curated annotations are, we assume accurate annotations given the high inter-rater reliability metrics we demonstrate here, despite any individual curator's unknown bias. Finally, STARGEO.org is designed to be for crowd-curation of open data what GitHub has been for open-source code development: i.e., a community of curators that can openly build large sets of annotations together. Specifically, STARGEO.org is designed to support existing best practices to make research data more findable, accessible, interoperable and reusable (i.e., FAIR principles) to ultimately facilitate knowledge discovery. We embrace FAIR principles for both the crowd-curated sample annotations we generate and the raw sample attribute free-text data that we download from GEO. By making raw GEO data Findable and Accessible, we immediately provide a valuable tool beyond the standard search interface that GEO provides. By building in ontology-mapping functionality from bioontology.org to map our Tags, we immediately make our crowd-curation data Interoperable. We provide a snapshot interface for users to quickly assemble and ultimately freeze snapshots of annotations and digitally publish their snapshots to Zenodo (https://zenodo.org) to promote Reusability. Therefore, by adopting FAIR principles, we may transform STARGEO.org into a translational community resource that can be used to capture open digital curation to characterize the functional genomics of disease on a large scale towards the discovery of novel drugs and biomarkers in this age of precision medicine.
Methods
Using the Amazon Web Services cloud infrastructure, we downloaded over 1.7 TB of public data for all processed expression data and associated attributes for series, samples, and platforms catalogued in GEO (ftp://ftp.ncbi.nih.gov/pub/geo/DATA/), and we developed a scalable database schema to represent their contents and relationships. With this schema, we implemented a web application in the Python (https://www.python.org) programming language using the Django (https://www.djangoproject.com) web development framework that allowed us to crowd-curate a semantic network of Tags and appropriate sample annotations representing biological diseases and other phenotypes. We also implemented the functionality for users to quickly assemble and ultimately freeze snapshots of annotations on STARGEO.org and digitally publish their annotation datasets to Zenodo (https://zenodo.org) for formal citation. For the data behind the web application described here, we filtered GEO for 'expression profiling by microarray' in humans to find 465,770 digital Samples from 11,903 Series (experiments) across 1,682 different Platforms (chipsets) as of December 19, 2013, and we report on curations made on this raw GEO data through 12/31/2015. We full-text indexed all 14,874,580 sample and 283,883 series attributes to facilitate rapid searches at the sample attribute level, a task currently impossible on GEO. We leveraged regular expressions (RegExs) in Python to design an annotation interface for curators to quickly annotate samples with Tags representing biological interpretation. We integrated a blinded validation scheme that allowed for cross-annotation of Tags, from which we derived measurements of precision. We used simple concordance estimates as well as Cohen's Kappa statistic 33 to measure the precision of annotations on blind cross-annotation by independent curators.
Additionally, we mapped all microarray probe identifiers to Entrez gene 27 identifiers using the mygene.info 47 community annotation service. Finally, we designed a simple analytical interface where more advanced curators could design, compute and visualize standard genomic meta-analyses 48 of random and fixed effects across tagged and annotated digital samples. We used STARGEO.org to define a genomic signature for breast cancer on crowd-curated data and compared it with a genomic signature for breast cancer using TCGA data. We used STARGEO.org mappings, based on the mygene.info 47 gene annotation service, to map all probe identifiers to Entrez gene identifiers. For STARGEO.org, we used samples with crowd-curated annotations made across 1,234 cases versus 535 control samples from 27 different GEO experiments. For every gene measured in each study, we estimated the mean difference of contrasts for expression as well as the standard deviation of that mean difference. We used a standard meta-analysis with 1) fixed and 2) random effects models to combine these estimates across studies and generate meta P-values and meta effects across studies. Specifically, we used inverse-variance weighting for pooling of the data across studies, and calculated weights for estimates of random effects with continuous outcome data via the DerSimonian-Laird estimate 49 . We used Python to implement these analyses in STARGEO.org. All raw GEO data, curations, and analyses that we generate are available at the http://STARGEO.org web application portal, with documentation for programmatic download via a representational state transfer (ReST) application programmer interface (API) at http://STARGEO.org/docs. For TCGA, we downloaded RNA-Seq data already preprocessed to transcript counts across genes and deposited in GEO with clinical annotations from thousands of samples from TCGA and matched controls (GSE62944). We selected 1,119 breast cancer cases versus 113 controls and performed two standard types of analyses to define differentially expressed genes: (1) a statistical t-test based on fragments per kilobase per million sequenced reads (FPKM) estimates 50 , and (2) differential gene expression analysis based on the negative binomial distribution (DESeq2) method 51 . We used Spearman rank correlation across all four comparisons of differentially expressed genes between the STARGEO.org (random versus fixed effects) meta-analyses and the TCGA (FPKM versus DESeq2) analyses. Although all the comparisons were highly and significantly correlated by Spearman rank correlation, we found the highest correlation between the STARGEO.org breast cancer genomic signature under random effects and the FPKM analysis for TCGA, and these are reported as our results. To correct significance for multiple tests, we applied the Benjamini-Hochberg procedure 52 and selected genes with a false discovery rate (FDR) < 0.1 (10%). For both the STARGEO.org and TCGA analyses, we scaled the fold change of each gene's effect by its significance (−log10(P-value) × fold change), and used this score to rank genes by their differential expression and estimate the overlap among the top 200 (1%) genes 53 shared between the two datasets. All calculations are provided as Supplementary Data (Supplementary File 3).
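As a sketch of the pooling step described above (not the STARGEO.org source code), the DerSimonian-Laird random-effects estimate can be implemented as follows, taking per-study mean differences and their variances as input; the example values in the call are invented.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooling of per-study mean differences (a sketch)."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                          # fixed-effect inverse-variance weights
    fixed = np.sum(w * effects) / np.sum(w)      # fixed-effect pooled estimate
    q = np.sum(w * (effects - fixed) ** 2)       # Cochran's Q heterogeneity statistic
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                # DerSimonian-Laird between-study variance
    w_star = 1.0 / (variances + tau2)            # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se

# Hypothetical per-study mean differences and variances for one gene.
effect, se = dersimonian_laird([0.80, 1.10, 0.55], [0.04, 0.09, 0.06])
print(f"pooled random-effects estimate = {effect:.3f} +/- {se:.3f}")
```

When the between-study variance tau-squared is zero, the random-effects weights reduce to the fixed-effect inverse-variance weights, which is why the two models agree for homogeneous studies.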
6,260.8
2017-09-19T00:00:00.000
[ "Computer Science" ]
Distribution of the ultrasound-guided T12 erector spinae plane block in the dorsal region
Background This study aimed to explore the distribution of the erector spinae plane block performed at the level of the twelfth thoracic vertebra (T12) in the dorsal region under ultrasound guidance. Methods A total of 28 patients who underwent elective lumbar surgery were enrolled in the present study. These patients were aged between 18 and 65 years, with an American Society of Anesthesiologists (ASA) grade of 1 or 2. The erector spinae plane block at the T12 transverse process was performed under ultrasound guidance, and each side was injected with 25 ml of 0.4% ropivacaine hydrochloride + 2 mg of dexamethasone. The blocked back areas were measured using the cold-warm method (the back was divided into 11 areas [T7–S1] with body surface markers). At 10, 20, 30, 40, 50, and 60 min after the drug injection, the effectiveness of the regional block was recorded. The presence of puncture hematoma, local anesthetic poisoning, nausea, vomiting, headache, and dizziness after the block was recorded. Results The range of the T12 transverse process block was essentially fixed at 30 min after the single injection. No pneumothorax, hematoma, or local anesthetic poisoning occurred in any of the patients. Conclusion The effective longitudinal plane of the erector spinae plane block at the T12 transverse process was mainly distributed over the T9–L5 dorsal cutaneous branches, and the distribution of the block area was safe and stable.
Background
The erector spinae plane block (ESPB) is a novel regional block technique developed in recent years; however, its indications have not been clearly defined [1,2]. With the development of ultrasound visualization technology and the clear elucidation of the neuroanatomy of the posterior branch of the spinal nerve, the bilateral ESPB has been applied more and more frequently in clinics. However, the anatomy, injection method, diffusion range, and the dose and volume of local anesthetics are not well established [3,4]. ESPB can be performed at different transverse process levels, which makes it easy to control the block range; its disadvantage is that relevant clinical studies are currently scarce. It has been reported that, under ultrasound guidance, the analgesic range of a T12-level ESPB can reach the L5 level, and studies have suggested that local anesthetics can diffuse to L2–L3 when the erector spinae block is performed at the T7 level [1]. Tulgar et al. [2] found that when ESPB was performed at the transverse process of the L4 vertebral body, the measured sensory block plane ranged from T12 to L4. However, there is no clear report on the safety and stability of the block area distribution. In addition, the clinical application of the erector spinae block directly at the level of the lumbar spine has caused some orthopedic surgeons to worry about whether the puncture site could cause incision infection. The present study aimed to explore the distribution of the ultrasound-guided ESPB at the T12 level in the dorsal region and to explore whether, with a puncture point as far as possible from the lumbar incision, the drug solution can diffuse to the lumbar region effectively and stably, in order to provide a reference for analgesia in lumbar surgery.
General clinical data
The Ethics Committee of Shanxi Provincial People's Hospital approved the present study, and the patients or their families provided informed consent. A total of 28 patients (13 male and 15 female) who underwent elective posterior lumbar surgery under general anesthesia were enrolled in the present study. The age of these patients ranged from 18 to 65 years, with an average age of 53.4 ± 9.8 years. The body mass index (BMI) of these patients ranged from 18 to 28 kg/m2 (average: 24.5 ± 1.7 kg/m2). The American Society of Anesthesiologists (ASA) grade was 1 or 2. These patients had no history of local anesthetic allergy, mental illness, or coagulation dysfunction.
Surgical methods
After the patient entered the pre-anesthesia room, venous access was established. Blood pressure, electrocardiogram (ECG), and blood oxygen saturation (SpO2) were monitored. The patient lay in a prone position. Under the guidance of a portable ultrasound device, 25 ml of 0.4% ropivacaine with dexamethasone (2 mg, consistent with the dosing reported above) was injected at the erector spinae of each of the bilateral T12 transverse processes. At 10, 20, 30, 40, 50, and 60 min after the block, sites on the back were defined as observation points. The back was divided into 11 regions from T7 to L5 along the posterior median line. The cold-warm method (75% alcohol) was used to test the block in each area; when the cold-warm sensation was weakened, the block was considered effective. At 10, 20, 30, 40, 50, and 60 min after the injection of the drugs, the effectiveness of the regional block was recorded. The fluctuations of mean arterial pressure (MAP) and heart rate (HR) were maintained within 20% of baseline. Changes in vital signs were measured and recorded every 5 min until the patient entered the operating room from the pre-anesthesia room. The presence of puncture hematoma, local anesthetic poisoning, nausea, vomiting, shivering, itching, headache, and dizziness after the block was recorded. After entering the operating room, posterior lumbar spinal canal decompression, intervertebral disc removal, bone graft fusion, and internal fixation were performed under general anesthesia. The patient was placed in a prone position, positioned at the lumbar bridge of the operating bed with thin pillows on both sides of the iliac region. Surgical procedure: a midline incision was made in the back. The skin, subcutaneous tissue, and fascia were incised, and the supraspinous ligament was exposed. The supraspinous ligament was cut along the midline of the spinous processes down to the bone, and the sacrospinalis muscle was stripped back to the articular processes. An automatic retractor was used to expose the lamina. The diseased vertebral body was located with the aid of a C-arm, and the entry point of each vertebra was determined. An awl was used to penetrate the cortex at the pedicle entrance, and the instrument was advanced to break through the cancellous bone. A probe was used to detect whether the pedicle bone canal had been breached. Bone wax was applied to the tip of a Kirschner wire for positioning, and the wire was withdrawn after C-arm imaging confirmed accurate placement. A pedicle screw of appropriate length and type was selected, and a T-wrench was used to insert the screw. C-arm fluoroscopy was used to confirm a good screw position. Spinous process scissors were used to remove the spinous process of the required segment.
The lamina of the segment requiring decompression was removed with lamina rongeurs, and a nerve dissector was used to explore the nerve root canal. The posterior longitudinal ligament, annulus fibrosus, and nucleus pulposus were exposed, and the intervertebral disc was removed using forceps. A connecting rod, bone graft, flat plate, and cross-connector were installed. The incision was irrigated, hemostasis was achieved, a drain was placed, and the wound was sutured.
Statistical analysis
The data were statistically analyzed using SPSS 22.0. Normally distributed measurement data were expressed as mean ± standard deviation (x ± SD) and subjected to cluster analysis [5].
Block effect
Under ultrasound guidance, the block range of the erector spinae plane block at the T12 transverse process was essentially fixed, covering the T8–L5 levels. The effective block region covered the T9–L5 levels in ≥ 80% of patients and the T8–L5 levels in ≥ 50% and < 80% of patients (Table 1). The effective rate of the block in the different regions was systematically clustered. According to the cluster results and professional knowledge, the 11 regions were divided into three clusters: the 3rd–9th regions formed the first cluster, the 2nd and 10th regions the second cluster, and the 1st and 11th regions the third cluster (a minimal sketch of this kind of clustering is given at the end of this section). The combined analysis of observation time and effective block rate revealed that the first cluster was the area with the most rapid onset of the block; the onset in the second cluster was slower than in the first, and the onset in the third cluster was the slowest. Regarding the overall onset time, the block began to take effect at 20 min after injection; at 30 min after injection, the block effect was relatively stable, and within 40–60 min after injection there were no significant changes in the block effect. A further cluster analysis was performed on the onset rate of the block in different time intervals, and the overall onset rate gradually increased with time. Clustering according to the improvement in effective rate across time intervals divided the regions into two clusters: the first cluster comprised the 1st, 2nd, 10th, and 11th regions, and the second cluster the 3rd–9th regions. The first cluster had a slowly increasing block rate, while the second cluster had a rapid onset within 20 min that subsequently stabilized.
Adverse reactions
All 28 patients successfully completed the bilateral ESPB under ultrasound guidance. No local hematoma, pneumothorax, infection, nerve injury, or other complications occurred, and no adverse reactions such as nausea, vomiting, dizziness, or drowsiness were observed. One patient had transient mild hypotension and mild bradycardia, which was relieved after intravenous injection of 0.5 mg of atropine.
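As an illustration of the hierarchical clustering of regional block-onset rates described above, the sketch below clusters the 11 dorsal regions into three groups with SciPy; the rate matrix is invented for illustration only, and the study's actual values are in Table 1.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical effective-block rates for the 11 dorsal regions at the
# 10/20/30/40/50/60 min observation points (rows: regions 1-11).
rates = np.array([
    [0.0, 0.1, 0.2, 0.3, 0.3, 0.3],  # region 1  (slow onset)
    [0.1, 0.3, 0.5, 0.6, 0.6, 0.6],  # region 2
    [0.4, 0.8, 0.9, 0.9, 0.9, 0.9],  # region 3  (rapid onset)
    [0.5, 0.8, 0.9, 1.0, 1.0, 1.0],  # region 4
    [0.5, 0.9, 1.0, 1.0, 1.0, 1.0],  # region 5
    [0.5, 0.9, 1.0, 1.0, 1.0, 1.0],  # region 6
    [0.5, 0.9, 1.0, 1.0, 1.0, 1.0],  # region 7
    [0.4, 0.8, 0.9, 0.9, 0.9, 0.9],  # region 8
    [0.4, 0.8, 0.9, 0.9, 0.9, 0.9],  # region 9
    [0.1, 0.3, 0.5, 0.6, 0.6, 0.6],  # region 10
    [0.0, 0.1, 0.2, 0.3, 0.3, 0.3],  # region 11 (slowest onset)
])

Z = linkage(rates, method="ward")                   # hierarchical (Ward) clustering
clusters = fcluster(Z, t=3, criterion="maxclust")   # cut into three clusters, as reported
print(clusters)  # cluster label per region
```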
Discussion
Posterior lumbar surgery involves extensive trauma: the operation strips and retracts the paravertebral regions, which can cause moderate to severe postoperative pain and delay postoperative recovery. The inflammatory reactions caused by surgical trauma may lead to peripheral and central sensitization, making postoperative pain more difficult to control [6]. Previous studies have reported that the ESPB should be performed at the level of the lumbar vertebrae; however, since the puncture site is then close to the incision site, orthopedic doctors worry about incision infection after lumbar surgery [7,8]. Therefore, in the present study, the probe was slid approximately 2.5–3.0 cm laterally from the midline, parallel to the long axis, to find the T12 transverse process. The puncture point was vertical to the T12 transverse process and approximately 3 cm cephalad along the long axis, keeping it as far as possible from the incision [9]. A related study revealed that the concentration of local anesthetic is an important factor in the onset time of a nerve block: the higher the concentration, the faster the onset [10]. In clinics, the commonly used concentration of ropivacaine is 0.375–0.500%. In the present study, a total of 50 ml of 0.4% ropivacaine was used bilaterally so that the concentration and volume reached a proper ratio to achieve the expected effect; this approach achieves the clinical effect without compromising patient safety [11]. In the present study, an ultrasound-guided bilateral ESPB was conducted. Under dynamic direct vision, local anesthetics were accurately injected onto the deep surface of the erector spinae using the in-plane technique, and the diffusion of the local anesthetic was observed. The results revealed that the effect of the block was good. The present results showed that the block effect was relatively stable at 30 min after the T12-level erector spinae block. The range of the block was within the T9–L5 levels: the effective block area was within the T9–L5 levels in ≥ 80% of patients, and within the T8–L5 levels in ≥ 50% and < 80% of patients. Cluster analysis is a statistical method in which subjects are divided into different subtypes based on their degree of similarity, and the similarity between different clusters is observed and explored. In the present study, the distribution of the effective block area of the T12 transverse process erector spinae block over the lumbar-back region was observed, and cluster analysis was used to analyze and summarize the block effect and onset rate to determine the internal connections between the different block areas. The effective rate of the block in different regions was systematically clustered, and the results revealed that the first 20 min is the rapid-onset period of the block. In terms of the overall onset time, the block took effect 10–20 min after drug injection, and at 30 min after injection the block effect stabilized. There were some differences in the onset rate and effective block rate of the T12 transverse process erector spinae block across the different regions. The erector spinae covers the whole back, and the deep layer of the thoracolumbar fascia between the erector spinae and the transverse processes of the lumbar vertebrae extends from the thoracic to the lumbar vertebrae; this provides an anatomical basis for the diffusion of local anesthetics in the cephalad and caudad directions. One study considered that the main mechanism of the anesthetic might be blockade of the posterior branch of the spinal nerve as the drug diffuses in the fascial space [12]. In the present study, the liquid medicine diffused to the periphery from the injection point centered at the T12 transverse process.
In addition, since the needle was inserted in-plane at 45–60° with the needle tip directed parallel to the caudal side of the spine, the liquid medicine spread from the proximal to the distal end. This shows that the closer an area is to the center point, the earlier the spinal nerve contacts the drug and the faster the drug works; conversely, the farther away, the slower the onset. The difference in effective rate was most likely because the injected liquid medicine in the thoracic region flowed to the thoracic and lumbar segments, and the anatomical paths of the thoracic and lumbar segments through which the liquid medicine passed act differently on the spinal nerves [13]. Under ultrasound guidance of the ESPB, the ultrasound image of the transverse process is easily identified, and there are no significant blood vessels, nerves, or other organs over the transverse process. Therefore, the ESPB can significantly reduce the risk of adverse events such as hematoma, nerve injury, pneumothorax, and block failure. In addition, no blood was observed on needle aspiration. Local anesthetics were injected onto the deep surface of the erector spinae, and the separation of the erector spinae from the surface of the transverse process was observed to ensure that the injection site of the local anesthetic was correct and to prevent local anesthetic poisoning. The present study revealed that mild hypotension occurred in one patient. In the literature, the view of these paraaxonal nerve blocks is that these techniques not only completely cover the distribution area of the cutaneous branches but also block the sympathetic nerve [14,15]. There are several limitations to the present study. Due to objective constraints, the sample size of this study is limited, and larger samples are needed in future work; secondly, this is a descriptive study without a control group, and a control group will be added in future studies for further demonstration. In summary, in the present study, 25 ml of 0.4% ropivacaine hydrochloride and 2 mg of dexamethasone were used for the ultrasound-guided erector spinae plane block at the T12 transverse process, and the block range was from T9 to L5 within 30 min after drug injection. The effect of the block was good; therefore, this technique can be used for incision analgesia within this range after lumbar surgery. No pneumothorax, hematoma, or local anesthetic poisoning occurred in any of the patients in the present study, so the technique is fairly safe. Further studies can be conducted to explore the effect of the erector spinae plane block on postoperative analgesia and long-term prognosis.
Abbreviations ASA: American Society of Anesthesiologists; ECG: Electrocardiogram; MAP: Mean arterial pressure; HR: Heart rate
3,785.6
2021-01-12T00:00:00.000
[ "Medicine", "Engineering" ]
Application of Metal-Organic Frameworks and Covalent Organic Frameworks as (Photo)Active Material in Hybrid Photovoltaic Technologies: Metal-organic frameworks (MOFs) and covalent organic frameworks (COFs) are two innovative classes of porous coordination polymers. MOFs are three-dimensional materials made up of secondary building blocks comprised of metal ions/clusters and organic ligands, whereas COFs are 2D or 3D highly porous organic solids made up of light elements (i.e., H, B, C, N, O). Both MOFs and COFs, being highly conjugated scaffolds, are very promising as photoactive materials for applications in photocatalysis and artificial photosynthesis because of their tunable electronic properties, high surface area, remarkable light and thermal stability, easy and relatively low-cost synthesis, and structural versatility. These properties make them perfectly suitable for photovoltaic applications: throughout this review, we summarize recent advances in the employment of both MOFs and COFs in emerging photovoltaics, namely dye-sensitized solar cells (DSSCs), organic photovoltaics (OPV) and perovskite solar cells (PSCs). MOFs are successfully implemented in DSSCs as photoanodic material or solid-state sensitizers, and in PSCs mainly as hole- or electron-transporting materials. An innovative paradigm, in which the porous conductive polymer acts as a stand-alone sensitized photoanode, is exploited too. Conversely, COFs are mostly implemented as photoactive material or as hole-transporting material in PSCs.
Introduction
In the last decades, the demand for sustainable clean energy sources, particularly solar energy, has constantly risen. In spite of the fact that the energy supplied by the Sun's radiation over one year is roughly 10,000 times higher than the world's current rate of energy consumption, the state of the art of photovoltaic devices is not yet satisfactory because of low efficiency, high cost and/or limited scale [1]. Nowadays, the most exploited (and efficient) technology for the conversion of solar energy to electricity consists of silicon-based devices. In a classical silicon cell, the top and bottom layers are constituted of an n-doped and a p-doped silicon wafer, in which the charge of the mobile carriers is negative and positive, respectively. Although both metal/covalent organic frameworks are generally used for gas storage or catalysis, their optoelectronic and energy storage/conversion properties have recently been explored too [19,31,32]. In this work, we review and critically discuss recent advancements in the application of MOFs and COFs as active materials in emerging photovoltaic technology. It is worth mentioning that throughout the present review, we mainly focus our attention on the application of MOFs and COFs in two classes of emerging photovoltaic technology, namely DSSCs and PSCs. With respect to OPV, as far as we are aware, just a few examples have been reported in the literature; therefore, we decided not to tackle this aspect specifically here. Indeed, the reader is kindly referred to some recent reviews for the principles of this technology [33,34]. Theoretically, porous polymers are very promising for this technology, being mainly composed of two different organic building blocks which can exhibit donor and acceptor features and can be used as n-type and p-type heterojunctions. The first report on COFs in OPVs is owed to Prof. T. Bein and Prof. D. Jiang [35][36][37].
Notwithstanding their promising features, the effective application of these compounds is limited by solubility issues affecting the processability of the material itself. This is mainly a drawback when COFs are compared to other porous organic polymers, such as amorphous conjugated microporous polymers (CMPs) or polymers with intrinsic microporosity (PIMs) (Figure 1), that are highly processable [38]. COFs are starting to be implemented as conductive polymers in photovoltaic applications. COFs are generally constituted by two different organic linkers that can have a ditopic (C2, two active functional groups), tritopic (C3, three active functional groups) or tetratopic (C4, four active functional groups) geometry, leading to 2D or 3D materials. Figure S1A shows different monomers that can be employed to obtain 2D COFs: indeed, there are no very strict requirements on the choice of starting material (e.g., from bi- to tetradentate monomers). On the other hand, tetragonal and tetradentate monomers are necessary to obtain 3D structures (Figure S1B). COFs are usually synthesized as bulk materials by using boronate esters or imine-bond-bridged systems as precursors (Figure 2) [62]. Yet, the so-obtained COFs usually have poor electrical conductivity along the z axis, being constituted by stacked 2D layers. In this context, the introduction of an imine-bridged pattern offers a feasible approach to partially extend the conjugation throughout the different layers. Indeed, whereas in boronate COFs the charge preferentially moves along the in-plane directions, imine-based COFs also allow lateral diffusion throughout the framework. Therefore, boronate ester-based COFs can be considered as small-molecule-based electronics, whereas imine-based COFs are generally described as conjugated conducting polymers [19,63].
Nanostructured systems are usually synthesized by the solvothermal method, which allows one to finely control the morphology and the crystallinity of the materials [64][65][66]. However, this approach involves quite harsh experimental conditions (e.g., aggressive solvents, relatively high temperatures, the use of sealed vessels) as a synthetic method to produce COFs. Straightforwardly, researchers have been attempting to figure out new synthetic routes to obtain COFs: i.e., microwave and room-temperature synthesis (namely mechanochemical and rapid solution-phase approaches) and massive synthesis [67], among others [62]. Prof. Cooper and co-workers were the first to attempt the microwave method, synthesizing COF-5 and COF-102 based on boron linkages [68]. They successfully obtained COFs 200 times faster (i.e., 20 min reaction) compared to the solvothermal method (72 h). Furthermore, mechanochemical synthesis is fast and environmentally friendly, minimizing the amount of solvents employed and the energy required throughout the synthetic procedure. COFs based on imine bonds were synthesized through this method by Biswal et al. too [69]. A simple and facile room-temperature solution-phase route for the fabrication of spherical COF-TpBD was carried out by Yan's group [70]. The obtained COFs showed quite good thermal stability and very short synthesis times (i.e., 30 min). On one hand, MW and RT synthesis could be usefully employed at the laboratory scale; on the other hand, massive synthesis methods could be easily implemented for the industrial production of COFs.
Dye-Sensitized Solar Cells (DSSCs)
Dye-sensitized solar cells (DSSCs) [71,72] are among the most studied devices in hybrid photovoltaic technologies. They were first reported by O'Regan and Grätzel in 1991 [73] and, since then, they have drawn a lot of attention in the scientific community: the replacement of a thin layer of titanium dioxide with a mesoporous one allowed the device efficiency to be enhanced up to 7.1%, and the latter reached 11% by sensitizing the nanocrystalline TiO2 semiconductor with Ru-based dye molecules, namely N719 or CYC-B11 [74,75]. Dye-sensitized solar cells have been widely exploited due to their low cost, relatively high solar-to-energy conversion efficiency and easy, cheap and scalable fabrication process. To date, the most efficient DSSC has reached 14% using porphyrin sensitizers, as reported by Grätzel and co-workers [76].
Dye-Sensitized Solar Cells (DSSCs)

Dye-sensitized solar cells (DSSCs) [71,72] are among the most studied devices in hybrid photovoltaics. They were first reported by O'Regan and Grätzel in 1991 [73] and have since drawn a great deal of attention in the scientific community: replacing a thin titanium dioxide layer with a mesoporous one raised the device efficiency to 7.1%, and sensitizing the nanocrystalline TiO2 semiconductor with Ru-based dyes such as N719 or CYC-B11 pushed it to 11% [74,75]. DSSCs have been widely exploited owing to their low cost, relatively high solar-to-energy conversion efficiency and easy, cheap and scalable fabrication. To date, the most efficient DSSC has reached 14%, using porphyrin sensitizers, as reported by Grätzel and co-workers [76].

With respect to device architecture, a classical DSSC comprises five main components, as shown in Figure 3: a transparent conductive oxide (TCO) substrate [77], a nanostructured n-type or p-type semiconductor acting as photoanode or photocathode, respectively [78,79], a visible-light-absorbing dye chemisorbed onto the semiconductor surface [80], an electrolyte containing a redox mediator [81] and a counter electrode [82]. The working principle of DSSCs is inspired by natural photosynthesis. A thorough analysis of the DSSC working principle falls outside the scope of the present review and has been amply discussed in some excellent reviews [83,84]; it is only briefly recalled here. In a working device, sunlight passes through the photoanode and promotes an electron from the ground state to an excited state of the dye. From the latter, the electron is injected into the conduction band of the n-type semiconductor (usually TiO2) while, at the same time, the reduced species in the electrolyte regenerates the ground state of the sensitizer. From the conduction band of TiO2, the electron flows through an external circuit to the counter electrode, where it reduces the oxidized species of the redox mediator, thus closing the circuit and producing energy [85].
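Throughout this review, device performance is quoted through the short-circuit current density (JSC), the open-circuit voltage (VOC), the fill factor (FF) and the power conversion efficiency (PCE, η), which are related by η = JSC·VOC·FF/Pin, with Pin = 100 mW·cm−2 under standard 1 Sun (AM1.5G) illumination. The following minimal Python sketch encodes this relation; the numerical values in the example are purely illustrative and do not refer to any specific device discussed here:

```python
def pce(jsc_ma_cm2: float, voc_v: float, ff: float,
        p_in_mw_cm2: float = 100.0) -> float:
    """Power conversion efficiency (%) from the J-V parameters.

    jsc_ma_cm2  -- short-circuit current density in mA/cm^2
    voc_v       -- open-circuit voltage in V
    ff          -- fill factor, between 0 and 1
    p_in_mw_cm2 -- incident power density; 100 mW/cm^2 corresponds to 1 Sun (AM1.5G)
    """
    return 100.0 * jsc_ma_cm2 * voc_v * ff / p_in_mw_cm2

# Illustrative (hypothetical) parameters for a mid-range DSSC:
print(f"PCE = {pce(18.0, 0.72, 0.70):.1f}%")  # -> PCE = 9.1%
```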
MOFs in DSSCs

In classical DSSCs, MOFs and COFs, owing to their chemical versatility, do not have a unique role. They can be employed as sensitizers [86-89], electrolyte additives [90] or electrode materials [91-93]; in some cases, they have been used as templates to obtain materials with tailored properties [94-97]. Hereafter, we report on the different approaches that have been explored in the literature. It is worth mentioning that, in some cases, a univocal categorization is unfeasible due to the hybrid nature of the investigated material [1,98-100]. A truly unique use of a MOF was recently proposed by Sarwar et al. [101], who employed an Al-based MOF as a gelling agent in quasi-solid-state DSSCs. Optimizing the MOF/electrolyte ratio allowed the photoelectrochemical response of the devices to be modulated. The presence of the Al3+ cation (albeit embedded in the MOF structure) tuned the VOC and mitigated electronic recombination, yielding slightly better efficiency than classical DSSCs. More remarkably, good stability was achieved: only a 6% drop (after 250 h) was recorded when the devices were stressed at 60 °C.

MOF as Photoelectrode

Titanium dioxide is the most exploited semiconducting material for photoanodes in DSSCs owing to its low cost and extreme stability under solar irradiation. Additionally, being a large-band-gap semiconductor (BG = 3.2 eV) [102], it does not absorb visible light; very interestingly, it can be obtained as mesoporous arrays that permit very high sensitizer loadings. TiO2 is a low-cost, widely available, non-toxic and biocompatible material commonly employed in health care products and domestic applications [103,104]. Various alternative photoanode materials have been explored for DSSCs: ZnO, for example, has a conduction band and work function similar to those of TiO2 together with higher carrier mobility [105], but its low stability under solar radiation and in acidic environments, as well as the likely formation of aggregates on its surface, are important drawbacks [106]. Other semiconductors have also been examined, e.g., CeO2 [107], WO3 [108], SrTiO3 [109] and Nb2O5 [110], but the photoelectrochemical performance assured by titanium dioxide remains unbeaten. With respect to conventional photoelectrode materials, MOFs have some unique features: they can simultaneously act as both electrode and sensitizer, the metallic core being directly linked to (chromophoric) organic units (see also Section 2.1.2). The wide surface area of this class of materials can improve dye loading as well as tune the charge-transfer kinetics at the electrode/electrolyte interface.
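The statement that TiO2 does not absorb visible light follows directly from its band gap: the absorption onset wavelength is λ ≈ hc/Eg ≈ 1240 nm·eV/Eg. A quick conversion (a sketch; the 2.75 eV entry anticipates the Zr-based MOF discussed later in this section):

```python
def absorption_onset_nm(bandgap_ev: float) -> float:
    """Absorption onset wavelength (nm) from the optical band gap (eV), lambda = hc/E."""
    return 1239.84 / bandgap_ev  # hc ~= 1239.84 eV*nm

print(f"TiO2, Eg = 3.2 eV  -> onset at {absorption_onset_nm(3.2):.0f} nm (UV only)")
print(f"MOF,  Eg = 2.75 eV -> onset at {absorption_onset_nm(2.75):.0f} nm (visible)")
```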
In photocatalytic applications, semiconductor nanoparticles are generally adopted, their large band-gap energy confining absorption to the UV range of the electromagnetic spectrum. However, the sintering procedure required to electronically connect the particles to one another markedly decreases their surface area, lowering the dye loading and light harvesting. To overcome this limitation, nanostructured porous matrices can be used, and MOFs are well suited to this role owing to their large surface areas (>1000 m2/g) and high porosity [111,112]. Indeed, numerous studies have highlighted the promise of MOFs as photoactive materials. Li et al. reported the first application of a MOF in DSSCs [93]: they showed that a highly porous MOF, the zeolitic imidazolate framework ZIF-8, composed of zinc ions coordinated by four imidazolate ligands and employed as an electrode/electrolyte interfacial layer, improved the dye loading and minimized interfacial charge recombination thanks to its electrically insulating character. They synthesized an ultrathin Zn-based MOF (i.e., ZIF-8, Figure 4) that was coated onto a TiO2 anode through a post-treatment approach [93]. The very high specific surface area of MOFs [113-115] was shown to increase the amount of adsorbed dye (N719) by roughly 60%, from 0.71 × 10−7 to 1.13 × 10−7 mol·cm−2 for the bare and modified photoanode, respectively. Unfortunately, the core/shell structure of the TiO2/ZIF-8 electrode heavily hampered electron injection from the sensitizer into the conduction band of TiO2, reducing the short-circuit current. Very interestingly, the value of VOC correlated linearly with the thickness of the ZIF-8 coating layer [116]. The authors also investigated how the photovoltaic performance varied with the growth time of the ZIF-8 layer: the best photovoltaic properties were achieved for a growth time of about 7 min, whereas further increasing the ZIF-8 growth time decreased both the short-circuit photocurrent (JSC) and the power conversion efficiency (η) [93].
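A quick check of the relative increase in dye loading quoted above:

```python
bare_loading = 0.71e-7      # N719 loading on the bare TiO2 photoanode, mol/cm^2
zif8_loading = 1.13e-7      # loading after coating with the ZIF-8 interfacial layer

increase_pct = (zif8_loading - bare_loading) / bare_loading * 100
print(f"Relative increase: {increase_pct:.0f}%")  # -> ~59%
```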
More recently, He et al. reported on the use of UiO-66 [117] or ZIF-8 in conjunction with reduced graphene oxide (rGO) to modify the photoanode [118]. The UiO-66-rGO/TiO2 and ZIF-8-rGO/TiO2 composites were prepared by physical mixing to produce a slurry that was then deposited by doctor blade onto FTO glass. The graphene oxide-MOF photoanodes showed enhanced photovoltaic performance, increasing the scattering of incident light and exhibiting a low loss ratio of photogenerated electrons (as evidenced by a small dark current), leading to a PCE close to 8%. To further improve the morphological features of TiO2, Chi and co-workers reported the use of MIL-125(Ti) along with poly(ethylene glycol) diglycidyl ether (PEGDGE) [119]. MIL-125(Ti) was used to synthesize mesoporous hierarchical TiO2 (hier-TiO2) photoanodes with a large surface area and a variety of nanostructures depending on the calcination procedure [120]. At the same time, the use of PEGDGE led to the formation of larger particles and changed the nanoparticle morphology from circular plates to bipyramids, providing better mechanical stability and faster ion transport via Grotthuss hopping compared to the systems reported by other groups [121,122]. They measured the J-V curves of DSSCs with different photoanode structures at 1 Sun: according to the photovoltaic parameters, the hier-TiO2 bilayer photoanodes boosted the solar energy conversion efficiency up to 7%, much higher than that of the nanocrystalline TiO2 (nc-TiO2) monolayer photoanode (4.6%). Durability tests showed no decrease in cell efficiency at room temperature over several days. A similar approach was attempted by Ramasubbu and co-workers [123], who prepared 3D mesoporous TiO2-Ni-MOF composite aerogels via a sol-gel method and used them as photoanode materials for quasi-solid dye-sensitized solar cells (QSDSCs). The photoanode was prepared by spin coating and assembled into complete devices, leading to a PCE close to 9%, higher than that obtained with unmodified aerogels (7%); this was ascribed to increased photocurrent density, reduced charge-transfer resistance and suppressed electron recombination, as proven by photocurrent density-applied voltage curves and electrochemical impedance measurements. In Table 1, the most effective MOFs implemented as photoelectrodes in DSSCs are summarized.

Table 1. List of MOFs effectively employed as photoelectrodes in DSSCs. For the sake of brevity, the SBUs of each MOF (i.e., the metal and the organic linker) are indicated separately. Note that the reported photoconversion efficiencies cannot be compared to one another, even though they shine light on the potential of this class of materials.
MOFs as Sensitizers

In dye-sensitized solar cells, the sensitizer plays a crucial role and its choice is not trivial; a good dye must fulfil some very strict requirements [129-132]. First of all, it should possess a very high molar extinction coefficient and broad absorption in the visible/near-infrared (NIR) region of the solar spectrum in order to capture as much radiation as possible. It should also possess one or more anchoring units so as to bind firmly to the semiconductor surface and to ensure proper matching between the HOMO or LUMO level of the sensitizer and the valence or conduction band of the semiconductor. In this regard, a sensitizer should be designed to ensure, on the one hand, fast and quantitative charge injection into the semiconductor and, on the other, easy regeneration by the redox mediator [133,134]. Furthermore, a dye should be thermally, photochemically and physically stable once chemisorbed onto the semiconductor surface, and it should help enhance the wettability of the electrode toward the electrolytic solution. Both metal-organic and fully organic sensitizers have been deeply investigated and their properties tuned; yet, they somewhat suffer in terms of both thermal and photophysical stability. In this context, MOFs offer a viable alternative owing to their extreme robustness; additionally, their high degree of order (at both medium and long range) can create channels that the electrolyte can soak into, ensuring very fast regeneration of the sensitizer's excited state.

Ruthenium-based sensitizers ensure very high photoconversion efficiency, but they partially limit the long-term stability of the device because they desorb from the photoanode surface in contact with organic-based electrolytes, i.e., ACN or MPN. Additionally, ruthenium is a rather rare metal and its cost could prevent large-scale production of DSSCs. Nowadays, organometallic sensitizers have been replaced by metal-free dyes that retain relatively high efficiency. These molecules generally consist of an electron-donor group (e.g., triphenylamine) linked to an acceptor group (e.g., cyanoacrylic acid) [135-137], generating a donor-π-acceptor (D-π-A) system with the acceptor moiety close to the electrode surface to ensure effective charge injection. In this respect, MOFs could be used as sensitizers, with different dye molecules serving as building blocks to extend the spectral response of TiO2 into the visible region. The light-harvesting capacity of MOFs is essentially determined by the electronic and morphological features of the organic linker. The most common linkers absorb light only from the UV to the blue region; recently, some novel structures, e.g., 2,5-dihydroxyterephthalic acid (H4DOBDC) and 2-aminoterephthalic acid (NH2-bdc), were synthesized to extend the absorption further across the solar spectrum [98,99]. In 2014, Gao et al. reported the synthesis of an organic linker (i.e., H4DOBDC) to build a p-type MOF (i.e., the Ti(IV)-based NTU-9) with visible-light photoresponse: the latter showed strong absorption in the visible region (up to 750 nm).
To further extend the absorption spectrum, Long and co-workers synthesized a MOF in which Zr6O32 units are linked to one another by NH2-bdc (2-aminoterephthalate): UV-vis diffuse reflectance studies revealed an optical band gap of 2.75 eV (absorption peak centered at 450 nm) [100]. In 2012, Kundu et al. reported, for the first time, a ZnO-based photoanode built from MOFs that was not sensitized with any dye (Figure 5) [92]; the MOF itself behaved as the actual sensitizer. They obtained hexagonal-column-shaped, rod-shaped or elliptical ZnO structures depending on the synthetic environment (i.e., under nitrogen or air) and the starting material (ZnCl2 or ZnBr2). The photoanodes thus obtained effectively absorbed visible light between 510 and 605 nm; nevertheless, the photoconversion efficiency remained very low owing to the poor charge-transport features of the hybrid electrode. Lee et al. tried to overcome this issue with an unconventional but very promising approach: they produced a TiO2-based DSSC in which a thin layer of a copper-based MOF (i.e., copper(II) benzene-1,3,5-tricarboxylate) acted as the light absorber [86]. The MOF film was deposited by a layer-by-layer method and further doped with iodine to enhance its conductivity and control the charge-transfer reactions. The I-doped device achieved an overall photoconversion efficiency of up to 0.26% (JSC = 1.25 mA·cm−2 under 1 Sun illumination), a 40-fold increase over the undoped DSSC, ascribed to a reduction of the charge-transfer resistance as proven by electrochemical impedance spectroscopy (EIS).
Lee et al. also reported a TiO2-MWCNT composite photoanode sensitized with a Cu-based MOF (i.e., copper(II) benzene-1,3,5-tricarboxylate) [126]. Since Cu-MOFs are insulating [86], iodine doping was required to enhance the conductivity of the resulting film. Although the bare TiO2-MWCNT composite film did not show any significant light absorption, after sensitization with the Cu-MOF the composite presented visible-light absorption at around 680 nm [93]. Even so, the TiO2/MOF composite device yielded a rather low photoconversion efficiency (nearly 0.3%), which was then boosted to 0.46% by using a MWCNT-composited anode. In a subsequent study, the same authors used Ru instead of Cu as the metal node: both the HOMO and LUMO energy levels of the Ru-based MOF were better matched to the TiO2 conduction band (Figure 6), leading to a power conversion efficiency of up to 1.22%, roughly three times higher than that of the earlier Cu-MOF-based device [88]. In a different study, Maza and co-workers employed a Ru-based MOF (namely Ru(II)L2L′, with L = 2,2′-bipyridyl and L′ = 2,2′-bipyridine-5,5′-dicarboxylic acid) as the sensitizer on a TiO2 electrode [87].
Porphyrin-based MOFs produce quite limited photocurrents under visible light (PCE: 0.0026-0.45%), as reported by Wöll [89,138], Allendorf [139] and others [140]; Ru(bpy)3 2+-based MOFs perform slightly better [87]. Their relatively low photoelectrochemical performance can be ascribed to the slow exciton migration and ineffective charge transport of 3D porous frameworks [89,138]. Recently, Gordillo et al. reported the spontaneous solvothermal growth of (100)-oriented PPF-11 [50] films onto ZnO/FTO electrodes [127]. The MOF-sensitized photoanodes thus obtained displayed unprecedented photoelectrochemical parameters (i.e., JSC = 4.6 mA·cm−2, VOC = 476 mV, PCE = 0.86%): these values are 1000 and 300 times greater than those of comparable devices in terms of current density and efficiency, respectively. Further experiments are still required to clarify these results; the authors claimed better charge separation and faster charge transport/injection ensured by the highly oriented MOF structure.
As noted, MOFs can also be employed as precursors in the synthesis of MOF/TiO2 composites acting as both anode and sensitizer [92,125,141,142]. Khajavian and Ghani reported the deposition of polycrystalline, highly oriented copper-based MOF thin films (i.e., [Cu2(bdc)2(bpy)]n) onto a mesoporous titania surface [128]. The MOF was synthesized by a layer-by-layer approach, and the shape evolution of the MOF crystals within the films can be regulated using acetic acid and pyridine as capping agents added to the metal solution: the former drives the growth and orientation of the MOF crystals, whereas the latter yields leaf-structured crystals. Even though this approach is very attractive for its cheapness and scalability, implementing the MOF/TiO2 composite in a DSSC led to very poor photoconversion efficiency (below 0.1%), mainly because of the thinness of the deposited layer (below 100 nm), giving insufficient light absorption, and the non-optimized MOF/TiO2 interface. Nevertheless, further engineering of the deposition method could lead to improved photoconversion performance.

As already evidenced in the previous section, metal-organic frameworks are complex materials with highly tunable properties, and this tunability can be exploited to design structures with tailored features. In this context, Du et al. investigated the effect of different ions, i.e., K(I) and In(III), during the growth of single-crystal heterometal-organic frameworks used as sensitizers in DSSCs [143]. The two MOFs were [In0.5K(3-qlc)Cl1.5(H2O)0.5]2n and [InK(ox)2(H2O)4]n (with 3-Hqlc = quinoline-3-carboxylic acid; H2ox = oxalic acid). The MOFs were then implemented in a co-sensitized DSSC (together with a classical dye such as N719) in order to extend the absorption of the solar cell into the ultraviolet and blue-violet region. The energetic matching between the states of the MOF and TiO2 extends the photoelectrochemical response of the device below 350 nm, and a photoconversion efficiency higher than 8% was achieved. Additionally, the investigated MOFs exhibit tunable luminescence (from blue to yellow to white) as a function of temperature; these very interesting features could open the door to the application of MOFs in other fields, e.g., light-emitting electrochemical cells (LEECs) [144] or LEDs [145], respectively [146].

Even though MOFs have proven to be effective photosensitizer materials, the specific role and potential effect of unreacted MOF precursor (considering the in-situ growth of the material) on device performance and stability had not been elucidated. To clarify this point, Spoerke and co-workers isolated MOF crystals to be anchored as photosensitizers onto the photoanode surface [139]. In more detail, they synthesized a Zn-based pillared porphyrin framework (PPF) MOF by a solvothermal method (heating in a closed vessel at 80 °C for 72 h) from a solution of zinc(II) meso-tetrakis(4-carboxyphenyl)porphyrin (Zn-TCPP), Zn(NO3)2·4H2O and 4,4′-bipyridine in diethylformamide (DEF) and ethanol.
To rule out the effect of any unreacted precursor in the sensitization solution, they prepared MOF crystals with and without a known excess of linker. The MOFs thus obtained were implemented as the sole sensitizer in complete TiO2-based DSSCs, giving an efficiency of 0.0023% (±0.0003%) for the PPF-MOF-sensitized solar cell versus 0.0011% (±0.0002%) for unsensitized TiO2. It is worth mentioning that, up to now, MOFs in the sensitizer role have mainly been employed in liquid-junction DSSCs. Ahn et al. reported, for the first time, a MOF-sensitized solid-state photovoltaic cell [147]. In more detail, they investigated conductive cobalt-based MOFs (Co-DAPV) consisting of Co(II) ions and a redox-active di(3-diaminopropyl)-viologen (DAPV) ligand. This MOF was proven to match the energy levels of TiO2 effectively, and the power conversion efficiency of the solid-junction solar cell was measured at up to 2%, higher than its liquid-junction counterparts, opening the door to further investigation of this topic (Figure 7).

In a classical DSSC, the role of the sensitizer is played by an organic or metal-organic molecular dye. Yet a different approach has recently been explored, consisting of the use of inorganic quantum dots (QDs) as effective sensitizers in DSSCs [12,148]. QDs offer various advantages, such as multiple-exciton generation, surface effects, quantum-size effects and tunnelling effects, that can hardly be matched by a molecular dye [148-150]. Additionally, they are much more stable, given their intrinsically inorganic nature. Nevertheless, once implemented in complete devices (i.e., QD-sensitized solar cells, QDSSCs), these materials ensure lower photoconversion efficiency because they possess a relatively low molar extinction coefficient and a rather sharp absorption spectrum; furthermore, surface states behaving as trap states partially prevent effective charge injection [150], and QDs are difficult to anchor onto mesoporous TiO2, leading to inhomogeneous coverage of the electrode surface [151]. In this context, MOFs can be advantageously employed to improve the light-harvesting efficiency through energy transfer from QDs to MOFs. Jin and co-workers reported the synthesis of a porphyrin-based MOF functionalized with CdSe/ZnS core/shell quantum dots and its implementation in complete devices [152]. The broad absorption band of the QDs in the visible region suggested excellent coverage of the solar spectrum by the QD-MOF hybrid structures: indeed, the QD-MOF hybrids allowed photons to be harvested even at wavelengths where the MOF shows little or no absorption, and very high quantum efficiencies (up to 84%) were obtained by tuning the size of the QDs. Quite recently, Kaur et al. employed a europium-based MOF as a co-sensitizer in quantum-dot DSSCs (QD-DSSCs) [153]; in detail, the Eu-MOF acted as a supporting layer for CdTe QDs.
Remarkably, implementing the CdTe@Eu-MOF sensitizer in a complete device significantly decreased charge recombination, improving the photoconversion efficiency by nearly 80% relative to the device without the Eu-MOF (3.0% vs. 1.7%). The same authors then replaced the Eu-based MOF with a Ti-based one (namely NTU-9), which broadened the absorption spectrum and gave quite good results [154]. Even though they recorded a very high photocurrent density (up to 23 mA·cm−2), the devices suffered from a very low FF, below 30%. Therefore, before further exploiting these materials as sensitizers in QD-DSSCs, the anode/sensitizer/electrolyte interface needs to be optimized. In Table 2, the most effective MOFs implemented as sensitizers in DSSCs are summarized.

Metal-organic frameworks can also be employed to synthesize composite photoanode materials for DSSCs. In a recent study, a sol-gel-synthesized TiO2 aerogel-MOF nanocomposite (the MOF being the Zn-based [Zn(N-(4-pyridylmethyl)-L-valine·HCl)(Cl)](H2O)2) was used as the photoanode material in quasi-solid dye-sensitized solar cells (QSDSSCs) [124]. Thanks to their high surface area, MOFs can load dye molecules effectively. Yet, even though the dye loading was higher than that of a plain TiO2 electrode, once implemented in a QSDSSC the MOF-modified electrode showed a photoconversion efficiency of up to 2.34%, lower than that of the classical photoanode (i.e., 3.0%), because of rather inefficient charge collection; indeed, the presence of the MOF slows the diffusion of photoinjected charges through the photoanode because the Zn ions can behave as trap states. A different approach, proposed by Li and co-workers [125], consists of using MOFs as a scattering layer in ZnO-based DSSCs: hierarchical ZnO parallelepipeds were produced from a MOF-5 precursor, and implementing such a scattering layer improved the cell efficiency from 3.15% to 3.67%.

Table 2. MOFs effectively employed as sensitizers in DSSCs. For the sake of brevity, the metal and the organic linker are reported separately. The reported photoconversion efficiencies cannot be compared to one another, yet they shine light on the potential of this class of materials.

Counter-Electrode (CE)

Besides the photoanode and the sensitizer, the counter electrode (CE) plays a crucial role in a DSSC because it must quantitatively regenerate the redox mediator while avoiding current losses at the CE/electrolyte interface. To behave properly, a CE should ensure a wide surface area, good contact with the electrolytic solution and fast charge-transfer kinetics. For this role, scientists have mainly focused their attention on platinum: as a thin layer, platinum is an extraordinary catalyst for regenerating the redox mediator at the CE of DSSCs [155,156]. On the other hand, Pt is a noble metal, very rare on Earth and relatively expensive (with large-scale production in mind). Therefore, in order to reduce the cost of CEs, several studies have reported the synthesis and implementation of Pt-free counter electrodes.
These include graphene [157], carbon nanotubes [158], metallic PEDOT [159] and transition-metal sulphides [160]. The most promising class of alternative materials is that of metal sulphides, owing to their low production cost, wide availability, high stability and durability [160-162]. In this context, cobalt sulphide (CoS) is one of the most exploited materials in both n-type [160] and p-type DSSCs [163]. For example, Grätzel and co-workers electrochemically deposited CoS counter electrodes, obtaining a photoconversion efficiency of up to 6.5%. Building on this, Hui Hsu and co-workers obtained the highest power conversion efficiency ever reported for CoS CEs (i.e., nearly 8.1%) by preparing CoS nanoparticles via a surfactant-assisted synthesis in which ZIF-67 (a cobalt-based MOF, namely Co(mim)2, mim = 2-methylimidazolate) was used as a template. The relatively high efficiency is consistent with an increase in both the VOC and FF values, proving that the reported synthetic method can be considered a promising approach [164]. ZIF-67 is a common MOF in which cobalt centers are linked through four nitrogen atoms (Co-N4) into tetrahedral frameworks [165]. A similar approach was employed by Liu and co-workers to obtain MoS2@Co3S4 composites, with ZIF-67 serving as both template and cobalt source [95]. The composites showed a promising synergistic effect in the catalysis of triiodide reduction: dye-sensitized solar cells fabricated with MoS2@Co3S4 reached a PCE of 7.86%, slightly higher than their Pt-based counterpart, making this composite a promising candidate as an efficient, low-cost counter-electrode material. Selenization or sulfurization of ZIF-67 is also a feasible route to Co-based selenide (sulphide)/N-doped carbonaceous hybrid materials. Wang and co-workers synthesized CoSe2/NC and CoS2/NC (NC = N-doped carbon) electrocatalytic films as counter electrodes in DSSCs [166]. These composites were proven to ensure high catalytic activity and high conductivity, and a PCE of 9% was achieved; interestingly, CoSe2/NC outperformed both Pt-based and CoS2/NC-based CEs (8% and 8.7%, respectively).
It is worth mentioning that this work opens the door to the use of selenium-based materials in place of the more commonly exploited sulphur-based counterparts (Figure 8). More recently, Wu et al. reported CoSe2-D as an effective MOF-derived CE [96]: they synthesized the MOF precursor directly on FTO glass via electrodeposition followed by a solvothermal process and then selenized it in situ. The film thus obtained is characterized by a porous structure, allowing easier diffusion of the iodine-based redox couple and, consequently, a very promising PCE (close to 7%).

It is worth mentioning that MOFs can also be very useful during the preparation step even when not directly embodied in a cell component. A new class of MOFs is represented by surface-anchored metal-organic framework thin films (SURMOFs); they have received substantial attention for electrical applications, showing enhanced catalytic activity and higher electrical conductivity than classical MOFs when employed as CEs in DSSCs. Very recently, Ou et al. presented a facile method to prepare a CoS@N-doped carbon counter electrode starting from a cobalt-based MOF (namely PIZA-1) [167]: briefly, partial sulfurization of the MOF led to a transparent thin film that acted as the cathodic material in a conventional DSSC (Figure 9). They observed that the CoS@N-doped carbon film ensured extremely good catalytic performance toward triiodide reduction as a result of the synergistic activity of homogeneous CoS nanoparticles, strong adhesion to the glass substrate and low resistance at the interface with the FTO layer. These features resulted in a photoconversion efficiency of 9.11% (compared to 8.04% with a conventional Pt-based CE). Tian et al. also presented Pt-doped metal-porphyrin framework thin films for efficient bifacial dye-sensitized solar cells [168]: a transparent 2D MOF (Zn-TCPP) thin film was grown on FTO glass by a van der Waals liquid-phase epitaxial approach. The highly Pt-dispersed Zn-TCPP (Zn-TCPP-Pt) thin-film CE showed catalytic activity similar to that of conventional Pt-based CEs but better light transmission; its power conversion efficiency was measured at 5.48% and 4.88% under front-side and rear-side irradiation, respectively. To overcome the poor conductivity and limited chemical stability of CoS2, Cui et al. [169] exploited a groundbreaking approach, loading CoS2 into carbon nanocages using ZIF-67 as a surfactant. They tested different treatment times and, after 4 h, obtained the highest photovoltaic conversion efficiency (8.20%), higher than those of DSSCs made with other CoS2 CEs or with Pt (7.88%). Zhong and co-workers applied the same concept to quantum-dot DSSCs (QD-DSSCs) [170]: they grew a crystalline ZIF-67 framework using layered double hydroxides (LDHs) as a scaffold, with Co,N co-doped porous carbon as the active material. The latter showed very good electrocatalytic activity in the reduction of the polysulphide-based electrolyte, leading to a PCE higher than 13.5% (JSC = 25.93 mA/cm2, VOC = 0.778 V, FF = 0.672).
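As a consistency check, the J-V parameters quoted for this QD-DSSC reproduce the reported efficiency, assuming the standard 100 mW·cm−2 (1 Sun) input:

```python
# J-V parameters quoted above for the Zhong et al. cell [170].
jsc, voc, ff = 25.93, 0.778, 0.672   # mA/cm^2, V, dimensionless

pce = jsc * voc * ff                  # in mW/cm^2, numerically equal to % at 1 Sun
print(f"PCE = {pce:.2f}%")            # -> 13.56%, matching the reported >13.5%
```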
By employing a similar template-based approach, Li and co-workers reported MOF-derived Co,N-bidoped carbons as effective CEs in quantum-dot DSSCs (QD-DSSCs) [171]. A bimetallic (i.e., Zn and Co) zeolite-type MOF was pyrolyzed to obtain the desired doped material, showing a homogeneous dispersion of the doping atoms in the carbon matrix. Additionally, this method afforded a large surface area and good electronic properties, leading to an overall photoconversion efficiency of up to 9% (i.e., 9.12%, VOC = 0.635 V, JSC = 26.2 mA·cm−2, FF = 0.55). ZIF-8 ensures high electrocatalytic performance thanks to its microporous structure and wide surface area; yet, micropores can substantially limit mass transfer, especially when bulky redox mediators (e.g., Co or Cu complexes) are employed. To overcome this problem, Kang et al. modulated the dwelling time of Zn in ZIF-8 [172]: a longer dwelling time led to the formation of a mesoporous structure without appreciably affecting the active area. A photoconversion efficiency of up to 9% was reached when these CEs were implemented in complete devices with an I−/I3− redox mediator.
This evidence proves that the catalytic activity toward triiodide also improves on going from micro- to mesoporous electrodes. It is worth mentioning that a serious issue to be solved in enhancing the photoconversion efficiency of QDSSCs concerns the development of electrocatalysts with more active sites, high conductivity and good photoelectrochemical and thermal stability. Very recently, Wang et al. reported the implementation of copper NPs@carbon nanorod (Cu@CNR) composites synthesized by direct pyrolysis of HKUST-1 (a copper-based MOF) [173]. Cu@CNR showed high catalytic activity toward polysulfide reduction: this approach affords a large number of CuxS active sites whose charge-transfer ability is enhanced by the CNRs acting as a supporting material, leading to an overall photoconversion efficiency close to 10%. In Table 3, the most effective MOFs implemented as counter electrodes in DSSCs are summarized.

Table 3. MOFs effectively employed as counter electrodes in DSSCs. For the sake of brevity, the metal and the organic linker are reported separately. The reported photoconversion efficiencies cannot be compared to one another, yet they shine light on the potential of this class of materials.
Usually, MOF-derived materials are obtained by direct carbonization/sulfurization; the nanoparticles thus synthesized tend to agglomerate and collapse, which can decrease the effective surface area and create a physical barrier to electrolyte diffusion [177]. Porous 1D nanomaterials are known as some of the most promising electrocatalytic materials in energy-related applications [178,179]. Unfortunately, the synthesis of 1D materials starting from individual MOFs is very challenging because the typical porous structure of MOFs promotes fast nucleation and growth; consequently, attention has turned to template-assisted tubular structures such as nanotubes (NTs). Very recently, metal selenides have also been employed as CE materials in place of the more commonly exploited sulphides. Li et al. optimized a facile synthetic approach to hierarchical hollow CoSe nanoparticles embedded in nitrogen-doped porous carbon anchored onto nitrogen-doped carbon nanotubes (CoSe@NPC/NCNTs) as a CE material in DSSCs [180]. They used a cobalt-containing MOF (i.e., ZIF-67) and polypyrrole (PPy) as template and nitrogen source, respectively, the ZIF nanocrystals growing along the PPy surfaces to form core-shell nanotubes. The tubular structure and wide surface area of this material boosted electron transfer and improved charge mobility throughout the NTs, leading to good photoconversion efficiency (η = 7.6%, compared to 7.3% with Pt-based CEs).

It is worth mentioning that MOFs can also be used directly as counter electrodes in place of Pt, since they can themselves act as electrocatalysts for triiodide reduction. In a pivotal work, Chen et al. reported the synthesis of a carbon cloth/MOF composite film (the MOF being MOF-525 combined with sulfonated poly(thiophene-3-[2-(2-methoxyethoxy)ethoxy]-2,5-diyl) (s-PT), i.e., MOF-525/s-PT) deposited on a flexible substrate [175]. They proved that, on the one hand, the carbon fibres in the CE support electron transfer, behaving as a conductive core, whereas, on the other hand, MOF-525/s-PT acts as the effective catalyst for the reduction of I3−.
The investigated MOF-525/s-PT composite 9 Usually, MOFs-derived materials are obtained by direct carbonization/sulfurization methods; indeed, the so-synthetized nanoparticles tend to agglomerate and collapse. This might cause the decrease of the effective surface area and the growth of a physical barrier for the electrolyte diffusion [177]. Porous 1D nanomaterials are known as one of the most promising electrocatalytic materials in energy-related applications [178,179]. Unfortunately, synthesis of 1D structure materials starting from individual MOFs is very challenging due to fast nucleation and growth assured by the presence of the MOF typical porous structure. Straightforwardly, scientists' attention was focused on 2D structures, such as nanotubes (NTs). Very recently, metal selenides have been employed as CE materials in place of the more exploited sulphides. Li et. al. optimized a facile synthetic approach to obtain hierarchical hollow CoSe nanoparticles embodied into nitrogen-doped porous carbon anchored onto nitrogen-doped carbon nanotubes (CoSe@NPC/NCNTs) as CE material in DSSCs [180]. They used a cobalt-containing MOF, i.e., ZIF-67, and polypyrrole (PPy) as template and nitrogen source, respectively, to obtain the ZIF nanocrystals growing along the surfaces of PPy to constitute core-shell nanotubes. The tubular structure and the wide surface area of these material boosted the electron transfer and improved charge mobility throughout the NT leading to good photoconversion efficiency (h = 7.6% compared to 7.3% with Pt-based CEs). It is worth mentioning that MOFs could be directly used as counter-electrode in place of Pt because they could somehow play the role of electrocatalyst toward the reduction of triiodide. In a pivotal work, Chen et al. reported on the synthesis of a carbon cloth/MOF (i.e., sulfonated poly(thiophene-3-[2-(2-methoxyethoxy)ethoxy]-2,5-diyl) (s-PT), MOF-525/s-PT) composite film deposited on a flexible substrate [175]. They proved that, on the one hand, carbon fibres in CE supported electron transfer behaving as a conductive core, whereas, on the other hand, MOF-525/s-PT acts as the effective catalyst for the reduction of I3 − . The investigated MOF-525/s-PT composite 9.30 [176] Usually, MOFs-derived materials are obtained by direct carbonization/sulfurization methods; indeed, the so-synthetized nanoparticles tend to agglomerate and collapse. This might cause the decrease of the effective surface area and the growth of a physical barrier for the electrolyte diffusion [177]. Porous 1D nanomaterials are known as one of the most promising electrocatalytic materials in energy-related applications [178,179]. Unfortunately, synthesis of 1D structure materials starting from individual MOFs is very challenging due to fast nucleation and growth assured by the presence of the MOF typical porous structure. Straightforwardly, scientists' attention was focused on 2D structures, such as nanotubes (NTs). Very recently, metal selenides have been employed as CE materials in place of the more exploited sulphides. Li et. al. optimized a facile synthetic approach to obtain hierarchical hollow CoSe nanoparticles embodied into nitrogen-doped porous carbon anchored onto nitrogen-doped carbon nanotubes (CoSe@NPC/NCNTs) as CE material in DSSCs [180]. They used a cobalt-containing MOF, i.e., ZIF-67, and polypyrrole (PPy) as template and nitrogen source, respectively, to obtain the ZIF nanocrystals growing along the surfaces of PPy to constitute core-shell nanotubes. 
The tubular structure and the wide surface area of these material boosted the electron transfer and improved charge mobility throughout the NT leading to good photoconversion efficiency (h = 7.6% compared to 7.3% with Pt-based CEs). It is worth mentioning that MOFs could be directly used as counter-electrode in place of Pt because they could somehow play the role of electrocatalyst toward the reduction of triiodide. In a pivotal work, Chen et al. reported on the synthesis of a carbon cloth/MOF (i.e., sulfonated poly(thiophene-3-[2-(2-methoxyethoxy)ethoxy]-2,5-diyl) (s-PT), MOF-525/s-PT) composite film deposited on a flexible substrate [175]. They proved that, on the one hand, carbon fibres in CE supported electron transfer behaving as a conductive core, whereas, on the other hand, MOF-525/s-PT acts as the effective catalyst for the reduction of I 3 − . The investigated MOF-525/s-PT composite counter electrode allowed to obtain an amazing cell efficiency, as high as 9.75% that is higher than traditional Pt-based CE (8.21%). Very interestingly, Liu and co-workers employed bimetallic (i.e., nickel and cobalt) MOFs as counter-electrode ( Figure 10) [176]. More in details, if compared to the one obtained with conventional Pt-CEs. The above-mentioned results should be considered just as a starting point, yet they proved that MOF (or at least MOF supported on carbonaceous or plastic material) could be seriously investigated as an effective low-cost and feasible alternative to platinum as counter-electrodes in DSSC. Covalent Organic Frameworks (COFs) in DSSCs Differently from MOFs, just few examples have been reported concerning the implementation of covalent organic frameworks in photovoltaic devices. Most of the published studies concern the investigation of their photoelectrochemical properties, nevertheless, we decide to tackle the latter being an interesting starting point for further optimization. COFs as Photoactive Materials Jiang et al. reported, for the first time, the synthesis of a photoconductive COF (i.e., PPy-based COF) obtained by self-condensation of pyrene diboronic acid to constitute a boroxine-linked COF [180]. This was obtained as micrometric cubic crystals. They measured the photoconductivity of PPy-COF by evaporating a thin film onto an Al electrode and then covered the COF with an Au layer. If irradiated with a xenon lamp (in the visible region) the COF-modified Al-electrode showed a linear I-V response. Additionally, the on-off ratio was not modified even after multiple switching procedures. As already evidenced in the MOF section, the most appealing feature to further develop covalent organic frameworks in DSSCs is the higher stability assured with respect to conventional dyes as well as the opportunity to tune both their photo-and electrochemical properties. Another photoactive COF was presented by Ding and co-workers: they synthesized COF using Ni-phthalocyanine through condensation reaction with 1,4-benzenediboronic acid (BDBA) [181]. This COF, implemented in an electrode, exhibits a photocurrent of 3 μA when irradiated with a xenon lamp. In a subsequent study, the same authors discovered that pthalocyanine-based n-channel 2D-NiPc-BTDA COFs allowed a faster transport of electrons due to the AA-type stacking. Additionally, the latter absorbs light over a wide range of wavelengths up to 1000 nm [182]. 
In place of phthalocyanine COFs, porphyrin-based ones were made by condensation reaction with benzene diboronic acid (DBA) to obtain photoactive COFs with different electron and hole mobility following on from the nature of the metal in the porphryin building block [183]. There are various examples of conductive and photoactive COFs. Nevertheless, their assemblage into complete devices is very challenging due to their insolubility being hardly coated (homogeneously) onto the surface of electrodes or conductive substrates. Following on from this evidence, a feasible approach to avoid the above-mentioned issues, is to directly grow the COF onto the electrode surface. Indeed, Dichtel and co-workers were able to grow oriented COF thin films by using 1,4-phenylenebis(boronic acid) (PBBA) with 2,3,6,7,10,11-hexahydroxytriphenylene (HHTP) Covalent Organic Frameworks (COFs) in DSSCs Differently from MOFs, just few examples have been reported concerning the implementation of covalent organic frameworks in photovoltaic devices. Most of the published studies concern the investigation of their photoelectrochemical properties, nevertheless, we decide to tackle the latter being an interesting starting point for further optimization. COFs as Photoactive Materials Jiang et al. reported, for the first time, the synthesis of a photoconductive COF (i.e., PPy-based COF) obtained by self-condensation of pyrene diboronic acid to constitute a boroxine-linked COF [180]. This was obtained as micrometric cubic crystals. They measured the photoconductivity of PPy-COF by evaporating a thin film onto an Al electrode and then covered the COF with an Au layer. If irradiated with a xenon lamp (in the visible region) the COF-modified Al-electrode showed a linear I-V response. Additionally, the on-off ratio was not modified even after multiple switching procedures. As already evidenced in the MOF section, the most appealing feature to further develop covalent organic frameworks in DSSCs is the higher stability assured with respect to conventional dyes as well as the opportunity to tune both their photo-and electrochemical properties. Another photoactive COF was presented by Ding and co-workers: they synthesized COF using Ni-phthalocyanine through condensation reaction with 1,4-benzenediboronic acid (BDBA) [181]. This COF, implemented in an electrode, exhibits a photocurrent of 3 µA when irradiated with a xenon lamp. In a subsequent study, the same authors discovered that pthalocyanine-based n-channel 2D-NiPc-BTDA COFs allowed a faster transport of electrons due to the AA-type stacking. Additionally, the latter absorbs light over a wide range of wavelengths up to 1000 nm [182]. In place of phthalocyanine COFs, porphyrin-based ones were made by condensation reaction with benzene diboronic acid (DBA) Energies 2020, 13, 5602 22 of 48 to obtain photoactive COFs with different electron and hole mobility following on from the nature of the metal in the porphryin building block [183]. There are various examples of conductive and photoactive COFs. Nevertheless, their assemblage into complete devices is very challenging due to their insolubility being hardly coated (homogeneously) onto the surface of electrodes or conductive substrates. Following on from this evidence, a feasible approach to avoid the above-mentioned issues, is to directly grow the COF onto the electrode surface. 
Indeed, Dichtel and co-workers were able to grow oriented COF thin films by using 1,4-phenylenebis(boronic acid) (PBBA) with 2,3,6,7,10,11-hexahydroxytriphenylene (HHTP) (to produce COF-5) and Ni-phthalocyanine-PBBA COF by incorporating single-layer graphene (SLG) supported on copper, silicon carbide, and silicium dioxide substrates under operationally simple solvothermal conditions [184]. Remarkably, COF layers deposited onto SLG showed improved crystallinity if compared to COF powders. The Ni-phthalocyanine-PBBA-based COF on SLG/SiO 2 films absorbed strongly over the visible range of the spectrum. Therefore, porous phthalocyanine COFs are depicted as suitable candidates to be implemented in organic photovoltaics. Jiang and co-workers synthetized fullerene-loaded CS-COF, a conductive and chemically stable COF obtained by the co-condensation of triphenylene hexamine (TPHA) and tert-butylpyrene tetraone (PT) [185]. Experimentally, the dispersion of CS-COF*C60 in N-methyl-2-pyrrolidone was performed by stirring the solution at 80 • C under argon flux for 1 week. Then, a mixture of PCBM in o-dichlorobenzene (40 mg·mL -1 ) and the obtained suspension (40 mg·mL -1 for CS-COF·C60) were spin coated (1000 rpm, 30 s) onto an ITO substrate for organic solar cells. They built up a 1 cm 2 sandwiched device with a Al/poly(methyl methacrylate (PMMA) as a glue called CSCOF*C60/Au cell geometry. This device supplies a power conversion efficiency of 0.9% with a very large open circuit voltage of 0.98 V. Very interestingly, conductivity measurements, performed by flash photolysis time-resolved microwave method (FP-TRMC), evidence that CS-COF is one of the best hole-transporting organic semiconductors ever reported having a hole-conducting mobility of 4.2 cm 2 ·V −1 ·s −1 . Bein and co-workers successfully obtained thiophene-based COF (TT-COF) to be implemented in a photo-voltaic device by co-condensing thieno-[3,2-b]-thiophene-2,5-diyldiboronic acid (TTBA) and hexa-hydroxytriphenylene (HHTP) [186]. They prepared a thin film COF that was employed as photoactive material to produce a photovoltaic cell (i.e., ITO/TT-COF:PCBM/Al) with an overall efficiency up to 0.05% (OCV = 0.62 V). As a result, designing a COF with larger pores and better packing of PCBM into the pores should raise the photoconversion efficiency and assure a better charge transfer [187]. Similarly, Cheng and co-workers synthesized a COF by condensing (2,3,9,10,16,17,23,24-(octahydroxyphthalocyanito) zinc (ZnPc[OH] 8 ) with a blend of BDBA or a BDBA-derivative that included a pendent azide moiety (N3-BDBA), they inserted covalently-bonded C60 units within the COF pores and proved that the charge was effectively transferred [188]. Bein et al. reported on the synthesis of an oriented thin COF film containing benzodithiophene units and loaded with C60. The synthesis was carried out by co-condensing benzo[1,2-b:4,5-b']dithiophene-2,6-diyl diboronic acid (BDTBA) and HHTP under solvothermal conditions onto an ITO-coated glass substrate. Then two different solutions (i.e., [60] PCBM and [70] PCBM) were spin coated onto the COF-modified electrode. Thin BDT-COF films presented two important optical absorbance bands in the UV spectral region. Furthermore, the oriented BDT-COF films played a host role for different fullerene-based acceptors molecules. The photoluminescence of the BDT-COF film resulted to be quenched when these acceptors were loaded into COF and this evidence confirmed that charge transfer is taking place [189]. 
Covalent organic frameworks are also synthetized as donor-acceptor building blocks; the obtainment of such COFs with a good crystallinity degree is highly challenging. Indeed, a high crystallinity diminishes the occurrence of internal charge recombination that is detrimental for the photoconductive features of the material. Jiang and co-workers successfully obtained COFs made by columnar arrays of D-A blocks that exhibited vertically ordered p-n heterojunction leading to a remarkably enhanced photoconductivity without any additional dopants, as reported in Figure 11 [190]. Another donor-acceptor COF was obtained by Jin and co-workers. They synthesized DZnPc-ANDI-COF, based on zinc phthalocyanine as donor and naphthalene diimide as acceptor [191]. They also reported the substitution of the central metal ion, i.e., Zn, Cu or Ni to compare the physical properties of the different COFs [192]. When Zn is replaced by Cu or Ni the acceptor unit is still stable, and the charge separation lifetimes were very similar, even if the meta nature influenced the charge lifetimes. The copper-based COF achieved the longest lifetime, i.e., up to 33 μs. These results evidence how the thoughtful choice of the metal is a key parameter in the design of an effective D-A COF. In Table 4, the most effective COFs implemented in DSSCs are summarized. Figure 11. Schematic representation of 2D D-A COF with self-sorted and periodic electron donor-acceptor ordering and bicontinuous conducting channels (right: structure of one hexagon; left: a 3 × 3 grid). Reproduced with permission from [190]. Another donor-acceptor COF was obtained by Jin and co-workers. They synthesized DZnPc-ANDI-COF, based on zinc phthalocyanine as donor and naphthalene diimide as acceptor [191]. They also reported the substitution of the central metal ion, i.e., Zn, Cu or Ni to compare the physical properties of the different COFs [192]. When Zn is replaced by Cu or Ni the acceptor unit is still stable, and the charge separation lifetimes were very similar, even if the meta nature influenced the charge lifetimes. The copper-based COF achieved the longest lifetime, i.e., up to 33 µs. These results evidence how the thoughtful choice of the metal is a key parameter in the design of an effective D-A COF. In Table 4, the most effective COFs implemented in DSSCs are summarized. columnar arrays of D-A blocks that exhibited vertically ordered p-n heterojunction leading to a remarkably enhanced photoconductivity without any additional dopants, as reported in Figure 11 [190]. Figure 11. Schematic representation of 2D D-A COF with self-sorted and periodic electron donoracceptor ordering and bicontinuous conducting channels (right: structure of one hexagon; left: a 3 × 3 grid). Reproduced with permission from [190]. Another donor-acceptor COF was obtained by Jin and co-workers. They synthesized DZnPc-ANDI-COF, based on zinc phthalocyanine as donor and naphthalene diimide as acceptor [191]. They also reported the substitution of the central metal ion, i.e., Zn, Cu or Ni to compare the physical properties of the different COFs [192]. When Zn is replaced by Cu or Ni the acceptor unit is still stable, and the charge separation lifetimes were very similar, even if the meta nature influenced the charge lifetimes. The copper-based COF achieved the longest lifetime, i.e., up to 33 μs. These results evidence how the thoughtful choice of the metal is a key parameter in the design of an effective D-A COF. 
In Table 4, the most effective COFs implemented in DSSCs are summarized. Table 4. COFs effectively employed in DSSC. The reported values of photoconversion efficiency could not be compared between each-others, yet they could shine light on the potentiality of this class of material. Name Building Blocks Perovskite Solar Cells (PSCs) In the last five years, organic-inorganic hybrid perovskite solar cells (PSCs) have gained great attention thanks to their high cell efficiency exceeding 20% [195][196][197][198]. PSCs are multilayer devices in which the perovskite film (behaving as light-harvester) is sandwiched between an electron transporting layer (ETL) and a hole transporting layer (HTL). In a n-i-p geometry the ETL (usually mesoporous or compact TiO 2 ) also acts as substrate for the deposition of the perovskite film. The HTM (e.g., a (un)doped organic polymer) is then deposited onto the PSK film and the device is finished with a metallic contact (Au, Ag . . . ) as shown in Figure 12 [199,200]. Perovskite solar cells are moderately cheap to produce (considering a low waste of materials in the scaling up) and can reach photoconversion efficiency comparable as those of commercially available silicon-based devices [5,[203][204][205][206][207]. Indeed, their efficiency has rapidly raised from 3.8% [208] in 2009 to 23.3% in 2018 in single-junction architecture and to 27.3% in silicon/PSK tandem devices [5] being the latter more efficient than single junction counterparts. Indeed, perovskite solar cells are the fastest-growing solar technology up to today [209]. The possibility of creating low cost, thin, high efficiency, lightweight and flexible solar cells make them really fascinating. Despite their major advantages, some issues could partially prevent the large-scale production of this class of solar cells: the use of lead is being debated today due to its toxicity and environmental harmfulness; the poor stability in the atmosphere due to degradative interaction with both oxygen and water and the partial photothermal instability of the perovskite layer itself [210]. In this context, the optimization of the ETL layer and its interface with perovskite film is of great importance to ameliorate the photostability of the device. Among different strategies [211], the employment of MOF-modified ETL could be a low-cost and feasible route. Three-dimensional perovskite structures The general formula of hybrid perovskite formula is ABX 3 in which A is methyl ammonium or another monovalent cation, B is Pb (II) or Sn (II), and X is Cl − , Br − , or I − . If lead is the bivalent cation, methyl ammonium the monovalent cation and I − the anion, we obtain a methylammonium lead iodide perovskite (MAPbI 3 ). In this material, the orbitals of lead and iodine contributes to the energetic level of the PSK (HOMO and LUMO) whereas the methylammonium cation does not give any contribution to the electronic properties of the absorbing layer, being involved in the establishment of the 3D network of the crystal, though. Straightforwardly, MA deeply influences the optical properties of the PSK layer. MAPbI 3 , and more broadly PSKs, has a direct bandgap of around 1.55 eV (absorption onset of about 800 nm) and large absorption coefficient in the visible range. So, this generates a high density of photoexcited charges and assures efficient light-harvesting [201,202]. 
Perovskite solar cells are moderately cheap to produce (considering a low waste of materials in the scaling up) and can reach photoconversion efficiency comparable as those of commercially available silicon-based devices [5,[203][204][205][206][207]. Indeed, their efficiency has rapidly raised from 3.8% [208] in 2009 to 23.3% in 2018 in single-junction architecture and to 27.3% in silicon/PSK tandem devices [5] being the latter more efficient than single junction counterparts. Indeed, perovskite solar cells are the fastest-growing solar technology up to today [209]. The possibility of creating low cost, thin, high efficiency, lightweight and flexible solar cells make them really fascinating. Despite their major advantages, some issues could partially prevent the large-scale production of this class of solar cells: the use of lead is being debated today due to its toxicity and environmental harmfulness; the poor stability in the atmosphere due to degradative interaction with both oxygen and water and the partial photothermal instability of the perovskite layer itself [210]. In this context, the optimization of the ETL layer and its interface with perovskite film is of great importance to ameliorate the photostability of the device. Among different strategies [211], the employment of MOF-modified ETL could be a low-cost and feasible route. Three-dimensional perovskite structures consisted of highly porous TiO 2 -MOF that have many advantages such as high stability, fast charge carrier, and high absorption coefficient [212] leading to the obtainment of photoactive ETLs. Up to now, just few examples of implementation of metal organic frameworks in PSCs have been reported. In the following subsection we briefly review them. MOFs in PSCs Vinogradov and co-workers implemented, for the first time, a crystalline metal-organic framework, namely MIL-125 (Ti 8 O 8 (OH) 4 -(O 2 C-C 6 H 4 -CO 2 ) 6 , as an active material in a perovskite solar cell [213]. They performed a single-step synthesis to obtain a TiO 2 -MOF composite during which water was added as a limiting reactant for Ti(OC 3 H 7 ) 4 causing a step-by-step growth of a hetero-phasic system. Initially, a titanium oxyhydroxide precipitate was formed and then it was converted into crystalline anatase by a hydrothermal treatment. By varying the initial concentration of precursor the MOF content in TiO 2 was tuned, being at least 5% w/w. The solid compound was washed twice with methanol and dried at 200 • C before being employed as ETL in a PSC. 3% MOF-modified TiO 2 was found to be the most efficient ETL, leading to a 6.4% photoconversion efficiency (being FAPbI 3 the PSK layer). The band gap of commercial TiO 2 is approximately 3.3 eV that could cause problems in exciting and injecting the electrons, which gives rise to inefficient electron transportation and then poor electrical conductivity. To improve the latter, Nguyen and Bark reported on the doping of TiO 2 with Co-metal organic framework by solvothermal method [214]. They obtained a PCE close to 16% by employing 1 wt% Co-doped TiO 2 shifting the band gap low to 2.4 eV. High crystallinity and proper morphology are two key parameters to obtain efficient PSCs; up to now, the deposition methods to obtain the wanted features are quite expensive, time-demanding and they required numerous synthetic steps [215][216][217][218]. To overcome these issues, Chang et al. 
tried to increase the crystallinity of PSK layer by embedding the latter in a MOF nano-crystalline (140 nm) matrix by using just one deposition step [219]. More in detail, they use a Zn-based MOF (i.e., MOF-525) [220] with a cubic structure to incorporate the perovskite crystals by driving a more efficient and homogeneous crystallization process. Once implemented in a complete PSCs, they obtain an overall conversion efficiency up to 12%. They performed chronoamperometric experiments to figure out the reason the difference in redox activity of the MOF-525 and metal-based MOF-525 thin films proving that the apparent diffusion coefficient (D app ) of the Zn-MOF-525 thin film is higher than that of the Co-MOF-525 thin film. Moreover, Co-MOF-525 thin film obtains higher D app than the MOF-525 thin film. As a result, it is observed that the Zn-MOF-525 thin film supplies faster charge hopping or faster ion diffusion compared to the Co-MOF-525 and MOF-525 thin films. Li et. al. presented for the first time ZIF-8 in DSCCs customizing the surface of a TiO 2 electrode and they succeeded to enhance the open-circuit voltage [93]. More recently, also used ZIF-8 in PSCs trying to correlate the quality of the MOF layer at the interface between ETL and PSK and the photovoltaic performance [221]. A thin interlayer of ZIF-8 was coated on the surface of mesoporous-TiO 2 (mp-TiO 2 ) to control the growth of a perovskite crystalline layer. When ZIF-8 was employed as additional layer at the interface between mp-TiO2 and the perovskite film, they obtain a substantial improvement of both crystallinity and morphology of the perovskite thin film that lead to a PCE close to 17% and higher than reference performances. PCE for the PSC with pure mp-TiO 2 as the ETL was 14.75% (J SC of 20.77 mA cm 2 , V OC of 1.00 V, FF of 0.71). On the other hand, the device with the ZIF-8-20 interfacial layer reached a better efficiency of 16.99% (J SC of 22.82 mA cm 2 , V OC of 1.02 V, FF of 0.73). A similar approach to enhance optical harvesting and electron extraction efficiency was recently reported by Zhang and co-workers, introducing a MOF-derived ZnO (MZnO) with dodecahedron porous structure, namely ZIF-8 [222]. The introduction of MOF-derived ZnO as ETL allows to reach more efficient electron extraction and to reduce trap state density leading to a lower electron-hole recombination. Thus, higher fill factor (0.74) and short-circuit current density (22.1 mA cm 2 ) were achieved (PCE = 18.1%). When comparing the effect of different sizes of M-ZnO on the performance of perovskite films and PSCs, the optimum was reached for a MOF particle size of about 120 nm. Additionally, the so-obtained devices showed almost no hysteresis effect, and performance attenuation in the ambient atmosphere over time was eliminated. Oriented microporous metal−organic framework (MOF) thin films made through a liquid phase technique for CH 3 NH 3 PbI 2 X (X = Cl, Br, and I) perovskite was reported by Chen et al. [223]; PbI 2 and CH 3 NH 3 X (MAX) precursors were introduced into MOF HKUST-1 (Cu 3 (BTC) 2 , BTC = 1,3,5-benzene tricarboxylate) thin film and allowed to crystallize in order to obtain extra small MAPbI 2 X (X = Cl, Br, and I) quantum dots with a diameter lower than 2 nm (i.e., comparable with the dimensions of the micropores in the MOF structure). The so-obtained hybrid structure was found to be quite irresponsive to air exposure having high stability. 
Unfortunately, the authors did not report the implementation of this PSK layer into a complete device. It is worth mentioning that just few studies about incorporation of MOFs into perovskite solar cells have been published so far. One of these studies reports on the implementation of an In-based MOF, i.e., [In 2 (phen) 3 Cl 6 ]·CH 3 CN·2H 2 O (namely In2), as hole transport material in PSCs. Its implementation allowed to finely tune the band alignment between the energy levels of PSK and HTM. Straightforwardly, the authors enhanced short-current density (from 19.53 to 21.03 mA cm −2 ), open circuit voltage (from 0.98 to 1.10) and FF (from 0.67 to 0.74) in comparison with reference devices leading to a significant increase in the cell efficiency from 12.8% to 15.8% [224]. The authors ascribed these unexpected results to both a decrement in the concentration of pin holes in the HTM and an increase in the light absorption of the device assured by the In2 behaving as a scattering layer. Additionally, the emission spectra of the employed MOF are partially superimposed to the absorption profile of the perovskite. In order to obtain flexible but high performing PSCs, the nanometric TiO 2 particle should be replaced by ultra-small ones that should require relatively low sintering temperature to not damage the flexible substrate (usually made of PET/ITO, with the polymeric substrate being unstable above 120 • C). In this context, Ti-based MOFs could be effectively employed being well-ordered materials constituted by Ti-oxo clusters and organic linkers [48,225]. Ryu and co-workers used nanocrystalline nTi-MOF (ca. 6 nm) as ETL in PSCs [226]. This MOF was proved to effectively assure the electrons transport throughout the ETL. Remarkably, the embodiment of PCBM into the MOF matrix further enhanced the film conductivity. Perovskite solar cells made by nTi-MOF showed good photoconversion efficiency both in rigid and flexible architecture, 18.9% and 17.4%, respectively. Very interestingly, the stability of the device was maintained after more than 700 bending cycles (i.e., 15.4%, 0.88 of its initial value). Therefore, this approach could be very promising to obtain flexible, efficient and stable PSCs. [227]: they obtained cakelike nanocrystals leading to a better crystallization of the perovskite film and reducing, in turns, the electron-hole pair recombination. The relative devices reached a PCE close to 13% and almost null hysteresis. The efficiency of perovskite/Zr-based MOF heterojunction in PSCs were investigated by means of two types of Zr-MOFs, and UiO-66 and MOF-808 was selected as a MOFs combination because of their chemical and moisture durability [228]. When MOFs were used as an interlayer (deposited onto the ETL one before the growth of the PSK layer) they drove the growth of the perovskite layer leading to a better crystallinity. UiO-66/MOF-808-modified PSCs exhibited power conversion efficiencies up to 17.0% and 16.6%, outperforming the control device (15.8%). Furthermore, both MOFs partially acted as UV-filters leading to a better photostability of the device. Indeed, the hybrid MOFs distribute over the perovskite grain boundary contributing a grain-locking effect to simultaneously passivate the defects and to strengthen the film's durability against moisture invasion. Over 70% of the initial PCE was maintained after being stored in air (25 • C and relative humidity of 60 ± 5%) for over 2 weeks. In Table 5, the most effective COFs implemented in PSCs are summarized. Table 5. 
Metal-organic frameworks implemented in perovskite solar cells. For sake of brevity, we prefer to report the metal and the organic linker separately. The reported values of photoconversion efficiency could not be compared between each-others, yet they could shine light on the potentiality of this class of material. The main role of the MOF is bold in the first column. Metals Organic Linker(s) MIL-125(Ti) nTi-MOF (ca. 6 nm) as ETL in PSCs [226]. This MOF was proved to effectively assure the electrons transport throughout the ETL. Remarkably, the embodiment of PCBM into the MOF matrix further enhanced the film conductivity. Perovskite solar cells made by nTi-MOF showed good photoconversion efficiency both in rigid and flexible architecture, 18.9% and 17.4%, respectively. Very interestingly, the stability of the device was maintained after more than 700 bending cycles (i.e., 15.4%, 0.88 of its initial value). Therefore, this approach could be very promising to obtain flexible, efficient and stable PSCs. Zhao et al. employed the same MOF (MIL-125) as ETL [227]: they obtained cakelike nanocrystals leading to a better crystallization of the perovskite film and reducing, in turns, the electron-hole pair recombination. The relative devices reached a PCE close to 13% and almost null hysteresis. The efficiency of perovskite/Zr-based MOF heterojunction in PSCs were investigated by means of two types of Zr-MOFs, and UiO-66 and MOF-808 was selected as a MOFs combination because of their chemical and moisture durability [228]. When MOFs were used as an interlayer (deposited onto the ETL one before the growth of the PSK layer) they drove the growth of the perovskite layer leading to a better crystallinity. UiO-66/MOF-808-modified PSCs exhibited power conversion efficiencies up to 17.0% and 16.6%, outperforming the control device (15.8%). Furthermore, both MOFs partially acted as UV-filters leading to a better photostability of the device. Indeed, the hybrid MOFs distribute over the perovskite grain boundary contributing a grain-locking effect to simultaneously passivate the defects and to strengthen the film's durability against moisture invasion. Over 70% of the initial PCE was maintained after being stored in air (25° C and relative humidity of 60 ± 5%) for over 2 weeks. In Table 5, the most effective COFs implemented in PSCs are summarized. Table 5. Metal-organic frameworks implemented in perovskite solar cells. For sake of brevity, we prefer to report the metal and the organic linker separately. The reported values of photoconversion efficiency could not be compared between each-others, yet they could shine light on the potentiality of this class of material. The main role of the MOF is bold in the first column. Metals Organic Linker(s) transport throughout the ETL. Remarkably, the embodiment of PCBM into the MOF matrix further enhanced the film conductivity. Perovskite solar cells made by nTi-MOF showed good photoconversion efficiency both in rigid and flexible architecture, 18.9% and 17.4%, respectively. Very interestingly, the stability of the device was maintained after more than 700 bending cycles (i.e., 15.4%, 0.88 of its initial value). Therefore, this approach could be very promising to obtain flexible, efficient and stable PSCs. Zhao et al. employed the same MOF (MIL-125) as ETL [227]: they obtained cakelike nanocrystals leading to a better crystallization of the perovskite film and reducing, in turns, the electron-hole pair recombination. 
The relative devices reached a PCE close to 13% and almost null hysteresis. The efficiency of perovskite/Zr-based MOF heterojunction in PSCs were investigated by means of two types of Zr-MOFs, and UiO-66 and MOF-808 was selected as a MOFs combination because of their chemical and moisture durability [228]. When MOFs were used as an interlayer (deposited onto the ETL one before the growth of the PSK layer) they drove the growth of the perovskite layer leading to a better crystallinity. UiO-66/MOF-808-modified PSCs exhibited power conversion efficiencies up to 17.0% and 16.6%, outperforming the control device (15.8%). Furthermore, both MOFs partially acted as UV-filters leading to a better photostability of the device. Indeed, the hybrid MOFs distribute over the perovskite grain boundary contributing a grain-locking effect to simultaneously passivate the defects and to strengthen the film's durability against moisture invasion. Over 70% of the initial PCE was maintained after being stored in air (25° C and relative humidity of 60 ± 5%) for over 2 weeks. In Table 5, the most effective COFs implemented in PSCs are summarized. Table 5. Metal-organic frameworks implemented in perovskite solar cells. For sake of brevity, we prefer to report the metal and the organic linker separately. The reported values of photoconversion efficiency could not be compared between each-others, yet they could shine light on the potentiality of this class of material. The main role of the MOF is bold in the first column. Metals Organic Linker(s) 48 In 19.47 [208] As one can see from Table 4, MIL-125 was employed in both the photoactive material [213] and in the ETL [226]. This is a meaningful proof of the versatility of metal-organic frameworks. It is worth mentioning that just few works have been presented on the implementation of MOFs in perovskite 15 48 In 19.47 [208] As one can see from Table 4, MIL-125 was employed in both the photoactive material [213] and in the ETL [226]. This is a meaningful proof of the versatility of metal-organic frameworks. It is worth mentioning that just few works have been presented on the implementation of MOFs in perovskite 18 48 In 19.47 [208] As one can see from Table 4, MIL-125 was employed in both the photoactive material [213] and in the ETL [226]. This is a meaningful proof of the versatility of metal-organic frameworks. It is worth mentioning that just few works have been presented on the implementation of MOFs in perovskite 17 [228] Hole Transport Material (HTM) In 48 In 19.47 [208] As one can see from Table 4, MIL-125 was employed in both the photoactive material [213] and in the ETL [226]. This is a meaningful proof of the versatility of metal-organic frameworks. It is worth mentioning that just few works have been presented on the implementation of MOFs in perovskite 18.51 [207] Hole Transport Material (HTM) In 48 In 19.47 [208] As one can see from Table 4, MIL-125 was employed in both the photoactive material [213] and in the ETL [226]. This is a meaningful proof of the versatility of metal-organic frameworks. It is worth mentioning that just few works have been presented on the implementation of MOFs in perovskite 19.47 [208] As one can see from Table 4, MIL-125 was employed in both the photoactive material [213] and in the ETL [226]. This is a meaningful proof of the versatility of metal-organic frameworks. 
It is worth mentioning that just few works have been presented on the implementation of MOFs in perovskite solar cells; yet, the photoconversion efficiency reached is very promising and there is plenty of room for implementation. In the last few years, growing attention was dedicated to the employment of MOFs and MOF-derivatives as doping agents to enhance the performances and the stability of conventional HTM, namely SPIRO-o-MeTAD. Aiming at this, the first attempts were made with indium-based MOFs: in 2019, Yang et al. reported In10 ([In 0.5 K (3-qlc) Cl 1.5 (H 2 O) 0.5 ] 2n ) as an effective oxidative host for SPIRO-o-MeTAD leading to a higher conductivity of the HTL and suppressing charge recombination at the interface with PSK [229]. As an added property, In10 increased the light response of PSCs due to its photoluminescence properties: this led to an overall efficiency of 17% (J SC = 24.3 mA cm −2 , V OC = 1 V, FF = 0.7). A parallel approach was proposed by the same group [230]: they employed In-Aipa, [In(Hipa-NH 2 ) 2 (ipa-NH 2 ) 2 ]·5H 2 O, a 2D indium-based MOF as an effective auxiliary additive (in conjunction with both Li-TFSI and TBP that are very useful to increase the quality of HTM but jeopardize the long-term stability of the device) in SPIRO-o-MeTAD [231,232]. The resulting devices reached a PCE close to 19% and, more remarkably, they retained over 85% of the initial PCE after 720 h of air exposure. This result can be promising in terms of highly efficient and long-term stable PSC devices. A further step toward more stable PSCs was made by the same group through the replacement of TBP with a different In-based MOF, namely In(HPyia)Cl 2 ]·CH 3 CN (In-Pyia) [233]. As a matter of fact, TBP, which is highly volatile liquid phase component, caused the aggregation, hydration, and ion penetration of lithium salts into the PSK layer. The In-Pyia-modified PSCs boosted power conversion efficiency up to 20% surpassing the typical devices including t-BP (18%). In addition, In-Pyia-based device achieved a longer stability if compared with the TBP-based counterpart maintaining 81% of the initial PCE after 16 days without any encapsulation (below 20% for control device). The examples analyzed above employed In-based MOFs. Other approaches involved lead [234] and copper-based [235] materials. Zeng et al. combined a 2D Pb-based MOF with Spiro-OMeTAD as innovative HTL in PSCs [234]. The authors evidenced a haloing orienting effect of the latter leading to a smoother interface with PSK, higher hydrophobicity and an upshifting of the energy level when compared to Spiro-O-MeTAD, leading to a 25% increase in PCE and a longer stability. Instead, Cu-based MOF was employed by Fan's group who proposed the first application of hybrid material as oxidant in the HTM layer [235]. They tested a hybrid polyoxometalate@metal-organic framework (POM@MOF) material, [Cu 2 (BTC)4/3-(H 2 O) 2 ] 6 [H 3 PMo 12 O 40 ] 2 for the oxidation of spiro-OMeTAD with Li-TFSI and TBP. POMs began to attract attentions in PSCs, owing to their strong electron-accepting and oxidation ability, [236] being able to oxidize spiro-OMeTAD in an inert condition, and increase conductivity and performance of device [237]. In this context, POM@Cu-BTC composite showed dual-functions during chemical doping spiro-OMeTAD increasing the hole mobility of the resulting HTM by a factor of 2 and an overall PCE close to 22%. 
COFs in PSCs Initially, researchers' efforts were focused on obtaining and implementing 2D COFs whereas more recently 3D COFs have also shown some interesting features. Concerning emerging photovoltaics, Wu et. al. has recently reported highly conjugated three-dimensional COFs based-on spirobifluorene and their employment in perovskite solar cells [238]. They employed these COFs as an additive into the perovskite layer. They obtained 3D ordered porous frameworks with an orthogonal configuration of biplanar spirobifluorene units as tetrahedral nodes. This structure presented several electron-transporting channels in the frameworks with highly ordered array by having rigid and long-range conjugated systems. When a SP-3D-COFs-mdified PSK layer was used in a complete device led to a power conversion efficiency up to 18.3% for SP-3D-COF 1 and 18.7% for SP-3D-COF 2 that are extremely higher if compared to reference device (PCE = 15.8%). More recently, Kuo et al. used 2D-COF based on a building block of tetraphenylethylene as a hole transport layer for the modification of both PTAA or NiO in inverted perovskite solar cells [239]. In the case of PTAA-based PSCs, the establishment of π-π interactions between COF and the HTL led to an amelioration of the photoconversion efficiency, approaching 20% ( Figure 13). Furthermore, the presence of COF interlayer seems to improve both the crystallinity and the morphology of the perovskite layer. A new approach for synthesis of 2D imine-based COFs was reported by Li et al. [240]: pyrene units, containing two different functional groups which are formyl and amino groups, were used and polymerized through self-assembly condensation reaction. They also obtained pyrene-COFs using two different pyrene units by co-condensation reactions. The former approach allows to obtain COFs exhibiting higher crystallinity as well as higher porous surface (BET Surface; 1200-1372 m 2 /g). This is actually a promising strategy to obtain COFs in a mild condition. This pyrene-COF was also used as a HTLs combining with PTAA in PSCs reaching a photoconversion efficiency as high as 6.36% (V OC of 0.76 V, J SC of 15.4 mA/cm 2 , and a FF of 54%). As far as we are aware, these are the only examples regarding the effective implementation of a covalent organic framework in perovskite solar cells. Yet, following on from the extremely good photoconversion efficiency, it could soon become the milestone for future improvements in this field. We are quite confident that the best is yet to come! A new approach for synthesis of 2D imine-based COFs was reported by Li et al. [240]: pyrene units, containing two different functional groups which are formyl and amino groups, were used and polymerized through self-assembly condensation reaction. They also obtained pyrene-COFs using two different pyrene units by co-condensation reactions. The former approach allows to obtain COFs exhibiting higher crystallinity as well as higher porous surface (BET Surface; 1200-1372 m2/g). This is actually a promising strategy to obtain COFs in a mild condition. This pyrene-COF was also used as a HTLs combining with PTAA in PSCs reaching a photoconversion efficiency as high as 6.36% (VOC of 0.76 V, JSC of 15.4 mA/cm 2 , and a FF of 54%). As far as we are aware, these are the only examples regarding the effective implementation of a covalent organic framework in perovskite solar cells. 
Yet, following on from the extremely good photoconversion efficiency, it could soon become the milestone for future improvements in this field. We are quite confident that the best is yet to come! Conclusions Throughout the present review, we discuss on the implementation of both metal-organic frameworks (MOFs) and covalent organic frameworks (COFs) in emerging photovoltaics, namely dye-sensitized solar cells (DSSCs) and perovskite solar cells (PSCs). Both MOFs and COFs have a unique structure with a great tunability of their organic and inorganic components that confer incredible chemical versatility. For this reason, they can be used in many areas: from capture, storage, separation, and conversion of gases to (photo)catalysis and drug delivery, from optoelectronic to sensors, and from magnetism and ferroelectricity to light harvesting and energy transfer. As far as the photovoltaic field is concerned, they do not have a unique role but, based on their photophysical and chemical properties, they could be effectively employed as photoactive material, electrodes or Conclusions Throughout the present review, we discuss on the implementation of both metal-organic frameworks (MOFs) and covalent organic frameworks (COFs) in emerging photovoltaics, namely dye-sensitized solar cells (DSSCs) and perovskite solar cells (PSCs). Both MOFs and COFs have a unique structure with a great tunability of their organic and inorganic components that confer incredible chemical versatility. For this reason, they can be used in many areas: from capture, storage, separation, and conversion of gases to (photo)catalysis and drug delivery, from optoelectronic to sensors, and from magnetism and ferroelectricity to light harvesting and energy transfer. As far as the photovoltaic field is concerned, they do not have a unique role but, based on their photophysical and chemical properties, they could be effectively employed as photoactive material, electrodes or charge carriers, accomplishing mainly all the single components of a PV device. So, we do not feel to propose new applications but the main idea that should come out from this review is that there is still a large space for the implementation of the single PV device component with MOF-and COF-based materials (in particular as photoactive material or as hole transporting material in PSCs or photoanodic material or solid-state sensitizers in DSSCs). Indeed, based on their photophysical and chemical properties, they could be effectively employed as photoactive materials, electrodes or charge carriers. Indeed, MOFs are complex materials with highly tunable properties that could be largely exploited in order to design tailored features. Actually, MOFs can play different roles in a DSSC being able to act as photosensitizers, electrolyte additives or electrode materials. When employed as electrode-modifier and/or interlayer, MOFs have demonstrated to improve the dye loading thanks to their wide surface area and high porosity. Moreover, they are very good candidates as photoelectrode materials since, due to their electrical insulating properties, they can effectively minimize the interfacial charge recombination between the electrolyte and the electrons injected in the conduction band of TiO 2 . The highest efficiency reached is over 7% when MOF was used as photoanode exploiting the excellent ability to increase scattering capacity of incident light as well as demonstrating low loss ratio of photogenerated electrons by small dark current. 
The light harvesting capacity of MOFs can be easily tuned by choosing the appropriate organic linker, extending the absorption region within the ultraviolet and blue-violet region of the solar cells. The best overall efficiency is above 8% for a liquid-junction DSSC employing Indium(III)-Potassium(I) Metal-Organic Framework as a sensitizer in combination with N719. Some recent examples also report the use of MOFs as photosensitizers in solid-state photovoltaic cells. Another possible role of MOFs is their use as counter-electrode in order to reduce the cell costs getting rid of Pt. The preliminary results reported so far are very promising with overall photoconversion efficiency up to 9% due to large surface area and good electronic properties, i.e., an easier diffusion of iodine-based redox couple. In particular, it is worth noting that MOFs could be directly used as counter-electrode in place of Pt and could be seriously investigated as an effective low-cost and feasible alternative to platinum as counter-electrodes in DSSCs. A different, but somehow complementary, approach consists in the employment of custom-made MOFs as precursors to obtain, by thermal decomposition, a counter-electrode with specific properties and morphology (usually metal sulphides and oxides are obtained by this approach). The high versatility of MOFs is even more evident in the different roles they can fill in PSCs: indeed, they could be effectively applied as photoactive material, HTM or ETL. In all the reported cases, the photoconversion efficiency reached, still remaining well below the state of the art, demonstrates the great possibilities in the implementation of the final device. The use of MOF as electron transport layer can enhance optical harvesting and electron extraction efficiency reducing the trap state density leading to a lower electron-hole recombination. In addition, the MOF-based devices showed almost no hysteresis effect and improved stability. The same turned out to be true when MOF is employed as doping agent for conventional HTMs. The overall efficiency is above 19% with remarkable stability. On the other side, the specific role of covalent organic frameworks in DSSCs and PSCs is still not very clear since very few examples are present in the literature. Most of the studies reported on the possible implementation of COF structures in emerging PV but they mainly investigate their photoelectrochemical properties without showing their use in a real device. Some examples concern the use of COFs in PSCs with efficiency exceeding 18% where the presence of COFs seemed to improve both the crystallinity and the morphology of the PSK layer. These few examples are in any case very promising results for an effective implementation of COFs in PSCs. Funding: The authors acknowledge a grant of the Department of Chemistry of University of Turin on "Harnessing the power of light in photoactive metal-organic frameworks" within the initiative denoted "Tomorrows" to promote the collaboration among research groups. Conflicts of Interest: The authors declare no conflict of interest.
25,599
2020-10-26T00:00:00.000
[ "Materials Science", "Engineering", "Physics", "Chemistry" ]
CatLC: Catalonia Multiresolution Land Cover Dataset The availability of large annotated image datasets represented one of the tipping points in the progress of object recognition in the realm of natural images, but other important visual spaces are still lacking this asset. In the case of remote sensing, only a few richly annotated datasets covering small areas are available. In this paper, we present the Catalonia Multiresolution Land Cover Dataset (CatLC), a remote sensing dataset corresponding to a mid-size geographical area which has been carefully annotated with a large variety of land cover classes. The dataset includes pre-processed images from the Cartographic and Geological Institute of Catalonia (ICGC) (https://www.icgc.cat/en/Downloads) and the European Space Agency (ESA) (https://scihub.copernicus.eu) catalogs, captured from both aircraft and satellites. Detailed topographic layers inferred from other sensors are also included. CatLC is a multiresolution, multimodal, multitemporal dataset, that can be readily used by the machine learning community to explore new classification techniques for land cover mapping in different scenarios such as area estimation in forest inventories, hydrologic studies involving microclimatic variables or geologic hazards identification and assessment. Moreover, remote sensing data present some specific characteristics that are not shared by natural images and that have been seldom explored. In this vein, CatLC dataset aims to engage with computer vision experts interested in remote sensing and also stimulate new research and development in the field of machine learning. Background & Summary Pixel-wise classification of remote sensing images is a challenging task that often requires fieldwork and manual annotation because of the importance of its role in several critical applications. Therefore, mapping agencies and organizations are on the quest to explore how to minimize these strenuous and time-consuming manual tasks by using computer-assisted processes. To do so, current automatic land cover segmentation techniques have benefited from better raw data sources, but they still require improvement in terms of accuracy and also better integration strategies with humans in the loop. Land cover mapping is among the primary use cases of airborne and satellite images and the proposed focus of this work. Land cover maps are used in different applications such as forest inventory and management, hydrology, crop management or geologic risk identification and assessment. Therefore, accurate and updated knowledge about land dynamics is essential for territory management with different purposes and in multiple fields but, nowadays, high-resolution land segmentation is still done mainly by employing photointerpretation techniques, entailing high costs in terms of time and human resources. The transformation towards a computer-assisted solution faces a critical point: the scarcity of high-quality datasets, being the labeling process one of the main causes of this situation. Labeling natural image datasets, such as ImageNet 1 and PASCAL VOC 2 , does not pose an interpretation problem as their classes are well defined, distinctive, and can be easily understood by any human annotator. However, labeling remote sensing images correctly might need expert knowledge and requires access to different sources of raw data. For example, differentiating between deciduous or evergreen forest is not an easy task even for the expert. 
When data are scarce, the best strategy when developing classification models is to adapt models that have been developed in similar fields with plenty of data. The case of remote sensing is not an exception, and the use of traditional natural image segmentation architectures is the paradigm in the field. There are some reasons to think that these models are not optimal for remote sensing images because of their inadequate inductive biases, but this hypothesis can only be validated by having access to large datasets of carefully labeled remote sensing data. It is necessary to tackle issues such as the restricted translational invariance of these images or the variable resolution of their bands. Additionally, in the near future, full automation of high-resolution cartographic tasks such as land mapping will be the norm, and better strategies to develop powerful deep learning models with a human in the loop are necessary. Traditional pixel-wise deep learning segmentation techniques must also be adapted to this end 3 . In this paper, we present the Catalonia Multiresolution Land Cover Dataset (CatLC). This dataset (see Fig. 1) comprises a large variety of images: RGB and infrared orthophotos from airborne sensors at high resolution, radar imagery from Sentinel-1 satellites, multispectral data from Sentinel-2 satellites and compositions of topographic maps-all those accompanied by a land cover map labeled by experts in photointerpretation. Using different combinations of images from the dataset, we offer a benchmark that could serve as a starting point to explore different artificial intelligence techniques for remote sensing segmentation purposes. CatLC dataset aims to engage with computer vision experts interested in remote sensing and stimulate research and development. Methods In this section, the CatLC dataset is presented in detail. It includes a set of images, obtained by airborne and satellite sensors, from the catalogs of the Cartographic and Geological Institute of Catalonia (ICGC) and the European Space Agency (ESA). Labels correspond to the current ICGC's land cover map. The CatLC dataset covers the entire territory of Catalonia (Spain) (see Fig. 2), approximately 32000 km 2 , providing a high quality source of information for the application of Artificial Intelligence (AI) and Deep Learning (DL) techniques, both regarding the variety of the information and their extension. All the images were acquired during 2018 at different spectral bands and spatial resolutions. They are provided in GeoTiff raster format and share a common georeferencing system projection (WGS84 UTM31N Reference System). They cover the same geographic extension, given by the following bounding box: UTM X West: 240000, UTM X East: 540000, UTM Y North: 4780000, and UTM Y South: 4480000. The different data layers, with spatial resolutions varying between 1 and 10 meters depending on the product and sensor used, are presented in detail in the following subsections. In such subsections we illustrate the images provided by the dataset (land cover, orthophoto, Sentinel-1, Sentinel-2 and topographic maps). Figures 5,6,7,8,9,11,12 and 13 show different images on three geographical areas in Catalonia. We have summarized all available data in Table 1. Land cover map. The 2018 land cover map presented here has 41 different classes (see Fig. 3), including different agricultural areas, forest areas, urban areas and water bodies. Photointerpreters from ICGC followed a standardized procedure during its generation process. 
The minimum area for labeling an element was 500 square meters, and the minimum length for linear features such as roads, rivers, railroad tracks, etc. was between 8 and 10 meters (https://datacloud.ide.cat/especificacions/cobertes-sol-v1r0-esp-01ca-20160919.pdf). Supervision was performed on a sample of 811 points throughout the territory, resulting in a thematic accuracy of 81%. The final 41 labels (see Fig. 3) presented in this publication are delivered at a spatial resolution of 1 m. The distribution of the land covers within the mapped territory is heterogeneous: some covers, such as herbaceous crops or dense coniferous forests, are much more common than airport areas or water bodies. In Fig. 4 we show the histogram for the complete dataset (see also Fig. 5).

Orthophoto. An orthophoto is a cartographic document consisting of a vertical aerial image that has been rectified in such a way as to maintain a uniform scale over the entire image surface. It constitutes a geometric representation, at a given scale, of the Earth's surface. The original images were taken at a resolution of 25 centimeters but, because the land cover map has a resolution of 1 meter, we decided to rescale the orthophoto raster layer to 1 meter as well. This layer comprises four distinct bands, each providing information from a different zone of the electromagnetic spectrum: three belong to the visible area of the spectrum (RGB) (see Fig. 6) and one to the infrared area (see Fig. 7). A continuous image is generated from several thousand independent photographs processed with a combination of commercial software (Trimble/Inpho) and in-house developments. On this cartographic document, digital retouching has been carried out to minimize artifacts that may have originated during the acquisition and processing of the images. The applicable specification can be found in 4 .

Sentinel-1. The Sentinel-1 dataset has been generated from Synthetic Aperture Radar (SAR) images in GRD (Ground Range Detected) mode from the year 2018 at 10-meter spatial resolution. The Sentinel-1 constellation is made up of two twin satellites, A and B, from the European Space Agency (ESA). These satellites emit a microwave signal (frequency 5.405 GHz) and subsequently receive the echo of its reflection on the ground surface. Sentinel-1 images therefore contain information on the reflectivity of the terrain, which, depending on its type (urban, vegetation, crops, water, etc.), will have different intensities, thus providing valuable information for land cover classification. For this purpose, 12 acquisitions were chosen, one for each month of the year, covering the entire territory of Catalonia. Full coverage was achieved by combining two orbits in ascending mode (orbits 30 and 132) and VV (Vertical-Vertical) polarization on similar dates. The descending orbit and VH (Vertical-Horizontal) polarization have not been included in the present dataset because their information is mostly redundant.
However, their use could be explored in case they provide improvements in segmentation. Additionally, an average image of the year 2018 has been generated with improved radiometry (multitemporal speckle reduction) by combining all 12 monthly images into one (see Fig. 8). Consequently, the average image cannot provide information on temporal changes during 2018, but it does provide an image with a lower noise level.

The images were processed with the SNAP (Sentinel Application Platform) software 5 from ESA using the following procedure:
1. Download of the precise orbit for each image using the "Apply-Orbit-File" function, which provides detailed information for its correct georeferencing.
2. Deletion of noisy pixels from the edge of the image using the "Remove-GRD-Border-Noise" function.
3. Radiometric calibration of each image, providing calibrated reflectivity information for the Sentinel-1 images. A correct calibration is necessary for the multitemporal study of the data.
4. Compensation of topographic effects using the "Terrain-Flattening" function. The acquisition geometry of SAR images is oblique, which generates distorting artifacts in the reflectivity associated with the terrain topography (layover, foreshortening and shadowing). This processing compensates for these artifacts to obtain an image that is as independent as possible from the topography.
5. Georeferencing using the "Terrain-Correction" function and final mosaicking of the images.
A video comparing the average Sentinel-1 image and the image for each month is available on the CatLC webpage 6 .

Sentinel-2. Sentinel-2 provides multispectral imagery at different resolutions approximately every five days. We have selected two relevant dates for this dataset, the first in April and the second in August. These two dates were chosen to follow the phenological evolution of the vegetation through spring and late summer. Since we are in the Mediterranean area, these two dates make it possible to detect both winter and summer herbaceous crops, as well as evergreen and deciduous forest areas. Due to the presence of clouds, multiple data takes were necessary to build a cloud-free mosaic (see Fig. 9). The images obtained by the MSI sensor on the Sentinel-2A and 2B satellites, from the European Commission Copernicus program, have been atmospherically corrected by means of the ESA sen2cor v2.8 software 7 to yield Level-2A images. The main purpose of sen2cor is to correct single-date Sentinel-2 Level-1C Top-Of-Atmosphere (TOA) radiance for the effects of the atmosphere, in order to deliver a Level-2A Bottom-Of-Atmosphere (BOA) reflectance. The process may optionally use a DEM (Digital Elevation Model) to correct the changes in radiometry related to the topographic relief. A 10 m gridded DEM generated at ICGC by photogrammetric techniques has been used in this study. A total of 10 bands, at 10 m and 20 m resolution, are preserved as input features for the deep learning process. Figure 10 presents Sentinel-2 images before and after correction.

Digital elevation model. This is a standard layer freely distributed by ICGC, built upon the altimetric information of the Topographic Base of Catalonia 1:5000 version 2 (BT-5m v2.0), which includes profiles, altimetric coordinates, break lines and contour lines, all of them obtained from the terrain.
It consists of a raster image at 5 m pixel size, and its estimated altimetric accuracy is 0.9 m RMS (see Fig. 11). The specification followed for its generation can be found in 8 . Two typical subproducts for remote sensing applications are the slope, which indicates each pixel's steepness, and the aspect, which yields the orientation of the maximum slope between adjacent pixels. These values have been calculated from the DEM and thus contain redundant information; we include them because they might be helpful for the interpretation of the results.

Digital surface model. The Digital Surface Model (DSM) is a raster layer at 1 m pixel size containing orthometric heights. It represents the topmost height for every pixel position on the grid, be it the ground or features such as forest canopy and buildings (see Fig. 12). It is generated using Trimble/Inpho's software package MATCH-T DSM, which works fully automatically, using different image matching techniques such as feature-based matching (FBM), cost-based matching (CBM) and least squares matching (LSM) to produce highly dense point clouds. The process follows a hierarchical approach, starting from an upper level of the image pyramid and generating an approximate DSM for the next lower pyramid level. Different levels of smoothing can be applied as a function of terrain roughness to filter or reject outliers from the generated point cloud. Large point clouds (more than 5 million points) are automatically split into a squared tile structure. From the final point cloud (tiles), a raster file with the selected 1 m grid size is interpolated. The same aerial photogrammetric images at 0.25-0.35 m used to produce the orthophoto are employed, thus guaranteeing good consistency between these products.

Canopy height model. The Canopy Height Model (CHM) is a high-resolution (1 m) raster dataset that maps all the objects over the terrain as a continuous surface. It is well suited to delineating the forest extent, but it also includes urban landscape data. Each pixel of this model represents the height of the trees above the ground topography; in urban areas, the CHM represents the height of buildings or other built objects (see Fig. 13). This layer is created by subtracting the 2016-2017 LiDAR DEM (interpolated from the 2 m-pixel ICGC standard product 9 ) from the 2018 photogrammetric DSM. Note that this product does not depend on the aforementioned Digital Elevation Model.

Data Records

The complete CatLC dataset is available at the following link 6 . The data files and their formats are detailed in Table 2.

Technical Validation

Assessing the quality of the images in the dataset is very important to ensure that the input data are of optimal quality, beyond characteristics such as spatial resolution or the number of spectral bands. In order to show the quality of the presented data, below is a summary of the Quality Controls (QC) used during their processing:
1. Land cover map: The land cover map update process includes periodic checks of selected polygons among those that have been geometrically and semantically modified. In the case of systematic errors or misunderstandings about the legend, they are corrected. At the end of the 2018 update, an internal quality supervision was carried out on a sample of 811 points throughout the territory, resulting in a thematic accuracy of 81%.
2. Orthophoto: According to the specification 4 , several automatic and manual checks are carried out. The main ones are positional accuracy (RMSE 0.5 m), geometric and radiometric continuity, dynamic range and image quality. Additional manual inspection after retouching ensures that remaining artifacts cover less than 1% of the total area of Catalonia.
3. Sentinel-1 and Sentinel-2: The processing of the Sentinel-1 and Sentinel-2 images has been carried out with the quality standards of the ESA SNAP 5 and sen2cor 7 software, respectively. This guarantees a good radiometric and geometric calibration of the images. After processing, an evaluation of the location of various control points was performed to validate a geolocation error of less than one pixel (10 m).
4. DSM and CHM: the estimated height accuracies are:
• DSM in the Pyrenees: better than 40 cm.
• DSM in the rest of Catalonia: better than 30 cm.
• CHM in the Pyrenees: better than 45 cm.
• CHM in the rest of Catalonia: better than 35 cm.
Since the DSM and the CHM are automatically generated products, their quality can decrease considerably in areas where the matching algorithm did not achieve optimal results (e.g., in shadow areas). It should also be noted that in areas covered with certain kinds of forest and mildly sparse trees, the DSM/CHM does not always represent the height of the canopy, depending on the tree density and the presence of foliage.

Usage Notes

An initial benchmark accompanies the CatLC dataset as a starting point and to demonstrate a helpful pipeline to train a model with the provided data. Note that the use of these data is subject to a Creative Commons Attribution 4.0 International license, and the dataset contains Sentinel Copernicus data modified by the ICGC. Unlike other datasets that consist of multiple images, CatLC has only one large image. To work with it, smaller tiles must be accessed, so the first step was to create a list with the indexes of all the tiles to be used in the dataset, each of dimension 960 × 960 pixels (in the highest spatial resolution images of 1 m). This list was then randomly divided into three groups: 60% for training, 20% for validation, and 20% for testing. Since this is a segmentation problem, a perfectly homogeneous class distribution across the three groups could not be achieved, because tiles usually contain multiple classes. The distribution for the sets can be found in Fig. 14.

As the main baseline, we selected the classical U-Net neural network 10 , which is used as a starting point in most applications that require image segmentation. This baseline was implemented in PyTorch, running on a workstation with an Nvidia Quadro P5000 GPU. Cross-entropy loss was used, together with an Adam optimizer with a learning rate of 0.0001. The experiments consider three different scenarios:
1. Use the RGB orthophotos and the infrared band as input data. We know by experience that the high resolution should give good results at the borders between different classes, but the limited number of spectral bands makes it harder to differentiate classes that belong to the agricultural or forest superclasses.
2. Use the two Sentinel-2 images corresponding to April and August as input data. This time, the low resolution will penalize class boundaries, but there should be an improvement in differentiating the agricultural and forest superclasses.
3. Use the complete CatLC dataset as input data. It does not make sense to use Sentinel-1 or topographical data alone, because most of their information is about elevation or reflectivity.
However, their combination with orthophotos and Sentinel-2 data should improve the results. To better visualize the results, we have compressed them into a four-superclass confusion matrix (Fig. 15) and mean intersection-over-union metrics (Fig. 16), as recommended for the COCO dataset 11 . In Figs. 17-20, the confusion matrix and the mean intersection over union for all 41 classes are shown. As stated before, Sentinel-2 outperforms the orthophoto in agricultural and forest zones, but it falls behind where higher resolution is needed, as in urban areas. Finally, using the complete dataset gives the best results overall. Fig. 21 shows an example of a segmentation using the complete CatLC dataset.

Code availability

CatLC is available for download, along with all the necessary information and tutorials, on the following website 6 . A tutorial on how to manage the data is available at the following URL: https://github.com/OpenICGC/CatLC/, together with the code to reproduce the training presented in the article. We also provide the logs for the whole training, which can be visualized using TensorBoard.
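To make the benchmark recipe above concrete, the following is a minimal, self-contained PyTorch sketch of the described setup (random 60/20/20 tile split, cross-entropy loss, Adam at a 0.0001 learning rate). The tiny U-Net-style network and the dummy batch are illustrative stand-ins of my own, not the code from the CatLC repository:

```python
import random
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Deliberately small U-Net-style stand-in for the paper's baseline 10."""
    def __init__(self, in_ch: int, n_classes: int):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.head = nn.Conv2d(64, n_classes, 1)

    def forward(self, x):
        e = self.enc(x)                      # encoder features (skip connection)
        m = self.up(self.mid(self.down(e)))  # bottleneck + upsampling
        return self.head(torch.cat([e, m], dim=1))

# 60/20/20 random split of tile indexes, as described in the Usage Notes.
indexes = list(range(1000))                  # hypothetical number of tiles
random.shuffle(indexes)
n = len(indexes)
train_idx = indexes[:int(0.6 * n)]
val_idx = indexes[int(0.6 * n):int(0.8 * n)]
test_idx = indexes[int(0.8 * n):]

model = TinyUNet(in_ch=4, n_classes=41)      # scenario 1: RGB + infrared bands
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # as in the paper

# One dummy training step on a stand-in batch (real tiles are 960 x 960).
images = torch.randn(2, 4, 96, 96)
labels = torch.randint(0, 41, (2, 96, 96))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"cross-entropy after one step: {loss.item():.3f}")
```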
Mesons in ultra-intense magnetic field: an evaded collapse

Spectra of $q \bar q$ mesons are investigated in the framework of the Hamiltonian obtained from the relativistic path integral in an external homogeneous magnetic field. The spectra of all 12 spin-isospin s-wave states generated by the $\pi$- and $\rho$-mesons with different spin projections are studied analytically as functions of the field strength. Three types of behavior with characteristic splittings are found. The results are in agreement with recent lattice calculations.

Introduction

The interest in the behavior of quarks, hadrons and atoms in strong magnetic fields (MF) has been very high during the last decade. The surge of research activity in this field was inspired by the fact that MF up to $eB \sim \Lambda_{QCD}^2 \sim 10^{19}$ G¹ are generated during the early stages of peripheral heavy-ion collisions at RHIC and LHC. A field about four orders of magnitude weaker is anticipated to operate in magnetars. The immediate question is what happens to the mass and the wave function of a meson embedded in such a strong MF. The answer to this question has been sought in various approaches (see [1] for a list of references), including lattice simulations. In the present work the problem is investigated in the framework of the relativistic path integral Hamiltonian (PIH) formalism [2-4]. For the pion this method has to be supplemented by elements of chiral dynamics [5]. The analytical results will be compared with the lattice calculations presented recently in [1].

Before getting involved with the details of the calculations, it makes sense to relate the MF strength to some characteristic physical parameter which defines the spectrum of quark-antiquark meson states. From the textbooks we know that for the hydrogen atom the critical, or so-called "atomic", field is $B_a = \alpha^2 m_e^2/|e| = 2.35 \cdot 10^9$ G. This value corresponds to the situation when the magnetic, or Landau, radius $l_B = (|e|B)^{-1/2}$ is equal to the Bohr radius. The QCD coupling constant is $\alpha_s \sim 1$, and the meson radius at $eB = 0$ is determined by the QCD string tension $\sigma \simeq (0.15 - 0.18)$ GeV² [4]. It is therefore natural to define for hadron spectra the critical MF as $B_\sigma = \sigma/|e| \simeq 10^{19}$ G, which yields $l_B \simeq 0.6$ fm. This value is approximately equal to or smaller than the typical hadron size.

¹ We use the relativistic system of units $\hbar = c = 1$, $e^2 = 4\pi\alpha$. Then 1 GeV² ≃ 5.12 · 10¹⁹ G.

The determination of the hadron spectrum in MF is not an easy task. The first problem is to separate the center-of-mass (c.m.) motion. For a neutral nonrelativistic system in MF this can be done (with some qualifications) by making use of the pseudomomentum [6-9]. This approach was extended to the relativistic sector within the PIH framework in [4]. For a charged meson the pseudomomentum method is applicable only for an unphysical model of a meson with two equally charged quarks [4]. In this contribution we present results on the meson spectrum in MF both within the pseudomomentum approach and in a new analytical method of constituent separation (CS). It will be argued that its accuracy is within 15% for ultra-strong MF ($eB \gg \sigma$) and within 20% for $eB < \sigma$. The method allows one to study neutral and charged mesons in the same way. The results for the neutral mesons will be obtained both in the pseudomomentum and CS approaches. In this way the accuracy of the CS will be tested.
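As a quick numerical check of these scales (a sketch of my own, using the unit conversion from footnote 1 and $\hbar c \approx 0.1973$ GeV·fm):

```python
# Reproduce the critical-field estimates quoted above.
import math

GEV2_TO_GAUSS = 5.12e19   # 1 GeV^2 in Gauss (footnote 1)
HBARC_GEV_FM = 0.1973     # hbar*c in GeV*fm

sigma = 0.16              # string tension in GeV^2 (mid-range of 0.15-0.18)

B_sigma = sigma * GEV2_TO_GAUSS
print(f"B_sigma = sigma/|e| ~ {B_sigma:.1e} G")  # ~8e18 G, i.e. of order 1e19 G

# Landau radius l_B = (eB)^(-1/2); with eB = sigma in GeV^2,
# convert 1/GeV to fm using hbar*c.
l_B = HBARC_GEV_FM / math.sqrt(sigma)
print(f"l_B at eB = sigma ~ {l_B:.2f} fm")  # ~0.5 fm, the hadronic scale
```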
The most important question we have to answer is whether the meson spectrum in MF is bounded from below; in other words, does the meson mass reach zero at some MF strength? We shall point out two dynamical mechanisms that might have led to such a collapse and explain why it does not happen.

The paper is organized as follows. In section 2 the relativistic Hamiltonian based on the path-integral Feynman-Fock-Schwinger representation is written down and the spectral problem is formulated. In section 3 we discuss the possible types of meson mass trajectories in MF. Section 4 contains the analysis of perturbative corrections and the potential reasons for the collapse of a meson state in MF. In section 5 we present the main results in comparison with lattice calculations.

The relativistic Hamiltonian and the spectral problem

To find the meson masses in MF we use the path integral Hamiltonian (PIH) method based on the Feynman-Fock-Schwinger representation [2-4]. With the help of the Wilson loop, it allows one to treat the interaction of quarks with external Abelian and non-Abelian fields in a gauge-invariant way. As shown in [2-4,10], the quark-antiquark spectral problem in MF in the PIH formalism is reduced to the bound-state problem for a relativistic Hamiltonian which includes all the non-perturbative dynamics. Here $\omega_i$ is the dynamical mass of the i-th quark, or the einbein variable [2-4]. It is convenient to take the MF in the symmetric gauge $\mathbf{A}_i = \frac{1}{2}(\mathbf{B} \times \mathbf{r}_i)$, since this gauge allows one to define the angular momentum projection of each quark as a quantum number. The next step is to perform the minimization with respect to $\omega_i$, which yields the physical spectrum. The total meson mass is the sum of the non-perturbative (dynamical) mass obtained from (1)-(3) and the first-order perturbative contributions, where $V_{oge}$ is the one-gluon-exchange potential, and $a_{ss}$ and $\Delta M_{SE}$ are the spin-spin and self-energy contributions.

For neutral hadrons (mesons and baryons) the eigenvalue problem (2) admits an exact solution, obtained by separation of the c.m. motion. To this end the pseudomomentum operator is introduced [6-9]. In MF the pseudomomentum takes over the role of the mechanical momentum: it commutes with the Hamiltonian and is therefore a constant of motion. Physically, $\mathbf{F}$ is conserved since it takes into account the Lorentz force acting on the particles in the MF.

In [1] we proposed a more general approach which allows one to investigate the mass spectra of both neutral and charged mesons. This is the constituent separation (CS) method. The c.m. position $\mathbf{r}_0$ is fixed at the origin and an effective string tension $\sigma_i$ is attributed to each quark. In this picture the quarks may be considered quasi-independent in the non-perturbative part of the interaction.

Meson trajectories in strong MF

In strong MF, $eB \gg \sigma$, it is convenient to use for the spin degrees of freedom in (1) a basis in which the operator $\mathbf{B}\left(\frac{e_1}{2\omega_1}\boldsymbol{\sigma}_1 + \frac{e_2}{2\omega_2}\boldsymbol{\sigma}_2\right)$ is diagonal. The four vectors forming this basis are $|++\rangle$, $|--\rangle$, $|+-\rangle$, $|-+\rangle$. One can easily obtain three types of asymptotic meson trajectories at $eB \to \infty$. The character of a trajectory is determined by the signs of the quark charges and the spin directions. According to the terminology adopted in atomic physics, a trajectory is called low-field seeking (LFS) if the energy decreases as the MF decreases. A state which at $eB \to \infty$ is MF-independent may be called zero-field seeking (ZFS).
The asymptotic trajectories are classified by the signs of $e_i \sigma^z_i$; in particular, the ZFS case corresponds to $e_1 \sigma^z_1 > 0$, $e_2 \sigma^z_2 > 0$, and the LFS2 trajectory exhibits a stronger MF dependence than LFS1. From (7) it follows that the $(\pi^+, \rho^+)$ family, which contains $u$ and $\bar d$ quarks, is distributed among the three above classes in the following way: $\rho^+ (s_z = 1)$ belongs to ZFS, $\pi^+ (s_z = 0)$ and $\rho^+ (s_z = 0)$ belong to LFS1, and $\rho^+ (s_z = -1)$ rests in LFS2. The same situation, up to the sign change, holds for $(\pi^-, \rho^-)$. The states $\pi^0$ and $\rho^0 (s_z = 0)$ contain $u\bar u$ and $d\bar d$ components; the charges of $u$ and $d$ are different, and this results in an additional double splitting.

As one can see from Fig. 1, the color Coulomb collapse is indeed evaded. We remind the reader that in superstrong MF radiative corrections screen the Coulomb potential in the hydrogen atom, leading to the freezing of the ground-state energy at the value $E_0 = -1.7$ keV [11,12]. It is interesting to note that asymptotically, at $eB \to \infty$, the matrix element $\langle \psi_0 | V_{OGE} | \psi_0 \rangle$ vanishes.

Another threat of collapse comes from the hyperfine spin-spin interaction $a_{SS}$. In first-order perturbation theory in the PIH formalism it corresponds to a color-magnetic contact (δ-function) interaction. In strong MF the ground-state wave function acquires the form of an ellipsoid elongated in the direction of the MF. At $eB \to \infty$ the transverse and longitudinal radii are $r_\perp \sim 1/\sqrt{eB}$ and $r_z \sim 1/\sqrt{\sigma}$. This means a focusing of the wave function at the origin and a divergent factor $|\psi(0)|^2 \sim eB$ in the matrix element of $V_{SS}$. Note that the problem of the singularity due to the δ-function interaction exists without MF as well; it is cured by smearing the δ-function [13,14]. In the PIH formalism there is a natural cut-off parameter $\lambda \sim 1$ GeV$^{-1}$, corresponding to the correlation length of the stochastic vacuum gluonic field, and the δ-function is replaced by a smeared profile of that range. In this way the "fall to the center" is prevented for all ZFS states except for the $\pi^0$ meson. The $\pi^0$ trajectory is stabilized if one takes its chiral degrees of freedom into account. We also note that in [3] a general theorem was proven according to which the eigenvalues of the relativistic Hamiltonian in MF are positive. The explicit account of pion chiral dynamics [5] confirms this result. The main point is that the GMOR relations remain valid for neutral pions in arbitrarily strong MF, while charged pions lose their chiral properties at $eB > \sigma$.

Below we present the results of our analytic calculations in comparison with the recent lattice results from [1]. In Fig. 2 the $\rho^-$ meson mass evolution in MF is shown.

Results and conclusions

In Figs. 3 and 4 the results for $\pi^0$ and $\rho^0$ are presented, showing the mass evolution of the $(\pi^0, \rho^0)(u\bar u)$ family from analytic and lattice data (hollow circles are from [16]). One should keep in mind that the $u\bar u$ and $d\bar d$ components give rise to their own trajectories. The growing trajectories belong to the LFS2 class, and the splitting is equal to $\sqrt{2}$. In Fig. 5 we present the mass evolution of the chiral and non-chiral $\pi^-$ in comparison with the lattice data. The chiral effects provide the decrease of the mass to its physical value at $eB \to 0$.

In this work we have evaluated the trajectories of the $\pi$ and $\rho$ meson masses as functions of the external MF. The meson quark content and the pion chiral dynamics were thoroughly taken into account. The most interesting problem was whether the mass remains finite in arbitrarily strong MF. A collapse might have happened either due to the color Coulomb interaction or due to the spin-spin potential proportional to the δ-function.
We have shown that in both cases there are physical reasons why the collapse is evaded. The analytic calculations of the meson mass trajectories give results which are in agreement with recent lattice simulations.
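To illustrate numerically why the smearing tames the hyperfine term, here is a small sketch of my own construction, not taken from the paper: it assumes a factorized Gaussian ground state with the quoted radii $r_\perp \sim 1/\sqrt{eB}$ and $r_z \sim 1/\sqrt{\sigma}$, and a Gaussian smearing of range $\lambda \sim 1$ GeV$^{-1}$, for which the smeared matrix element has a closed form:

```python
# Compare the bare |psi(0)|^2 (which grows like eB) with the smeared
# delta-function matrix element (which saturates as eB -> infinity).
import math

def smeared_delta_me(r_perp, r_z, lam):
    """<psi| delta_lam |psi> for a Gaussian |psi|^2 and Gaussian delta_lam.
    Closed form: 1 / (pi^{3/2} (r_perp^2 + lam^2) sqrt(r_z^2 + lam^2))."""
    return 1.0 / (math.pi**1.5 * (r_perp**2 + lam**2)
                  * math.sqrt(r_z**2 + lam**2))

sigma = 0.16                     # string tension, GeV^2
lam = 1.0                        # cutoff ~ 1 GeV^-1 (vacuum correlation length)
r_z = 1.0 / math.sqrt(sigma)     # longitudinal radius, GeV^-1

for eB in (0.2, 1.0, 10.0, 100.0):               # field strength in GeV^2
    r_perp = 1.0 / math.sqrt(eB)                 # transverse (Landau) radius
    bare = 1.0 / (math.pi**1.5 * r_perp**2 * r_z)  # |psi(0)|^2 ~ eB: diverges
    print(f"eB={eB:6.1f} GeV^2: |psi(0)|^2={bare:8.3f}, "
          f"smeared ME={smeared_delta_me(r_perp, r_z, lam):6.3f}")
```

Under these assumptions the bare contact term grows without bound while the smeared matrix element levels off near $1/(\pi^{3/2}\lambda^2\sqrt{r_z^2+\lambda^2})$, mirroring the evaded "fall to the center" discussed above.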
Characterization of Composite Powder Feedstock from a Powder Bed Fusion Additive Manufacturing Perspective

This research aims at evaluating the characteristics of 5 wt.% B4C/Ti-6Al-4V composite powder feedstock prepared by two different categories of mechanical mixing for powder bed fusion (PBF) additive manufacturing (AM) of metal matrix composites (MMCs). Microstructural features, particle size, size distribution, sphericity, conditioned bulk density and flow behavior of the developed powders were examined. The flowability of the regularly mixed powders was significantly lower than that of the Ti-6Al-4V powder, whereas the flowability of the ball-milled systems depended strongly on the milling time. The decrease in the flowability of the 2 h ball-milled powder compared to the Ti-6Al-4V powder was attributed to the mechanical interlocking and entangling caused by the B4C particles fully decorating the Ti-6Al-4V particles. Although the flattened/irregular shape of the powder particles in the 6 h milled system acted to reduce the flowability, the overall surface area reduction led to a higher flowability than in the 2 h milling case. Regardless of the mixing method, incorporation of B4C particles into the system decreased the apparent density of the Ti-6Al-4V powder. The composite powder obtained by 2 h of ball milling was suggested as the best condition meeting the requirements of PBF-AM processes.

Introduction

Metal matrix composites (MMCs) are outstanding materials benefitting from the properties of at least two constituents: the metal matrix (usually an alloy) and the reinforcement (in general, an oxide, an intermetallic compound, a carbide or a nitride) [1-3]. Incorporation of reinforcements into the metallic matrix is generally associated with improvements in hardness, specific strength, wear resistance, fracture toughness and stiffness compared to the monolithic counterparts [4-7]. Owing to their desirable structural and functional properties, MMCs have found application in numerous technological fields, including the automotive, aerospace and biomedical industries [8,9]. Several conventional manufacturing processes already exist for incorporating reinforcements into metallic matrices to produce a wide variety of MMCs [10-14]; however, it is rather challenging to fabricate geometrically complex parts using these processes. Additive manufacturing (AM) is regarded as a major revolution in manufacturing technology and competes with conventional manufacturing processes in many aspects, including, but not limited to, design freedom, fabrication cost and time, accuracy and part quality [15,16]. Powder bed fusion (PBF) refers to the AM processes in which an object is manufactured layer by layer from a batch of loose powder using a mobile heat source. During this process, a thin layer of powder is deposited on the building platform by the recoater. The powder flow behavior during the recoating process plays a critical role in the uniformity, surface roughness and thickness of the deposited powder layer and, consequently, the dimensional accuracy of the final part [17,18]. On the other hand, the powder bed packing density directly affects the density and mechanical properties of additively manufactured components [19,20]. Therefore, the flowability and the packing density of the powders need to be investigated prior to the AM process to ensure the soundness of the final parts [21-24].
The desired properties of MMCs are achieved when the reinforcements are homogeneously distributed throughout the matrix with a strong reinforcement/matrix interfacial bonding [25,26]. Conventional MMC fabrication methods generally yield inhomogeneous microstructures, making it rather difficult to fully exploit the strengthening potential of the reinforcements. Therefore, it is of crucial importance to develop new fabrication routes providing a more homogeneous distribution of reinforcements within the matrix [16]. The noticeably localized melt pool and the extremely high solidification rates associated with PBF-AM technology can lead to MMC structures that are much more homogeneous than conventionally processed parts [27]. However, there are still some challenges with the PBF-AM processing of MMCs in achieving highly uniform microstructures, for the following reasons [3,28]:
i. In nano-composites, reinforcing particles tend to agglomerate and form coarsened clusters in the matrix due to the van der Waals attraction forces among them.
ii. A large difference between the densities of the reinforcing particles and the liquid matrix encourages a non-uniform distribution of reinforcements in the microstructure.
iii. The convection flows (i.e., the Marangoni effect) induced in the melt pool may not be sufficient to disperse the reinforcing particles throughout the system.
Therefore, when it comes to the PBF-AM of MMCs, particular emphasis should be placed on pre-processing of the composite powder feedstock in order to achieve parts with homogeneous microstructures and, consequently, uniform mechanical properties. Development of a suitable composite powder feedstock with a uniform distribution of reinforcing particles is the first step to mitigate the mentioned challenges. Since the feedstock of MMCs is not commercially available for AM processing [29], several techniques have been employed in recent years to prepare these powders. Mechanical routes such as regular mixing [30-32] and ball milling [33-39], and non-mechanical methods including gas atomization of a pre-alloyed system [40,41], agent-assisted deposition [42,43] and electrodeposition [44,45] are among these methods. Compared to the mechanical mixing routes, which have attracted a great deal of attention in recent years, the non-mechanical methods have rarely been adopted to prepare composite powder feedstocks for AM purposes. Although the non-mechanical methods can probably produce composite powder systems with a better flowability and a more uniform distribution of powder constituents than the mechanical routes, they are much more complex and expensive. Due to their low cost as well as their applicability to many powder systems, the mechanical routes are the most frequently used methods in powder feedstock preparation for the PBF-AM of MMCs [29,46]. Incorporation of guest powder particles (e.g., ceramic particles) into host powder particles (usually metallic particles) leads to composite powder feedstocks with characteristics different from those of the individual constituents. The particle size, particle size distribution, and distribution state of the guest powder particles are among the features which dictate the apparent density, flowability, and laser absorptivity of the composite powder [15,16].
The laser absorptivity influences the heat absorption and the melt pool size [44,47,48], while the flowability and the apparent packing density of the powder play crucial roles in the layer thickness (dimensional accuracy) and density of the final part [17,49-52]. A literature review of the AM processing of MMCs reveals that the mixing of powders has, in most cases, been performed to achieve a distribution of free (non-attached) guest particles throughout the mixed powder system [30-39,53-55]. On the other hand, preserving the spherical shape of the metallic powder particles has been considered in some of these studies [36,53,55,56]. However, the effect of powder preparation routes and process variables on the composite powder features affecting the quality of AM-processed parts is still unclear and needs in-depth analysis and characterization from the AM perspective in order to obtain high-quality MMCs. The present research aims to study the characteristics of mechanically mixed 5 wt.% B4C/Ti-6Al-4V (Ti64) composite powder feedstock from the PBF-AM viewpoint. For this purpose, the regular mixing and ball-milling methods were employed with different mixing times to prepare a wide variety of mixed powder systems. The effects of the mixing method and the mixing time on the size, size distribution, sphericity, shape, distribution state of guest particles, phase formation, plastic deformation, apparent density, and flow behavior of the prepared composite powder systems were studied. Moreover, the mechanisms involved in the flow behavior of the developed feedstocks were identified, and the best powder feedstock meeting the requirements of PBF-AM processing of MMCs was proposed.

Powder Preparation

The powders used in this research were Ti64 (15-45 µm) and B4C (1-3 µm), named the "host" and "guest" powders, respectively. The nominal chemical composition of these powders is provided in Table 1. For preparing the composite powder feedstock, the regular mixing and ball-milling processes were employed, as schematically shown in Figure 1. In both cases, the mixed powder systems contained 5 wt.% B4C (guest), and the powders were mixed under a protective argon gas atmosphere to avoid oxidation. The mixing process was performed using a high-performance planetary Pulverisette 6 machine at a fixed rotational speed of 200 rpm and mixing times in the range of 1 to 6 h. The regularly mixed and the ball-milled samples are denoted R(1-6) and B(1-6), respectively, depending on their mixing time. In the regular mixing, the powders were mixed without balls, whereas stainless steel balls with a diameter of 10 mm were added to the system in the ball-milling process. The ball-to-powder ratio was selected to be 5:1, and every 30 min of milling was followed by a 15 min pause in order to avoid a temperature increase during the process [57]. For each composite powder system, the mass was measured after the mixing to determine the material loss.

Microstructure and XRD Analysis

The morphology of the starting Ti64 (host) and B4C (guest) powders, as well as the composite powder systems, was studied using a Vega Tescan scanning electron microscope (SEM) operating at an accelerating voltage of 20 kV. X-ray diffraction (XRD) analysis was employed to study the effect of the mixing method and the mixing time on the phase formation and the plastic deformation of the developed composite powder feedstocks.
This analysis was performed at ambient temperature over a wide 2θ range of 20°-80° using a PANalytical X'Pert powder X-ray diffractometer (Westborough, MA, USA; Cu Kα target, operating at a voltage of 45 kV and a current of 35 mA, with a step size of 0.0167°) equipped with an X-ray monochromator. In order to gain a better understanding of the microstructural features, the starting Ti64 and composite powders were also sectioned and characterized using SEM.

Particle Size, Size Distribution and Sphericity

A Retsch Camsizer X2 machine (Haan, Germany) was used to measure the particle size, size distribution and sphericity of the Ti64 powder and of the developed 5 wt.% B4C/Ti64 composite powder systems fabricated through the regular mixing and ball-milling methods. This equipment uses a high-resolution dual-camera system to characterize fine and agglomerating powders ranging from 800 nm to 8 mm in diameter [58]. The reported results are the average of three measurements. The sphericity of the powder particles was calculated based on the equation suggested by the ISO 9276-6 standard [59]:

$SPHT = \frac{4\pi A}{P^2}$ (1)

where P and A are the measured circumference and the area covered by a particle projection, respectively. Given that the sphericity of an ideal sphere is unity, deviation from the ideal spherical shape results in lower sphericities.

Flow Characteristics

The flow behavior of powder particles can be characterized using different techniques, such as a ring shear cell tester, the Hausner ratio (HR), the angle of repose (AOR)/Hall flowmeter, the avalanche angle, and the Freeman Technology 4 (FT4) powder rheometer [60-62]. When choosing one of these techniques, the characteristics of both the technique and the process need to be considered, since the selected flowability measurement technique should be as close to the employed process as possible. In recent years, the FT4 powder rheometer has emerged as a unique technology to measure the flow behavior of a powder whilst the powder is in motion. In this device, a precision 'blade' is rotated and moved downwards through a fixed mass of the powder bed to establish a flow pattern. The work required to drive the rotating impeller a certain distance into the powder bed yields the flow energy. Due to its dynamic nature, this technique is capable of differentiating the flowability of powders that exhibit similar behavior under other flow measurement techniques [63]. The FT4 powder rheometer technique provides measurements of several parameters related to the process performance of powders. The interaction of the precision blade with the powder in this technique resembles that of the recoater with the powder in the PBF-AM process. It is worth noting that the control over the blade speed provided by this technique enables the characterization of the powder's sensitivity to changes in flow rate. This unique feature also facilitates analysis of the powder flowability for different PBF-AM machines with different recoater speeds. The flow characteristics of the host powder, as well as of the composite powder feedstocks developed in the present study, were measured using an FT4 powder rheometer (Freeman Technology, Tewkesbury, UK). In order to study the powder rheological properties with this technique, the standard "Stability and Variable Flow Rate (SVFR)" method was employed, which consists of a stable tip-speed zone with seven test cycles followed by a variable tip-speed zone with four test cycles.
During each test cycle, the precision blade rotated downwards and upwards through the fixed mass of powder to establish a flow pattern, and the powder's resistance to the blade yielded the flow properties. During the stability part, the blade operated at a tip speed of −100 mm/s (anti-clockwise) with a helix angle of 5°. For the variable flow rate zone, the tip speed was varied as −100, −70, −40 and −10 mm/s for test cycles eight, nine, ten and eleven, respectively, with the same helix angle of 5°. The upward speed remained constant at 40 mm/s, with a helix angle of −5°, throughout the experiment. Before all tests, a conditioning cycle was performed, which involves the downward and then upward movement of the blade through the powder bed to gently slice the powder and provide a reproducible, uniform and low-stress packing state, allowing an objective comparison of samples. For each sample in this study, three tests were run to ensure consistency of the results. The flow characteristics of the samples were studied by analyzing the variation in the flow energy as well as by measuring the basic flowability and the specific energy (SE). In addition, the conditioned bulk density (CBD) of the samples was examined to study the powder bed density.

The basic flowability, defined as the ability of the powder to flow when forced, is quantified as the basic flow energy (BFE). The BFE represents the energy required for the rotation of the blade during the seventh test cycle (BFE = E_test7,down) of the stable tip-speed part. The SE shows the energy required to establish a particular flow pattern in a precise volume of conditioned powder and is defined as the average energy of the upward blade rotation for the 7th and 8th test cycles divided by the mass of the remaining powder in the vessel (Equation (2)):

$SE = \frac{E_{test7,up} + E_{test8,up}}{2\, m_{split}}$ (2)

where m_split is the mass of the powder after the excess powder is removed. By gently lifting the powder, the upward motion of the blade generates a low-stress, unconfined flow mode in the powder.

The CBD describes the packing state, or the density of the powder in its reference state. In order to measure the CBD, each powder system was gently poured into a 25 mL volume-splitting cylindrical vessel with a diameter of 25 mm. The conditioning process was performed with a conditioning blade, which slices the powder bed to remove the excess air and create a uniform powder bed with a low-stress packing state. After conditioning, the vessel was split in order to remove the excess mass of powder, so that the remaining powder had a volume of 25 mL. For each sample, three measurements were performed, and the average value was reported as the CBD, based on Equation (3):

$CBD = \frac{m_{split}}{v_{split}}$ (3)

in which v_split signifies the volume of the powder (the vessel volume) after the excess powder is removed.

Results and Discussion

3.1. XRD Analysis: Plastic Deformation and Phase Formation

Figure 2a presents the XRD patterns of the starting host and guest powders, as well as of the developed composite powder systems subjected to regular mixing and ball milling for different mixing times. The diffraction peaks of Ti64 in the R6 system had almost the same position and intensity as those of the starting Ti64. However, the B2 and B6 systems exhibited Ti64 diffraction peaks with decreased intensity and increased width, and this phenomenon was more pronounced for the B6 sample than for the B2 system.
Moreover, a close examination of the Ti64 peaks in the ball-milled systems revealed that the severe plastic deformation induced by the ball-milling process led to a shift in the peaks' positions due to structural changes such as crystallite refinement and the accumulation of micro-strain [64,65]. The lattice micro-strain of the Ti64 constituent in the composite powder systems was determined using the standard Williamson-Hall analysis as follows [66]:

$\beta \cos\theta = \frac{k\lambda}{t} + 4\varepsilon \sin\theta$ (4)

where k is the shape factor (0.9), λ represents the wavelength of the X-ray (1.5406 Å), θ signifies the diffraction angle, t is the effective crystallite size, β is the full width at half maximum of the XRD peak, and ε is the micro-strain. By constructing a linear plot of (β cos θ) against (4 sin θ), the slope gives the strain (ε). According to the micro-strain results provided in Figure 2b, the B2 and B6 systems showed increased lattice strain compared to the regularly mixed feedstocks. Also, longer milling times led to higher lattice strain (B6 compared to B2) due to the higher levels of plastic deformation imparted to the powder particles.

Referring to Figure 2a, all the peaks obtained for the composite powder feedstocks corresponded to those for the Ti64 and B4C powders, owing to two probable scenarios: (i) no in-situ reaction was activated in the system during the applied range of mixing times, or (ii) if formed, the amount of in-situ synthesized phases is below the detection limit of the XRD analysis. As indicated in Figure 2a, the intensity of the B4C peaks in the R6 composite powder system decreased compared to those for the starting B4C powder. When employing the ball-milling method, the intensity of these weak peaks decreased further for the B2 sample and finally disappeared in the B6 system.

Microstructural Characterization

Figure 3 presents SEM micrographs of the starting host Ti64 and guest B4C powders used in this research. As observed in Figure 3a,b, the starting host powder particles had an almost spherical morphology and a very smooth surface, both of which are characteristics of gas-atomized Ti64 powders [17]. However, the B4C particles exhibited an irregular morphology (Figure 3c). Despite their smooth surface, some of the Ti64 particles have satellites. These satellites form when finer solidified particles stick to the molten or semi-molten surface of coarser ones as a result of in-flight collisions before the solidification of the coarser molten droplets [67]. The cross-sectional SEM micrographs of the starting Ti64 powder, as well as of the B2 and B6 powder systems, are presented in Figure 6.

As can be observed in Figure 4, the Ti64 host particles maintained their spherical morphology when employing the regular mixing method, even with mixing times as long as 6 h (R6). However, even after a long mixing time of 6 h (R6), many of the guest particles were not attached to the host powder particles (Figure 4e), meaning that noticeably longer mixing times may still be required to provide more guest-to-host attachment.
The guest particles that are not attached to the host powder particles tend to form agglomerates (as depicted in Figure 4a,c,e). Despite this fact, the regular mixing method has been adopted in numerous studies to produce composite powders due to its relative simplicity [30][31][32]68,69]. Microstructural Characterization Depending on the employed mixing time, the host powder particles experienced different levels of plastic deformation during the ball-milling process (Figures 5 and 6). At a relatively short milling time of 1 to 2 h, the host power particles preserved their spherical shape (Figures 5a and 6b,c). Higher amounts of plastic deformation imparted to the system by a longer milling time of 3 h (B3) resulted in some spherical to quasi-spherical/irregular shape change (Figure 5c). Also, due to the extended time that the hard guest particles were hitting their surface, the enhanced milling time increased the surface roughness of the host powder particles. In the prolonged milling time of 6 h (B6), the desired spherical shape of the host particles altered to a flattened/irregular shape (Figures 5e,f and 6d-f). Microstructural observations also revealed the cold-welding of the ductile host powder particles during the ball milling process. Longer mixing times were found to cause intensified cold-welding in the applied range of ball milling time (Figures 5 and 6). Even at relatively short mixing time of 1 h, a significant guest-to-host attachment was obtained in the ball milling process, while still a few free B 4 C particles could be observed in the composite powder (Figure 5a,b). Increasing the mixing time to 3 h eliminated the non-attached B 4 C particles and led to the host particles fully decorated by the guest ones (Figure 5c,d). Further enhancement of the mixing time to 6 h caused the embedment of the guest particles into the host powder particles (Figures5e,f and 6d-f). The observed shape change and the agglomeration of the particles both observed at relatively long milling times are known as the main issues limiting the application of such composite powder systems in PBF-AM processes [29,36]. Sieving has been suggested as one of the strategies that could be employed to tailor the particle size in the powder systems subjected to the extended milling times [70]. While successful with particle size, this technique does not control the particle shape. Therefore, depending on the ductility of the host powder particles, appropriate milling times need to be found for each system to control the final shape of the particles. Figure 7 presents the results of particle size distribution for the starting Ti64 powder as well as the developed 5 wt.% B 4 C/Ti64 powder feedstocks prepared by the regular mixing and ball-milling methods for 2 and 6 h of mixing. As observed in Figure 7b,c, the regularly mixed powder systems have a bimodal size distribution, while the ball-milled composite powders show a mono-modal size distribution (Figure 7d,e). This can be attributed to the non-attached B 4 C particles in the regularly mixed case as opposed to the full attachment of the guest particles to the host powder particles in the ball-milled powder systems. It is worth noting that even a relatively long mixing time of 6 h in the regular mixing method was not successful in full guest-to-host attachment (Figure 4e,f). However, a relatively short mixing time of 1 h in the ball-milling method led to the B 4 C particles being well-attached to the host powder particles (Figure 5a,b). 
The longer milling times provided attachment of more guest particles to the surface of the host particles, eliminating the free guest particles in the composite powder (Figure 5c,d). Application of still longer milling times (B6) resulted in the embedment of the B4C particles into the host powder particles (Figures 5e,f and 6f). The severe weakening (or even disappearance) of the B4C diffraction peaks in this system (B6 powder in Figure 2a) may be attributed to their embedment into the ductile host powder particles.

The D10, D50, and D90 of the powder systems, derived from the cumulative frequency shown in Figure 7, are provided in Table 2. Referring to Figure 7 and Table 2, the regularly mixed composite powders showed almost the same particle size and particle size distribution as the starting Ti64 powder. The slight deviation of the D10 and D50 of the R2 and R6 systems from those of Ti64 is caused by the presence of free guest B4C particles with a significantly smaller particle size than the host Ti64 particles (Figure 4). Due to its non-equilibrium nature, the ball-milling process involves persistent deformation, cold-welding, and fracture of powder particles [71,72]. The size of the powder particles subjected to ball milling is determined by the competition between two major mechanisms, namely cold-welding and fracture [71]. While the cold-welding mechanism facilitates the formation of larger-sized particles through the attachment of host powder particles, the fracture mechanism favors a decrease in particle size. Hence, the refining or coarsening of powder particles during ball milling depends on whether the cold-welding or the fracture mechanism is dominant [73]. For the applied range of milling times, the particle size showed an ascending trend with increasing mixing time as a result of the cold-welding mechanism being dominant (Figures 5e and 6b,c,e). For instance, the D90 of the B2 and B6 samples is 12% and 35% higher than that of the Ti64 powder, respectively. The particle coarsening in the ball-milled composite samples is caused by the guest-to-host attachment as well as by the cold-welding of host particles. The decoration of host particles is the dominant factor in increasing the particle size in the B2 sample, due to the limited cold-welding caused by the relatively short mixing time (Figures 5 and 6). However, the significant cold-welding induced by the prolonged mixing time in the B6 sample is the main factor governing the particle coarsening, since the guest particles are embedded into the host Ti64 powder (not decorating the host particles). Based on the D90 of the B6 sample (Table 2) and Figure 7, 10% of the particles have sizes in the range of 62-275 µm.
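As an aside, here is a minimal sketch (with made-up numbers, not the Camsizer measurements) of how D10/D50/D90 values can be read off a cumulative size distribution such as the one in Figure 7:

```python
# Derive D10/D50/D90 from a cumulative size distribution by inverting the
# curve with linear interpolation. The data below are hypothetical.
import numpy as np

sizes = np.array([5, 10, 20, 30, 40, 50, 62, 80, 120, 275], dtype=float)   # um
cum_pct = np.array([1, 4, 15, 35, 60, 80, 90, 95, 99, 100], dtype=float)   # %

for q in (10, 50, 90):
    d = np.interp(q, cum_pct, sizes)  # size at which q% of particles are finer
    print(f"D{q} = {d:.1f} um")
```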
Figure 8 shows the sphericity of the powder particles with respect to their size for the starting Ti64 powder as well as the developed composite powders. In the regular mixing case shown in Figure 8a, the sphericity of the host powder particles is the same as that of the starting Ti64 particles (18-50 µm). This can be ascribed to the absence of plastic deformation in the regular mixing method as well as the limited attachment of the guest B4C particles to the host ones (Figure 4). The non-attached B4C particles in the R2 and R6 samples showed particle sizes in the range of 3-18 µm, reflecting the formation of agglomerated guest particles (Figure 4). The relatively low sphericity of these agglomerates originates from their irregular shape (Figure 3c). According to Figure 8b, three different zones can be defined in the sphericity-particle-size curves for the B2 and B6 samples. Zones (I), (II), and (III) refer to particle sizes in the ranges of 5-18, 18-65, and >65 µm for B2, and 10-25, 25-80, and >80 µm for B6, respectively. Since the B2 and B6 samples are free from non-attached B4C particles, Zone (I) signifies fractured host powder particles showing lower sphericity than the starting Ti64 particles. In Zone (II), the deformation and cold-welding, as well as the decoration of the host particles by the guest ones, lead to a decreased sphericity compared to the Ti64 powder. Since the short milling time of 2 h resulted in limited cold-welding, the decoration and deformation of the host particles are believed to be the dominant factors, slightly decreasing the sphericity of the B2 sample. According to Figures 5a,b and 6b,c, most of the host powder particles preserved their spherical shape at short milling times. However, due to the embedment of the guest particles into the host ones in the B6 sample (absence of decoration), deformation and cold-welding of the host particles are responsible for sphericities lower than those of the B2 sample. Zone (III), which is mostly visible in the B6 sample, represents the cold-welded quasi-spherical/irregular-shape powder particles (agglomerates) with sizes much larger than the starting Ti64 particles (Figures 5e,f and 6d-f). The sphericity of these agglomerates decreases with increasing size.
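For readers reproducing a zone analysis like that of Figure 8, a sketch of how sphericity-versus-size data can be binned is given below. The circularity definition 4*pi*A/P^2 is one common image-analysis proxy; the instrument used here may define sphericity differently, and the zone edges and demo values are assumptions.

```python
import numpy as np

def circularity(area, perimeter):
    """2D circularity 4*pi*A/P^2: 1 for a perfect circle, <1 otherwise.
    A common image-analysis proxy for particle sphericity."""
    return 4.0 * np.pi * np.asarray(area) / np.asarray(perimeter) ** 2

def zone_means(diam_um, sph, edges_um):
    """Bin sphericity by particle size, mimicking the zone analysis of Fig. 8.
    edges_um, e.g. (5, 18, 65), splits the data into bins: <5 um,
    Zone I (5-18), Zone II (18-65), and Zone III (>65) for B2."""
    diam = np.asarray(diam_um)
    sph = np.asarray(sph)
    zones = np.digitize(diam, edges_um)
    return {int(z): float(sph[zones == z].mean()) for z in np.unique(zones)}

# Toy demo with made-up particles (not measured data)
d = np.array([8, 12, 30, 45, 70, 90])            # diameters, um
s = np.array([0.6, 0.65, 0.9, 0.85, 0.7, 0.6])   # sphericities
print(zone_means(d, s, edges_um=(5, 18, 65)))
```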
Flowability

As can be observed in Figure 9a,b, all of the developed composite powder systems showed lower flowability (higher BFE and SE) compared to the host powder (Ti64). The flow response of the composite powders was found to depend on the blade movement direction. In the downward movement (BFE), the ball-milled composite powders exhibited higher flow energy (lower flowability) than the regularly mixed composite systems (Figure 9b). However, considering the upward movement of the blade (SE), the ball-milled powders showed better flowability (Figure 9a). The major difference between the BFE and SE is that the flow of the powder is confined in the former case due to the effect of the bottom of the vessel. Since the recoater interacts with the powder in an unconfined state during the powder layer deposition process, the SE is a better representative of the powder flow in PBF-AM processes.
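Since BFE and SE are central to the discussion that follows, a rough sketch of how such traverse energies can be obtained from blade force/torque traces is shown below. This is a simplified form (commercial rheometer software also accounts for the blade helix angle in the torque term); the function signature, the blade radius, and the variable names are assumptions, not the instrument's actual procedure.

```python
import numpy as np

def traverse_energy(force_N, torque_Nm, height_m, radius_m=0.024):
    """Approximate blade flow energy as the work integral along a traverse:
    W = integral(F dz) + integral((T / R) dz), trapezoidal rule.
    radius_m is an assumed effective blade radius."""
    f = np.asarray(force_N)
    t = np.asarray(torque_Nm)
    z = np.asarray(height_m)
    dz = np.abs(np.diff(z))
    f_mid = 0.5 * (f[1:] + f[:-1])
    t_mid = 0.5 * (t[1:] + t[:-1])
    return float(np.sum((f_mid + t_mid / radius_m) * dz))  # joules

# BFE: energy of the downward (confined) traverse
# SE : energy of the upward (unconfined) traverse per gram of powder
# bfe = traverse_energy(F_down, T_down, z_down)
# se  = traverse_energy(F_up, T_up, z_up) / sample_mass_g
```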
As the microstructural and sphericity characterizations revealed (Figures 5, 6 and 8), the plastic deformation induced by longer milling times led to some degree of shape change of the host powder particles from spherical to quasi-spherical/moderately flattened. In addition, the surface roughness of the host particles increased. The surface roughening (not the shape change) of the Ti64 particles in the ball-milled composite powders subjected to relatively short mixing times (1-3 h) can be attributed mainly to the guest B4C particles hitting their surfaces, for two reasons:
i. The guest B4C powder particles are noticeably harder than the host Ti64 powder particles [22,74-76]. Accordingly, the noticeably harder B4C particles have a great potential to scratch, punch, and roughen the surface of the softer Ti64 particles. Since the host particles have the same hardness as each other, host-host inter-particle collisions should not affect the surface roughness of the host particles.
ii. As the microstructural observations of the starting powders revealed (Figure 3), the guest B4C particles have an irregular shape as opposed to the spherical shape of the host Ti64 particles. The collision of irregular-shaped B4C particles with spherical Ti64 particles has a higher chance of roughening the Ti64 surface than host-host collisions.
It is worth noting that the guest B4C particles affect the surface roughness of the host Ti64 particles only when the metallic balls provide sufficiently energetic guest-host collisions (ball milling). As the microstructural characterizations revealed, regular mixing did not affect the surface roughness of the Ti64 particles even at relatively long mixing times such as 6 h (Figure 4). The increase in the SE (decreased flowability) of the B2 feedstock compared to the Ti64 reference sample may be attributed to the contribution of two different factors: (i) the change in the particle morphology [77], and (ii) the decoration of the host particles by the guest particles. The slight spherical to quasi-spherical shape change and the surface roughening of the host particles seem not to play major roles in the reduction of powder flowability of the B2 sample. Therefore, the considerable increase in the SE for this system could be related to the presence of the decorating guest powder particles. The corresponding suggested mechanism is schematically illustrated in Figure 10. The guest-decorated host powder particles may experience two different interactions during their flow. While the "mechanical interlocking" mechanism decreases the flowability by resisting the free flow of the powder particles relative to each other (Figure 10b), the "contact surface reduction" mechanism may improve the flowability of the system [78] (Figure 10c). Compared to the non-decorated host powder system, the inter-particle tangling caused by the decorating guest particles leads to enhanced flow resistance. On the other hand, if tangles do not form, the presence of the decorating guest particles lowers the contact surface area required for the movement of a guest-decorated host particle (particle 1) from position (I) to position (II) relative to another particle assumed to be fixed (particle 2), as shown in Figure 10d. By reducing the inter-particle friction and adhesion force, this mechanism can improve the flowability [78-80].
As mentioned earlier, although the slight change in the morphology of the host powder particles in the B2 case could have adverse effects on the flowability, these factors do not seem to be predominant in the flowability reduction, given their negligible deviation from the starting host powder particles. Accordingly, the significant decrease in the flowability of the B2 powder system could be due to the dominance of the "mechanical interlocking" mechanism over the "contact surface reduction" mechanism. The contribution of the "mechanical interlocking" and "contact surface reduction" mechanisms to the overall powder flowability is strongly dependent on the guest particle size. When nano-scale guest powder particles are deposited on the surface of primarily cohesive host powder particles, the artificially generated nano-scale roughness has been reported to enhance the flowability [79,81-83]. Based on the mechanism proposed in Figure 10, this could be attributed to the "contact surface reduction" combined with the lack of active "mechanical interlocking" sites in such composite powder systems, which reduces the chance of particle entangling. As indicated in Figure 9a, the B6 sample showed a lower SE (better flowability) than the B2 one.
In order to explain this finding, the mechanisms influencing the flowability of the B6 sample need to be explored. Referring to Figure 11, the flowability of the B6 powder feedstock is determined by the contribution of three mechanisms. Although the significant spherical to flattened/irregular shape change as well as the interlocking of the guest-embedded host particles in such a system act to decrease the flowability [84-86], the increased particle size caused by the dominance of cold-welding over the fracture mechanism may reduce the effective surface area of the powder particles and, consequently, favor higher flowability [86] (Figure 7e and Table 2). Therefore, the overall flow behavior of such composite powder systems is governed by the competition among these mechanisms. The slight dominance of the interlocking and shape-change mechanisms over the surface-area reduction in the B6 system is believed to be the reason for the small increment in its SE compared to the Ti64 case (Figure 11). Referring to Figure 9a, the regularly mixed powder systems showed SEs about twice that of the host particles. This observation can be explained by the microstructural characterization of the powder systems. As shown in Figures 4 and 8, the regularly mixed powder feedstocks had guest particles that were almost entirely non-attached to the completely spherical (non-deformed) host particles. Accordingly, neither a host particle shape change nor decoration-induced tangling can be the reason behind the lower flowability of the regularly mixed powders. Instead, the enhanced inter-particle friction caused by the presence of fine non-attached guest powder particles with an extremely high surface-to-volume ratio can explain the drastic increase in the SE of these samples.

Conditioned Bulk Density (CBD)

The packing density of the powder, as the starting material in PBF-AM processes, has a significant influence on the quality of the produced parts. The density of the powders in this study was analyzed via their conditioned bulk density (CBD). Referring to Figure 9c, regardless of the mixing method and time, the incorporation of the guest powder particles to form composite powder feedstocks decreased the CBD compared to the starting host powder (Ti64). A portion of this decrease is due to the addition of a less-dense material (B4C) to the Ti64 powder. Moreover, the increased inter-particle friction arising from the irregular shape of the B4C particles, combined with the broad particle size distributions, acts to reduce the packing density of the composite powder feedstocks compared to the Ti64 powder [83,87,88]. In general, lower friction among the powder particles allows denser random loose packing and hence a higher CBD. In the case of the starting host powder, only the host/host inter-particle friction determines the powder density. However, the introduction of the guest particles into the host powder leads to new friction sources, namely host/guest and guest/guest inter-particle friction, which could be responsible for the lower CBD of the composite powders compared to the Ti64 host powder.
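Part of the CBD decrease is purely compositional. A quick check with the inverse rule of mixtures, using nominal handbook densities (assumed here, not taken from the paper), shows that adding 5 wt.% B4C lowers the theoretical fully-dense value by only a few percent, so friction and packing effects must dominate the measured drop:

```python
def mixture_density(weight_fracs, densities):
    """Theoretical (fully dense) density of a powder blend via the
    inverse rule of mixtures: 1/rho = sum(w_i / rho_i)."""
    return 1.0 / sum(w / r for w, r in zip(weight_fracs, densities))

# Nominal handbook densities in g/cm^3 -- approximate, assumed values
rho_ti64, rho_b4c = 4.43, 2.52
rho_mix = mixture_density([0.95, 0.05], [rho_ti64, rho_b4c])
print(f"5 wt.% B4C/Ti64 theoretical density ~ {rho_mix:.2f} g/cm^3")
# ~4.27 g/cm^3, i.e. only a few percent below Ti64 alone; the measured
# 18-24% CBD drop is much larger, consistent with friction/packing effects
# dominating over the compositional effect.
```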
The presence of free (non-attached) guest particles in the composite powder introduces guest/host and guest/guest inter-particle friction in addition to the host/host friction. According to Figure 9c, the higher CBD of the R6 sample compared to the R2 one is due to the lower amount of free B4C particles in the system (Figure 4). Attachment of the guest particles to the host powder particles eliminates the host/guest inter-particle friction and also decreases the host/host inter-particle friction by reducing the contact surface area between host particles. The morphology of the powder particles also has a significant impact on the packing density of the powder bed and, consequently, on the density of the final component [89,90]. While the full decoration of the host particles by the guest particles in the B2 system promotes the attainment of a higher CBD, the slight deviation of the host particles from a spherical shape adversely influences the density (Figures 6b,c and 8b). Therefore, the B2 case reached the same CBD as the R6 system.
The significant morphological change of the host particles in the B6 sample could be the main reason behind its low CBD. The flattened/irregular shape of the powder particles in the B6 system results in poor packing due to the elevated inter-particle friction [87]. It is also worth noting that the formation of agglomerated guest particles in the regularly mixed composite systems adversely affects their ability to occupy the interstices between host particles.

Material Loss

The mechanical mixing processes involve some material loss due to the cold-welding of the powder particles to the balls and/or jar. Therefore, the amounts of starting and final powder need to be quantified. Figure 12 presents the variation in the powder mass for both the regularly mixed and ball-milled composite powders as a function of the mixing time. The material loss for both the regularly mixed and ball-milled cases is negligible (<1%). However, regular mixing resulted in a lower material loss than the ball-milling method due to the absence of balls.

Figure 12. Material loss as a function of the mixing time for the regularly mixed and ball-milled 5 wt.% B4C/Ti-6Al-4V composite powders. The starting powder mixture was 100 g, and the ball-to-powder ratio was 5:1.
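The material-loss bookkeeping is simple enough to state as a one-liner; the 99.4 g final mass below is illustrative, not a measured value (only the 100 g charge and the 5:1 ball-to-powder ratio come from the caption):

```python
def material_loss_pct(mass_in_g, mass_out_g):
    """Percentage of powder lost to cold-welding on the balls/jar walls."""
    return 100.0 * (mass_in_g - mass_out_g) / mass_in_g

powder_g = 100.0                      # starting powder charge (caption)
ball_to_powder = 5.0                  # 5:1 ball-to-powder mass ratio (caption)
balls_g = ball_to_powder * powder_g   # 500 g of milling balls
print(material_loss_pct(100.0, 99.4))  # 0.6 %, within the <1 % reported range
```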
Selection of the Best Possible Composite Powder

The ideal mixed powder system for the PBF-AM processing of MMCs needs to have: (i) no free guest powder particles, with the guest particles uniformly and homogeneously distributed throughout the system; (ii) host powder particles preserving their desired spherical shape; and (iii) the same flow behavior and apparent packing density as the starting host powder constituent. Although it preserves the spherical shape of the host particles, the inadequate guest-host attachment associated with regular mixing results in heterogeneous final MMC parts with an improper distribution of the guest particles (or in-situ formed phases) or even their agglomeration. In addition, the regularly mixed composite powders showed noticeably lower flowability than the ball-milled composite systems. These issues restrict the implementation of the regularly mixed (R2 and R6) powder feedstocks in the PBF-AM of MMCs. Ball milling of the powders for relatively long milling times (B6) significantly improves the distribution of the guest particles throughout the system through their embedment into the host powder particles. However, the significant spherical to flattened/irregular shape change and the particle coarsening are the main drawbacks of such powder feedstocks (Figures 7 and 8 and Table 2). While the particle coarsening issue can be solved by sieving, this strategy may not be cost- and time-effective, especially for high-priced materials. When employing shorter milling times (2 h), the composite powder system showed almost spherical host particles fully decorated by the guest powder particles, meeting some of the requirements mentioned for the ideal composite powder feedstock (best possible case). Although such a powder ensures homogeneous MMC parts due to the proper host/guest attachment and the lack of non-attached (free) agglomerated guest particles, the decorating guest particles act as obstacles to the free flow of the host particles and, consequently, sacrifice the flowability. Proper selection of the recoater speed in PBF-AM processes could be a strategy to mitigate this issue when using such composite powder feedstocks [91]. Based on the above discussion, the composite powder system prepared by a relatively short milling time of 2 h (B2) is suggested as the best possible case for PBF-AM processes.

Conclusions

The characteristics of the 5 wt.% B4C (guest)/Ti-6Al-4V (host) mixed powder systems were studied from a powder bed fusion (PBF) additive manufacturing (AM) perspective. For this purpose, the regular mixing and ball-milling methods were employed with a wide range of milling times (1-6 h) to produce composite powder feedstocks. The developed powders were examined using microstructural characterization, phase formation, particle size, size distribution, sphericity, apparent density, and flow behavior analyses.
Moreover, the mechanisms that play a role in the flowability of the mixed powder systems were analyzed based on the microstructural observations and the flow measurement results. The main outcomes can be outlined as follows:
1. With regular mixing, the shape of the host powder particles remained unchanged up to 6 h of mixing. The ball-milling method changed the shape of the host powder particles from spherical to quasi-spherical and then to a flattened/irregular shape with increasing milling time, resulting in decreased particle sphericity compared to the starting host particles.
2. The regular mixing method did not provide acceptable attachment of the guest B4C particles to the host particles. In contrast, milling times as short as 2 h in the ball-milling case fully decorated the host particles with guest particles. A longer milling time (6 h) led to the guest particles being embedded in the severely deformed host particles.
3. Although the basic flow energy (BFE) results contradict the specific energy (SE) measurements, the SE is believed to be a better representative of the powder layer deposition during the PBF-AM process due to the unconfined, low-stress state of the powder.
4. Although highly dependent on the mixing process variables, the flowability of the developed composite powders was lower than that of the reference Ti-6Al-4V powder. The regularly mixed and ball-milled composite powders exhibited ~110% and 24-57% increases in SE compared to the Ti-6Al-4V powder, respectively.
5. The ball-milled feedstocks showed lower SE (better flowability) than the regularly mixed powders. The flow behavior of the developed composite feedstocks was discussed based on the underlying mechanisms.
6. The produced composite powder systems showed an 18-24% decrease in density compared to the reference Ti-6Al-4V powder.
7. The composite powder benefiting from fully decorated spherical host particles is suggested as the best possible mechanically processed feedstock for PBF-AM processes. The relatively low flowability of this powder system should be considered when defining the recoater speed in PBF-AM processes.
On the hydrodynamic attractor of Yang-Mills plasma

There is mounting evidence suggesting that relativistic hydrodynamics becomes relevant for the physics of quark-gluon plasma as the result of nonhydrodynamic modes decaying to an attractor apparent even when the system is far from local equilibrium. Here we determine this attractor for Bjorken flow in N=4 supersymmetric Yang-Mills theory using Borel summation of the gradient expansion of the expectation value of the energy-momentum tensor. By comparing the result to numerical simulations of the flow based on the AdS/CFT correspondence we show that it provides an accurate and unambiguous approximation of the hydrodynamic attractor in this system. This development has important implications for the formulation of effective theories of hydrodynamics.

Introduction

Heavy-ion collision experiments and their phenomenological description have led to the realization that relativistic hydrodynamics works very well rather far outside its traditionally understood domain of validity. Variants of Müller-Israel-Stewart (MIS) theory [1,2,3] have successfully been applied in rather extreme conditions, which could hardly be assumed to be close to local equilibrium. Furthermore, model calculations exist where it is possible to study the emergence of universal, hydrodynamic behaviour and test to what extent an effective description in terms of hydrodynamics can match microscopic results [4]. Such calculations were initially carried out in N = 4 SYM using the AdS/CFT correspondence [5,6,7], but similar studies have since also been performed in models of kinetic theory [8,9,10]. The conclusion from these investigations is that the domain of validity of a hydrodynamic description is delimited by the decay of nonhydrodynamic modes [5,6,11,12]. The outcome of this transition to hydrodynamics ("hydronization") is that the system reaches a hydrodynamic attractor [13] which governs its subsequent evolution toward equilibrium. This attractor is a special solution to which generic histories decay exponentially, and do so well before local equilibrium sets in. It incorporates all orders of the hydrodynamic gradient expansion, and at sufficiently late times coincides with the predictions of relativistic Navier-Stokes theory. The existence of an attractor in this sense is a critically important issue for hydrodynamics, because it defines its very meaning. It has conceptual as well as practical implications for the formulation of hydrodynamic theories in general as well as for their application to the physics of quark-gluon plasma. Attractor behaviour was first identified explicitly in the differential equations of hydrodynamics [13,14]. An outstanding problem is the determination of such attractors at the microscopic level [15,4]. The first calculations of this type were described by Romatschke [15], who found approximate attractor solutions in the context of kinetic theory and N = 4 SYM by scanning for the corresponding initial conditions. The purpose of this Letter is to argue that the Borel sum of the hydrodynamic gradient expansion provides a direct way of estimating the attractor. While at late times this calculation clearly must give the correct result (which coincides with the prediction of Navier-Stokes hydrodynamics), it is not obvious a priori that this calculation gives an accurate estimate at earlier times.
We will however show explicitly that the result of Borel summation does indeed act as an attractor for histories of Bjorken flow simulated using techniques based on the AdS/CFT correspondence. This should be viewed in the context of the idea that higher orders of the gradient expansion may be relevant for real-world physics [16,17,18,19]. An important point is that the hydrodynamic gradient expansion is the leading element of a transseries [13], and in general the higher order elements ("instanton sectors") play an important role in defining the summation properly. These transseries sectors involve integration constants which need to be fixed. However, their contributions are exponentially suppressed and it is tempting to ignore them as a first approximation. Such an approach will definitely fail at sufficiently early times (before the exponential suppression sets in). However, we will see that it works fine for τT > 0.3, and this is enough to see that the result of the Borel sum acts as an attractor well before the Navier-Stokes approximation to hydrodynamics becomes accurate at τT ≈ 0.7 [7]. A critical issue for Borel summation is the location of singularities of the analytic continuation of the Borel transform. These singularities reflect the spectrum of nonhydrodynamic modes -both at the microscopic level [18] and in hydrodynamics [13,14]. An important testing ground for the feasibility and robustness of Borel summation of the gradient series of N = 4 SYM is the hydrodynamic theory proposed in [20], which we will refer to as HJSW. This theory extends Navier-Stokes hydrodynamics by adding degrees of freedom which mimic the least-damped nonhydrodynamic modes of N = 4 SYM plasma (known from calculations of quasinormal modes of black branes [21]). This results in the same leading singularities [14] as those identified at the microscopic level in Ref. [18]. This should be contrasted with BRSSS hydrodynamics [22], which instead involves only purely decaying modes. In the case of BRSSS theory one cannot ignore the transseries sectors even as an approximation, because the analytically-continued Borel transform of the hydrodynamic series has branchpoint singularities on the real axis (reflecting the purely-decaying MIS nonhydrodynamic mode) and this leads to a complex summation ambiguity. The addition of transseries sectors (which are constrained by resurgence relations [13,14,23]) resolves this ambiguity, but requires an integration constant (the transseries parameter) to be set correctly by comparing the result of the summation to the numerical calculation of the attractor. Luckily, this issue does not arise in N = 4 SYM, nor in HJSW hydrodynamics, because in these cases singularities of the analytic continuation of the Borel transform occur off the real axis. Thus, omitting the instanton sectors is a reasonable first approximation, which is what we focus on here. As a way of determining the range of proper-time where the Borel sum can be expected to give an accurate estimate of the attractor we first calculate the Borel sum of the gradient expansion in the case of HJSW hydrodynamics, where it is easy to check the validity of the answer. The result is unique, unambiguous, and coincides (even at rather early times) with the attractor determined directly from the hydrodynamic equations. This sets the stage for the main theme of this Letter: the Borel summation of the gradient series of N = 4 SYM. 
This is technically no more challenging than the calculation for HJSW theory, but its significance is that it provides an example of a hydrodynamic attractor obtained directly from a microscopic calculation. This result can only be fully appreciated by inspecting the behaviour of numerically simulated histories of boost-invariant expansion in N = 4 SYM. A very important point to note is that while the attractor coincides with first-order hydrodynamics at late times, it turns out to be quite distinct from it even at moderate times. This has implications of a foundational nature for relativistic hydrodynamics. A fuller discussion of this result and its ramifications can be found in the concluding section.

Bjorken flow

Throughout this paper we work with Bjorken flow [24], which imposes powerful simplifying symmetry constraints. We use proper time-rapidity coordinates (τ, Y) related to Minkowski lab-frame coordinates (t, z) by t = τ cosh Y and z = τ sinh Y, where z is aligned along the collision axis. A system undergoing Bjorken flow has eigenvalues of the expectation value of the energy-momentum tensor which are functions of the proper time τ alone. In a conformal theory, the conditions of tracelessness and conservation determine the longitudinal and transverse pressures in terms of the energy density E [25]:

P_L = -E - τ ∂_τ E,    (2)
P_T = E + (τ/2) ∂_τ E.    (3)

The departure of these quantities from the equilibrium pressure at the same energy density, P ≡ E/3, is a measure of how far a given state is from local equilibrium. This is conveniently captured by the pressure anisotropy A ≡ (P_T - P_L)/P, which we will study as a function not of the proper time τ, but of the dimensionless "clock variable" w ≡ T τ, where T is the effective temperature (defined as the temperature of the equilibrium state with the same energy density). It is critically important to compare states of the system at different values of this dimensionless variable if we wish to see the attractor behaviour which is of central interest here.

The hydrodynamic attractor in hydrodynamics

Hydrodynamic theories are described by sets of nonlinear partial differential equations. The key simplification brought by the assumption of Bjorken flow is that the equations of hydrodynamics reduce to ordinary differential equations. For example, the pressure anisotropy in conformal BRSSS theory satisfies a first-order nonlinear ordinary differential equation in w (Eq. (4); see Refs. [13,4]), in which the prime denotes a derivative with respect to w and the dimensionless constants C_η, C_τπ, C_λ1 are transport coefficients (whose values in the case of N = 4 SYM are known, see e.g. Ref. [4]). This equation is nonlinear, but it can be solved in powers of 1/w: this is the hydrodynamic gradient expansion, whose leading term reproduces the prediction of Navier-Stokes hydrodynamics. It also possesses an attractor, which can be determined numerically by setting initial conditions appropriately [13]. It is important to observe that the attractor becomes indistinguishable from the first-order truncation of the gradient series only for w > 0.7. For smaller values of w, the numerical solutions clearly decay to the attractor, not to the truncated gradient series. The pressure anisotropy in HJSW theory satisfies a second-order nonlinear ordinary differential equation, whose exact form can be found in Refs. [14,4], and a similar analysis leads to the numerical determination of its attractor solution (to which we shall return shortly). The point we wish to make at this juncture is that we cannot proceed in the same way in N = 4 SYM, because there we cannot write down a closed differential equation like Eq. (4).
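To make the attractor behaviour discussed above concrete, the following Python sketch evolves a linearized MIS-type toy equation in place of the full BRSSS equation (4), which is not reproduced here. The toy keeps two key features of the hydrodynamic description: the first-order (Navier-Stokes) behaviour A ≈ 8C_η/w and a nonhydrodynamic mode decaying as exp(-3w/(2C_τπ)); it is a stand-in under these stated assumptions, not the equation used in the text. The N = 4 SYM values C_η = 1/(4π) and C_τπ = (2 - ln 2)/(2π) are used.

```python
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

C_eta = 1 / (4 * np.pi)                 # eta/s for N = 4 SYM
C_taupi = (2 - np.log(2)) / (2 * np.pi)  # tau_pi * T for N = 4 SYM

def rhs(w, A):
    # Toy linear relaxation toward the Navier-Stokes value 8*C_eta/w,
    # with the MIS decay rate 3/(2*C_taupi). NOT the full Eq. (4):
    # the nonlinear BRSSS terms are deliberately omitted.
    return (3 / (2 * C_taupi)) * (8 * C_eta / w - A)

w_span = (0.1, 2.0)
w_eval = np.linspace(*w_span, 400)
for A0 in [0.0, 0.5, 1.0, 1.5, 2.0]:          # generic initial conditions
    sol = solve_ivp(rhs, w_span, [A0], t_eval=w_eval, rtol=1e-8)
    plt.plot(sol.t, sol.y[0], lw=0.8, color="gray")
plt.plot(w_eval, 8 * C_eta / w_eval, "r--", label="first-order hydro")
plt.xlabel("w = T*tau")
plt.ylabel("pressure anisotropy A")
plt.legend()
plt.show()
# All histories collapse exponentially onto a common curve well before
# they reach the first-order hydrodynamic line -- the attractor picture.
```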
To find the attractor in this case one has to find another way. The approach explored in this Letter is to sum the hydrodynamic gradient expansion, whose leading 240 coefficients were obtained using the AdS/CFT correspondence in Ref. [18]. In the following we discuss the properties of the series and the summation, using HJSW theory as a testing ground.

Large order behaviour

In any conformal theory the general form of the gradient expansion of the pressure anisotropy for Bjorken flow is (see, e.g., Ref. [4])

A(w) = Σ_{n=1}^{∞} a_n w^{-n}.    (5)

The coefficients a_n have been calculated to high order in N = 4 SYM [18], in kinetic theory [10,4], as well as in various hydrodynamic theories [13,14]. It is now well established that this series has a vanishing radius of convergence. In many cases, at large order n the coefficients grow in a way consistent with the Lipatov form [26]

a_n ∼ n!/A^n,    (6)

where A is a real parameter. This formula implies linear behaviour of the ratio of neighbouring coefficients, a_{n+1}/a_n ∼ n/A. For the case of the gradient series in BRSSS hydrodynamics this linear behaviour can be seen in the left-hand plot of Fig. 1. In the case of HJSW theory, however, the pattern is much more complex, as seen in the right-hand plot. The reason for this is that HJSW hydrodynamics, instead of a single, purely damped nonhydrodynamic mode, has a pair of modes with complex conjugate frequencies [14]. Furthermore, this signals that the hydrodynamic gradient expansion in this case is an element of a two-parameter transseries [27] (while in BRSSS hydrodynamics the transseries involves only one parameter). The analysis of Ref. [14] can be used to find the following approximate formula describing the leading large-n behaviour:

a_n ∼ (n!/A^n) cos[(n + β_R)φ - ψ + β_I log((n + β_R)/A)]    (7)

for some real numbers A, φ, ψ, β_R, β_I (the ψ appearing above is the phase of the Stokes constant of the transseries). Due to the oscillating factor this formula is not as useful as Eq. (6), but it does qualitatively capture the complex pattern in Fig. 1. Importantly, if one plots the ratio of coefficients of the gradient expansion of N = 4 SYM, calculated using the results in Ref. [18], one finds a picture very similar to the right-hand plot of Fig. 1. This happens because HJSW theory was constructed to reproduce the dominant nonhydrodynamic modes of N = 4 SYM, which results in the close similarity of the large-order behaviour. This makes it a useful testbed for assessing the utility of Borel summation in this context, as explained below.

The attractor from Borel summation

The Borel transform of the gradient series removes the dominant factorial growth of the expansion coefficients:

BA(ξ) = Σ_{n=1}^{∞} (a_n/n!) ξ^n.    (8)

This series will typically define an analytic function within a disc around the origin in the complex ξ plane. The Borel sum of the series is defined by the Laplace transform

A_B(w) = w ∫_C dξ e^{-wξ} B̃A(ξ),    (9)

where B̃A is the analytic continuation of the Borel transform (8) and C is a contour connecting 0 and ∞. The analytic continuation of BA(ξ) (performed using Padé approximants) necessarily contains singularities responsible for the vanishing radius of convergence of the original series. The singularities appearing in the cases of interest here have been discussed at length in the literature. For BRSSS theory one finds a branch point on the real axis [13], which introduces a complex ambiguity in the Borel summation, given by the difference in the values obtained for Eq. (9) by integrating above and below the cut.
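A minimal numerical implementation of this Borel-Padé procedure might look as follows. It assumes the conventions of Eqs. (8)-(9) as written above, uses SciPy's pade helper for the analytic continuation, and integrates along the positive real axis, which is legitimate only when the Padé poles stay off that axis (as for N = 4 SYM and HJSW, but not BRSSS). The function and variable names are my own, not taken from the paper.

```python
import numpy as np
from math import factorial
from scipy.integrate import quad
from scipy.interpolate import pade

def borel_pade_sum(a, w, m=None):
    """Borel-Pade summation of A(w) ~ sum_n a[n] * w**(-n).

    Builds the Borel transform coefficients a_n/n! (Eq. (8)), continues
    them with a near-diagonal Pade approximant, and evaluates the Laplace
    integral of Eq. (9) along the positive real axis.  For very high
    orders the factorials overflow float64; arbitrary precision
    (e.g. mpmath) is then needed.
    """
    a = np.asarray(a, dtype=float)
    b = a / np.array([factorial(n) for n in range(len(a))], dtype=float)
    p, q = pade(b, m if m is not None else (len(b) - 1) // 2)
    # A_B(w) = w*int_0^inf e^{-w xi} B(xi) dxi = int_0^inf e^{-t} B(t/w) dt
    val, _ = quad(lambda t: np.exp(-t) * p(t / w) / q(t / w),
                  0, np.inf, limit=200)
    return val

# Sanity check on the Euler series sum_n (-1)^n n! w^{-n}, whose Borel
# transform is 1/(1+xi) with its singularity safely off the positive axis:
a_demo = [(-1) ** n * factorial(n) for n in range(30)]
print(borel_pade_sum(a_demo, w=2.0))  # ~0.7227 = w e^w E1(w) at w = 2
```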
As stressed earlier, this complication does not arise in the case of N = 4 SYM [18] or HJSW hydrodynamics [14], where the branch points appear away from the real axis. This means that one can perform the integral in Eq. (9) by integrating over real values of ξ from zero to infinity. In practice, this integral has to be performed numerically for a set of values of w. To gauge the effectiveness of this method we begin with HJSW theory, which from this perspective offers qualitatively the same kind of challenge as N = 4 SYM. The series can be calculated numerically to essentially arbitrarily high order [14], but here we will use only the first 240 terms, to have a fair testing ground for the N = 4 SYM case, where the cost of calculating the coefficients is much higher, and at this time only 240 are available [18]. As is clear from Fig. 2, the Borel summation tracks the numerically determined attractor closely down to w ≈ 0.4, and is still quite reasonable at w ≈ 0.3.

The attractor of N = 4 SYM

The procedure described above can readily be applied to the hydrodynamic gradient expansion of N = 4 SYM using the results of Ref. [18], where the expansion coefficients of the energy density in powers of τ^{-2/3} were calculated up to order 240. Using equations (2) and (3), these results can be translated into coefficients of the pressure anisotropy (5). As mentioned earlier, their ratios qualitatively follow the pattern described by the approximate formula (7) (and seen in the lower plot in Fig. 1). These coefficients can be used to calculate the Borel transform and its analytic continuation (using a diagonal Padé approximant), which one can integrate numerically for a range of values of w exactly as described above for the case of HJSW theory. The result is reproduced quite accurately, for essentially all values of w > 0, by a simple rational function of w, denoted A_0 (Eq. (10)). The attractor determined by Borel summation as described above can be expected to be reliable down to w ≈ 0.3, but its utility is best judged by comparing it with the results of numerical holography simulations of Bjorken flow. A large number of such flow histories was studied in Ref. [7], following the earlier work of Refs. [6,28]. The distinguished role played by the attractor is most prominently visible by considering values of w for which it differs appreciably from the truncated gradient expansion. A selection of solutions which reach the attractor at such early times is plotted in Fig. 3, along with the rational fit A_0 given in Eq. (10). We see there the same kind of striking behaviour as seen at the level of hydrodynamics in Ref. [13]. In Ref. [7] the transition to hydrodynamics was defined in terms of the pressure anisotropy matching the truncated gradient expansion using the third-order result from Ref. [29]. Each numerical solution followed the hydrodynamic prediction at sufficiently late times. The threshold was found to lie in a range of values of w centered around w_H = 0.65, with a large pressure anisotropy A(w_H) ≈ 0.7. However, given the results presented here it is tempting to think of hydronization in terms of reaching the attractor, which implies an even smaller value of w_H and a correspondingly higher value of A(w_H). For most of the histories shown in Fig. 3 the pressure anisotropy at hydronization exceeds 100%, showing that this observable can exhibit universal behaviour even in the highly nonequilibrium regime.
It should however be stressed that this is not a claim concerning the behaviour of generic histories of the flow: the hydronization time depends on the initial conditions. The calculation described above leaves a number of open problems. One of them is the explicit computation of leading transseries coefficients in the lowest instanton sectors. Such a calculation would make it possible to extend the range where the summation can be trusted to lower values of w. It would also allow us to verify that the resurgence relations written down in [14] connecting coefficients of different transseries sectors are satisfied. Another interesting problem to pursue would be a calculation of the N = 4 SYM attractor directly in the holographic representation. An attempt of this kind was recently made by Romatschke [15], who tried to find the special initial condition corresponding to the attractor solution, which he was then able to estimate by evolution using the bulk Einstein equations. The results presented in that paper are in good agreement with those presented here for w > 0.4, while at lower values of w the method of Ref. [15] suggests that the true attractor may flatten out around w = 0.4 and for smaller values of w lies somewhat below what is seen in Fig. 3.

Conclusions

The goal of hydrodynamics is to mimic universal, late-time behaviour of systems tending toward equilibrium [4]. The BRSSS philosophy [22], which can be seen as an incarnation of the effective field theory paradigm, tells us to match leading terms of the gradient expansion of hydrodynamics with the corresponding terms calculated in the underlying microscopic theory. The developments of the past couple of years suggest that it could make sense to set a more ambitious goal: to try to reproduce the attractor of the underlying theory at the level of hydrodynamics. While the attractor matches low orders of the gradient series at sufficiently late times, earlier on it is different, and the difference depends on the parameters of the theory. Even for N = 4 SYM the attractor is quite distinct from first (or second order) hydrodynamics. For QCD plasma, which almost certainly has a larger relaxation time, this distinction should be even more pronounced. This could have important consequences for the interpretation of observables sensitive to early time dynamics. At this time it would be most useful and interesting to find tractable examples which relax some of the technical elements which we have relied on, such as boost invariance and conformal symmetry. An important point will be to understand which observables reveal attractor behaviour. Any progress should be of interest not only in the context of quark-gluon plasma, but also for other areas of physics [30,31].
Markov State Models: To Optimize or Not to Optimize

Markov state models (MSM) are a popular statistical method for analyzing the conformational dynamics of proteins, including protein folding. As with all statistical and machine learning (ML) models, choices must be made about the modeling pipeline that cannot be directly learned from the data. These choices, or hyperparameters, are often evaluated by expert judgment or, in the case of MSMs, by maximizing variational scores such as the VAMP-2 score. Modern ML and statistical pipelines often use automatic hyperparameter selection techniques ranging from the simple, choosing the best score from a random selection of hyperparameters, to the complex, optimization via, e.g., Bayesian optimization. In this work, we ask whether it is possible to automatically select MSM models this way by estimating and analyzing over 16,000,000 observations from over 280,000 estimated MSMs. We find that differences in hyperparameters can change the physical interpretation of the optimization objective, making automatic selection difficult. In addition, we find that enforcing conditions of equilibrium in the VAMP scores can result in inconsistent model selection. However, other parameters that specify the VAMP-2 score (lag time and number of relaxation processes scored) have only a negligible influence on model selection. We suggest that model observables and variational scores should be only a guide to model selection and that a full investigation of the MSM properties should be undertaken when selecting hyperparameters.

Model summaries

Table 1: Model summaries. A summary of the models discussed in the main text. The logistic transform of contact distances is specified in terms of the center, c, and steepness, s, as 'logit(dist.)' with '(c, s)' underneath, in units of Å and Å^-1.
- Chignolin model 1: the largest median t_2 after random-sampling optimisation.
- BBA model 2: the second largest median t_2 after random sampling.
- BBA model 3: the largest median t_2 using the distance feature after random sampling.
- BBA model 4: the largest median t_2 using the dihedral feature after random sampling.
- BBA model 6: the largest median t_2 after Bayesian optimisation of VAMP2_eq(2).

Figure 1: Logistic transforms in select models. The horizontal axis is the contact distance in Å; the vertical axis shows the logistic transform of that distance for BBA model 1 (blue), model 4 (orange) and model 5 (green).

Figure 2: Chignolin, model 1 timescales. Text inset shows the MSM hyperparameters; panel (a) shows the implied timescales for the first nine slow relaxation processes; panel (b) shows the gap between successive timescales, where the gap for process i is defined as t_i/t_{i+1}; panel (c) shows the mean first passage time between the unfolded and folded states as a function of τ.
Figure 3: Chignolin, model 1 free-energy surface. Each panel shows different quantities projected onto the first two TICA components (IC1, IC2). Panel (a) shows the free energy surface; panel (b) shows the PCCA+ clustering into folded, unfolded and intermediate states, with the crystal structure (PDB accession code 5AWL) marked with a star; panel (c) shows the 2nd right eigenvector (which corresponds to the slowest relaxation process), with an ensemble of structures corresponding to the extreme values of the eigenvector ('min', 'max', marked with a triangle and cross, respectively).

Figure 4: Chignolin, model 1 validation. The implied timescales plotted as a function of the lag time. Blue, red, and green lines correspond to the slowest, second slowest, and third slowest process, respectively. The solid lines correspond to the timescales from the maximum likelihood MSM. The dashed lines correspond to the mean of Bayesian MSMs. The coloured regions refer to the 0.95 confidence interval. The shaded region shows when the Markov lag time becomes equal to or longer than the implied timescale.

Figure 14: BBA, model 1 timescales. Text inset shows the MSM hyperparameters; panel (a) shows the implied timescales for the first nine slow relaxation processes (τ = 41 ns); panel (b) shows the gap between successive timescales, where the gap for process i is defined as t_i/t_{i+1} (τ = 41 ns); panel (c) shows the mean first passage time between the unfolded and folded states as a function of τ.

Figure 15: BBA, model 1 free energy surface. Each panel shows different quantities projected onto the first two TICA components (IC1, IC2). Panel (a) shows the free energy surface; panel (b) shows the PCCA+ clustering into folded, unfolded and intermediate states, with the crystal structure (PDB accession code 1FME) marked with a star; panel (c) shows the 2nd right eigenvector (which corresponds to the slowest relaxation process), with an ensemble of structures corresponding to the extreme values of the eigenvector ('min', 'max', marked with a triangle and cross, respectively).

Figure 32: Optimization of MSMs of Chignolin and BBA. The vertical axis is the optimization objective; the horizontal axis is the trial number. The thin line refers to the incumbent over the initialization data ('init.'), and the squares are the incumbent at each trial number. Panels (a) and (c) refer to Chignolin, (b) and (d) to BBA. Panels (a) and (b) refer to Bayesian optimization with either t_2 (red) or dual-objective optimization of both t_2 and the timescale gap, t_2/t_3. Panels (c) and (d) refer to optimization of VAMP2_eq(2) (red) and dual-objective optimization of both VAMP2_eq(2) and VAMP2_eq(2)/VAMP2_eq(3). The optimized values of t_2 are shown as labels.

Figure 33: Optimisation of the timescale and VAMP2_eq gap. The blue dots show the incumbent gap during optimisation (shown only for the multi-objective optimisation). Panels (a) and (b) refer to the timescale gap t_2/t_3, while panels (c) and (d) refer to the gap VAMP2_eq(2)/VAMP2_eq(3). Panels (a) and (c) show the Chignolin optimisation, while panels (b) and (d) refer to BBA.

Figure 34: Pair plot of VAMP2_eq(k = 2) rank with different lag times. The panel at position (0, 1) plots the rank according to VAMP2_eq(2) against the rank according to VAMP2_eq(3), and similarly for the other positions.
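To make the quantities appearing in these captions concrete — implied timescales t_i, the gap t_2/t_3, and the VAMP-2 score — here is a self-contained toy sketch. The symmetrized count matrix is a crude stand-in for proper reversible maximum-likelihood estimation, the identity VAMP-2(k) = sum of the k largest squared eigenvalues holds for reversible models only, and everything below (states, lag, trajectory) is synthetic rather than taken from the paper.

```python
import numpy as np

def msm_from_dtraj(dtraj, n_states, lag):
    """Count transitions at the given lag, symmetrize the counts (a crude
    way to enforce detailed balance), and row-normalize."""
    C = np.zeros((n_states, n_states))
    for i, j in zip(dtraj[:-lag], dtraj[lag:]):
        C[i, j] += 1
    C = 0.5 * (C + C.T)  # naive reversibility
    return C / C.sum(axis=1, keepdims=True)

def timescales_and_vamp2(T, lag, k=2):
    """Implied timescales t_i = -lag/ln(lambda_i) and a VAMP-2-style score:
    for a reversible MSM, VAMP-2(k) is the sum of the k largest squared
    eigenvalues, including the stationary process lambda_1 = 1."""
    lam = np.sort(np.linalg.eigvals(T).real)[::-1]
    its = -lag / np.log(np.clip(lam[1:], 1e-12, 1 - 1e-12))
    return its, float(np.sum(lam[:k] ** 2))

# Synthetic 3-state trajectory from a known transition matrix
rng = np.random.default_rng(0)
P = np.array([[0.97, 0.02, 0.01],
              [0.02, 0.97, 0.01],
              [0.05, 0.05, 0.90]])
d = [0]
for _ in range(20000):
    d.append(rng.choice(3, p=P[d[-1]]))
T = msm_from_dtraj(np.array(d), 3, lag=5)
its, v2 = timescales_and_vamp2(T, lag=5, k=2)
print("implied timescales:", its,
      " VAMP2(2):", v2, " gap t2/t3:", its[0] / its[1])
```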
THE COMPLEX STATISTICS PARADIGM AND THE LAW OF LARGE NUMBERS

The five basic axioms of Kolmogorov define probability in the real set R and do not take into consideration the imaginary part which takes place in the complex set C, a problem that we are facing in applied mathematics. Whatever the probability distribution of the random variable in R is, the corresponding probability in the whole set C is always equal to one, so the outcome of the random experiment in C can be predicted totally. This is the consequence of the fact that the probability in C is obtained by subtracting the chaotic factor from the degree of our knowledge of the system. In this study, I will evaluate the complex random vectors and their resultant, which represents the whole distribution and system in the complex space C. I will also define imaginary and complex expectations and variances, and I will prove the law of large numbers using the concept of the resultant complex vector. In fact, after extending Kolmogorov's system of axioms, the new axioms encompass the imaginary set of numbers, by adding three axioms to the original five axioms of Kolmogorov. Hence, the concept of a complex random vector becomes clear and evident, and it follows directly from the newly added axioms. This result will be elaborated throughout this study using discrete probability distributions. Moreover, any experiment executed in the complex set C is the sum of the real set R and the imaginary set M. Therefore, the whole probability distribution of random variables can be represented totally by the resultant complex random vector Z, which is used subsequently to prove the well-known law of large numbers. In addition to my previous first paper, this second one elaborates the new field of "Complex Statistics" that considers random variables in the complex set C. Thus, the law of large numbers proves that this complex extension is successful and fruitful.

I. INTRODUCTION

As noted in Abou Jaoude et al. (2010); Abou Jaoude (2005; 2007); Balibar (1980); Bell (1992); Benton (1996); Dalmedico Dahan et al. (1992); Ekeland (1991); Feller (1968); Gleick (1997); Hoffmann (1975) and Kuhn (1996), by defining the concept of probability using only five basic axioms, Kolmogorov was working in the set of real numbers and was not considering the imaginary part that takes place in the set of complex numbers. This is in fact a problem that occurs in many applications in mathematics and physics. By considering supplementary new imaginary dimensions to the event occurring in the "real" laboratory, Kolmogorov's system of axioms can be extended to encompass the imaginary set of numbers. This can be done by adding three complementary axioms to the original five axioms of Kolmogorov. Thus, any experiment can hence be executed in the complex set C, which is the sum of the real set R represented by a real probability and the imaginary set M represented by an imaginary probability. No matter what the probability distribution of the random variable in R is, the corresponding probability in the whole set C is always equal to one. Therefore, the outcome of the random experiment occurring now in C is completely predictable. Consequently, chance and luck in R are replaced by total determinism in C. Actually, the probability in C is evaluated by subtracting the chaotic factor from the degree of our knowledge of the system. This turns out to be essential and always leads to a probability equal to one in the complex set.
Formally, the three supplementary and complementary axioms are:

• Let P_m = i(1 − P_r) be the probability of an associated event in M (the imaginary part) to the event A in R (the real part). It follows that P_r + P_m/i = 1, where i² = −1 (the imaginary number)
• We construct the complex number z = P_r + P_m = P_r + i(1 − P_r), having a norm |z|² = P_r² + (P_m/i)²
• Let Pc denote the probability of an event in the universe C, where C = R + M. We say that Pc is the probability of an event A in R with its associated event in M, such that Pc² = (P_r + P_m/i)²

We can clearly see that the system of axioms defined by Kolmogorov could hence be expanded to take into consideration the set M of imaginary probabilities P_m. By defining the chaotic factor Chf as being equal to 2iP_rP_m and the degree of our knowledge |z|² as being equal to P_r² + (P_m/i)², it follows that:

Pc² = degree of our knowledge − chaotic factor = |z|² − 2iP_rP_m = 1, therefore Pc = 1.

This means that if we succeed in eliminating the chaotic factor in an experiment, the outcome probability will always be equal to one. One consequence of the results above is that 1/2 ≤ |z|² ≤ 1 and −1/2 ≤ Chf ≤ 0.

Moreover, for an experimenter tossing a coin in R, it is a game of luck: the experimenter does not know the output. He will assign to each outcome a probability P_r and will say that the output is not deterministic. But in the universe C = R + M, an observer will be able to predict the outcome of the game since he takes into consideration the contributions of M, so we write: Pc² = (P_r + P_m/i)² = |z|² − 2iP_rP_m. So in C, all the hidden variables are known, and this leads to a deterministic experiment executed in an eight-dimensional universe (four real and four imaginary: three for space and one for time in R, and three for space and one for time in M). Hence Pc is always equal to 1. In fact, the addition of new dimensions to our experiment resulted in the abolition of ignorance and non-determinism. Consequently, the study of this class of phenomena in C is of great usefulness, since we will be able to predict with certainty the outcome of the experiments conducted. In fact, the study in R leads to non-predictability and uncertainty. Therefore, instead of placing ourselves in R, we place ourselves in C and then study the phenomena, because in C the contributions of M are considered and therefore a deterministic study of the phenomena becomes possible. Conversely, by considering the contribution of the hidden forces, we place ourselves in C, and by ignoring them we restrict our study to non-deterministic phenomena in R.
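To make the arithmetic of the added axioms concrete, here is a minimal numerical sketch (my illustration, not part of the original paper; C++ is used because the paper's own simulations, described later, are written in C++). Because P_m = i(1 − P_r), both the degree of our knowledge |z|² = P_r² + (1 − P_r)² and the chaotic factor Chf = 2iP_rP_m = −2P_r(1 − P_r) are real numbers, so plain real arithmetic suffices to verify that Pc² = |z|² − Chf = 1 for every P_r:

#include <cstdio>

int main() {
    // Sweep the real probability P_r over [0, 1] and check that the
    // degree of our knowledge minus the chaotic factor is always 1.
    for (int k = 0; k <= 10; ++k) {
        double Pr  = k / 10.0;
        double q   = 1.0 - Pr;        // P_m/i = 1 - P_r
        double DOK = Pr * Pr + q * q; // degree of our knowledge |z|^2
        double Chf = -2.0 * Pr * q;   // chaotic factor 2i*P_r*P_m, a real number
        std::printf("Pr=%.1f  DOK=%.2f  Chf=%+.2f  Pc^2=%.2f\n",
                    Pr, DOK, Chf, DOK - Chf); // Pc^2 prints 1.00 throughout
    }
    return 0;
}

Note also that DOK is smallest (0.5) and |Chf| largest (0.5) at P_r = 0.5, matching the bounds 1/2 ≤ |z|² ≤ 1 and −1/2 ≤ Chf ≤ 0 stated above.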
I will describe in this study a powerful tool based on the concept of complex random vectors, where a complex random vector is a vector representing the real and the imaginary probabilities of an outcome, defined in the added axioms by the term z = P_r + P_m. I will then express the resultant complex random vector as the vector which is the sum of all the complex random vectors in the complex space. I will illustrate this methodology by considering a Bernoulli distribution, then a discrete distribution with N random variables as a general case. Afterward, I will prove the very well known law of large numbers using this new powerful concept. Boursin (1986); Dacunha-Castelle (1996); Dalmedico Dahan and Peiffer (1986); Gullberg (1997); Montgomery and Runger (2005); Poincaré (1968) and Walpole (2002): first, let us define the complex random vectors and their resultant by considering the following general Bernoulli distribution:

THE RESULTANT COMPLEX RANDOM VECTOR OF A BERNOULLI DISTRIBUTION

x_j:   x_1        x_2
P_rj:  P_r1 = p   P_r2 = q

where:
x_1 and x_2 = the outcomes of the first and second random variables, respectively
P_r1 and P_r2 = the real probabilities of x_1 and x_2, respectively
P_m1 and P_m2 = the imaginary probabilities of x_1 and x_2, respectively

We have ∑_{j=1}^{N} P_rj = p + q = 1, where N is the number of random variables, which is equal to 2 for this Bernoulli distribution.

The complex random vector corresponding to the random variable x_1 is: z_1 = P_r1 + P_m1 = p + i(1 − p) = p + iq.

The complex random vector corresponding to the random variable x_2 is: z_2 = P_r2 + P_m2 = q + i(1 − q) = q + ip.

The resultant complex random vector is defined as follows:

Z = ∑_{j=1}^{2} z_j = z_1 + z_2 = (p + iq) + (q + ip) = (p + q) + i(p + q) = 1 + i = 1 + i(2 − 1) = 1 + i(N − 1)

The probability in the complex space C which corresponds to the complex random vector z_1 is Pc_1 and is computed as follows: Pc_1² = |z_1|² − Chf_1 = (p² + q²) + 2pq = (p + q)² = 1. This is coherent with the new complementary axioms defined for the extended Kolmogorov system. Similarly, Pc_2 corresponding to z_2 is: Pc_2² = (q² + p²) + 2qp = 1.

The probability in the complex space C which corresponds to the resultant complex random vector Z = 1 + i is Pc and is computed as follows: Pc² = (|Z|² − Chf)/S² = (2 + 2)/4 = 1, where S² = N² is an intermediary quantity used in our computation of Pc. Pc is the probability corresponding to the resultant complex random vector Z in the universe C = R + M and is also equal to 1. In fact, Z represents both z_1 and z_2, that is, the whole distribution of random variables in the complex space C, and its probability Pc is computed in the same way as Pc_1 and Pc_2.

By analogy with the case of one random variable z_j, where Pc_j² = |z_j|² − Chf_j = 1, we have for the vector Z = 1 + i(N − 1): Pc² = |Z/N|² − Chf, where the degree of knowledge is equal to |Z/N|² = [1 + (N − 1)²]/N² and the chaotic factor is Chf = −2(N − 1)/N². Notice that if N = 1 in the above formula, then |Z/N|² = 1 and Chf = 0, so Pc² = 1, which is coherent with the calculations already done.

To illustrate the concept of the resultant complex random vector Z, I will use the following graph (Fig. 1).

Fig. 1. The resultant complex random vector Z = z_1 + z_2 in the complex space C
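As a numerical illustration (a sketch under the conventions above, not code from the paper), we can build z_1 and z_2 for an arbitrary Bernoulli distribution and check that the resultant is always Z = 1 + i, independently of p:

#include <complex>
#include <cstdio>

int main() {
    const double p = 0.3, q = 1.0 - p;       // any Bernoulli probabilities
    std::complex<double> z1(p, q), z2(q, p); // complex random vectors
    std::complex<double> Z = z1 + z2;        // resultant vector, always 1 + i

    // Degree of knowledge and chaotic factor of the normalized resultant
    // Z/N (the formulas stated above): DOK = [1+(N-1)^2]/N^2 and
    // Chf = -2(N-1)/N^2, so Pc^2 = DOK - Chf = 1.
    const int N = 2;
    double DOK = (1.0 + (N - 1.0) * (N - 1.0)) / (N * N); // = 0.5
    double Chf = -2.0 * (N - 1.0) / (N * N);              // = -0.5
    std::printf("Z = %g + %gi, DOK = %g, Chf = %g, Pc^2 = %g\n",
                Z.real(), Z.imag(), DOK, Chf, DOK - Chf);
    return 0;
}

Changing p does not change Z, DOK, Chf or Pc, which is exactly the invariance claimed for the Bernoulli case.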
GENERALIZATION: THE RESULTANT COMPLEX RANDOM VECTOR Z OF A DISCRETE DISTRIBUTION

Chan Man Fong et al. (1997); Greene (2000; 2004) and Warusfel and Ducrocq (2004): let us generalize what has been found above for a Bernoulli distribution by considering the general discrete probability distribution of N random variables with the resultant complex random vector Z.

The complex random vector corresponding to the random variable x_1 is z_1 = P_r1 + P_m1 = p_1 + i(1 − p_1) = p_1 + iq_1. The complex random vector corresponding to the random variable x_2 is z_2 = P_r2 + P_m2 = p_2 + i(1 − p_2) = p_2 + iq_2, and so on. The complex random vector corresponding to the random variable x_N is z_N = P_rN + P_mN = p_N + i(1 − p_N) = p_N + iq_N.

The resultant complex random vector is defined as follows:

Z = ∑_{j=1}^{N} z_j = ∑_{j=1}^{N} p_j + i ∑_{j=1}^{N} (1 − p_j) = 1 + i(N − 1)

Pc_1 corresponding to z_1 is Pc_1² = (p_1² + q_1²) + 2p_1q_1 = 1, and so on, and Pc_N corresponding to z_N is Pc_N² = (p_N² + q_N²) + 2p_Nq_N = 1. Pc is the probability corresponding to the resultant complex random vector Z = 1 + i(N − 1) that represents the whole distribution of random variables in the complex space C, and it is equal to 1.

EXAMPLE OF A DISCRETE RANDOM DISTRIBUTION

Guillen (1995); Mandelbrot (1997) and Srinivasan and Mehata (1978): as an example, let us consider a discrete random distribution with four random variables, that is, N = 4. We have ∑_{j=1}^{4} P_rj = 1, where N is the number of random variables. The complex random vectors are z_j = P_rj + P_mj = P_rj + i(1 − P_rj); for instance, z_2 = P_r2 + P_m2 = 1/4 + 3i/4. The resultant complex random vector is Z = z_1 + z_2 + z_3 + z_4 = 1 + 3i = 1 + i(N − 1), and Pc = 1 is the probability corresponding to the resultant complex random vector Z that represents the whole distribution of the four random variables in the complex space C.

Second Case: A Distribution with N Random Variables

As a general case, let us consider this probability distribution with N equiprobable random variables: P_rj = 1/N for every j. We have here z_j = 1/N + i(1 − 1/N), and we can notice that Z = ∑ z_j = 1 + i(N − 1). Therefore, the degree of our knowledge corresponding to the resultant complex vector is |Z/N|² = [1 + (N − 1)²]/N², the chaotic factor is Chf = −2(N − 1)/N², and thus we can verify that we always have Pc² = |Z/N|² − Chf = [1 + (N − 1)² + 2(N − 1)]/N² = N²/N² = 1.

What is important here is what we notice in the following examples (reproduced numerically in the sketch below):

For N = 4: |Z/N|² = [1 + (4 − 1)²]/4² = 0.625 > 0.5 and |Chf| = 2(4 − 1)/4² = 0.375 < 0.5
For N = 5: |Z/N|² = [1 + (5 − 1)²]/5² = 0.68 > 0.625 and |Chf| = 2(5 − 1)/5² = 0.32 < 0.375
For N = 10: |Z/N|² = [1 + (10 − 1)²]/10² = 0.82 > 0.68 and |Chf| = 2(10 − 1)/10² = 0.18 < 0.32
For N = 100: |Z/N|² = [1 + (100 − 1)²]/100² = 0.9802 > 0.82 and |Chf| = 2(100 − 1)/100² = 0.0198 < 0.18
For N = 1000: |Z/N|² = [1 + (1000 − 1)²]/1000² = 0.998002 > 0.9802 and |Chf| = 2(1000 − 1)/1000² = 0.001998 < 0.0198

We can deduce mathematically that |Z/N|² → 1 and Chf → 0 as N → ∞. From the above, we can also deduce this conclusion: as N increases, the degree of our knowledge in R corresponding to the resultant complex vector becomes perfect, that is, it tends to 1, while the chaotic factor that forbids us from predicting exactly the result of the random experiment in R approaches 0. Mathematically, we say: if N tends to infinity, then the degree of our knowledge in R tends to 1 and the chaotic factor tends to 0.

Moreover, if N = 1, this means that we have a random experiment with only one outcome; hence either P_r = 1 or P_r = 0, that is, we have either a sure event or an impossible event in R. In this case the degree of our knowledge is surely 1 and the chaotic factor is 0, since the experiment is either certain or impossible, which is absolutely logical.
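The convergence of these quantities is easy to reproduce. The following sketch (my illustration, not the paper's Table 7) tabulates the degree of knowledge and the chaotic factor of the normalized resultant Z/N for the equiprobable case:

#include <cstdio>

int main() {
    // N equiprobable random variables: P_rj = 1/N, so Z = 1 + i(N-1).
    const int Ns[] = {2, 4, 5, 10, 100, 1000};
    for (int N : Ns) {
        double DOK = (1.0 + (double)(N - 1) * (N - 1)) / ((double)N * N);
        double Chf = -2.0 * (N - 1) / ((double)N * N);
        std::printf("N=%4d  |Z/N|^2=%.6f  Chf=%+.6f  Pc^2=%.6f\n",
                    N, DOK, Chf, DOK - Chf); // Pc^2 is exactly 1 for every N
    }
    return 0;
}

Running it reproduces the values above (0.625, 0.68, 0.82, 0.9802, 0.998002, together with the corresponding chaotic factors) and shows the degree of knowledge approaching 1 as N grows.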
The Law of Large Numbers and the Resultant Complex Random Vector Z

The law of large numbers says that, as N increases, the probability that the value of the sample mean is close to the population mean approaches 1. We can now deduce the following conclusion related to the law of large numbers. We can see, as we have proved, that as N increases, the degree of knowledge of the resultant complex vector tends to its perfect value. Let the vectors z_j correspond to particles or molecules moving randomly in a gas or a liquid. If we study a gas or a liquid with billions of such particles, N is big enough (e.g., the Avogadro number) that the corresponding temperature, pressure and energy tend to the means of these quantities for the whole gas. This is because the chaotic factor of the whole gas, that is, of the resultant complex random vector representing all the random particles or vectors, tends to 0; thus the behavior of the whole system in R is predictable with great precision, since the degree of our knowledge of the whole gas tends to 1. Figures 2 and 3 below illustrate this result.

Hence, we have joined here two different key concepts: the law of large numbers and the resultant complex random vector. The first one comes from ordinary statistics and probability theory and the second from the new theory of complex probability and statistics. This looks very interesting and fruitful and shows the validity and the benefits of extending Kolmogorov's axioms to the complex set.

EXPECTATIONS CORRESPONDING TO THE COMPLEX RANDOM VECTORS

Montgomery and Runger (2005); Müller (2005); Orluc and Poirier (2005) and Walpole (2002): let us now compute the real, imaginary and complex expectations of the random variables. For this purpose, let us consider the following Bernoulli distribution with x_1 = 1, x_2 = 2, P_r1 = p = 1/3 and P_r2 = q = 2/3. We can see that:

• The complex random vector corresponding to x_1 is z_1 = p + iq = 1/3 + 2i/3
• The complex random vector corresponding to x_2 is z_2 = q + ip = 2/3 + i/3
• The resultant complex random vector is: Z = z_1 + z_2 = 1 + i

The expectation of the random variables with the real probability part is defined by:

E_r(x) = ∑_{j=1}^{2} x_j P_rj = x_1 P_r1 + x_2 P_r2 = 1 × (1/3) + 2 × (2/3) = 1/3 + 4/3 = 5/3

The expectation of the random variables with the imaginary probability part is defined by:

E_m(x) = x_1 P_m1 + x_2 P_m2 = x_1 iq + x_2 ip = i(2/3) + 2i(1/3) = 4i/3

The expectation of the random variables corresponding to the complex random vectors is defined by:

Ec(x) = E_r(x) + E_m(x) = (x_1 p + x_2 q) + (x_1 iq + x_2 ip) = x_1 (p + iq) + x_2 (q + ip) = x_1 z_1 + x_2 z_2 = 5/3 + 4i/3

Figure 4 illustrates the graphical relation between the three expectations: the real one, the imaginary one and the complex one.

We can notice that |z_1| = |z_2|. This is not a special case for this distribution but is always true for any Bernoulli distribution having any probability values: in general, |z_1|² = p² + q² and |z_2|² = q² + p², hence |z_1| = |z_2|. Due to this property, further identities can be shown for any Bernoulli distribution relating Z̄ = 1 − i, the conjugate of the resultant complex random vector Z = 1 + i, to the conjugate of the complex expectation vector Ec(x) = E_r(x) + E_m(x), using |Z|² = Z × Z̄, which derives from the well known theory of complex numbers. All these relations prove to be valid for any Bernoulli distribution.
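The three expectations of the worked example can be checked with a few lines (again a sketch of my own, not code from the paper):

#include <complex>
#include <cstdio>

int main() {
    // Worked Bernoulli example: x1 = 1, x2 = 2, P_r1 = 1/3, P_r2 = 2/3.
    const double x1 = 1.0, x2 = 2.0;
    const double p = 1.0 / 3.0, q = 2.0 / 3.0;     // real probabilities
    std::complex<double> z1(p, q), z2(q, p);       // complex random vectors

    double Er = x1 * p + x2 * q;                   // real expectation = 5/3
    std::complex<double> Em(0.0, x1 * q + x2 * p); // imaginary expectation = 4i/3
    std::complex<double> Ec = x1 * z1 + x2 * z2;   // complex expectation

    // Ec should equal Er + Em = 5/3 + 4i/3.
    std::printf("Er=%g  Em=%gi  Ec=%g+%gi\n",
                Er, Em.imag(), Ec.real(), Ec.imag());
    return 0;
}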
Numerically, the degree of our knowledge of the resultant complex random vector Z = 1 + i of this Bernoulli distribution is |Z/N|² = (1 + 1)/4 = 1/2, and it can be verified, in all cases and for any distribution, that Pc² = (degree of our knowledge) − (chaotic factor) = 1. Thus, we conclude that, for any Bernoulli distribution, both the degree of our knowledge of the resultant complex random vector, expressed in terms of Z and the complex expectation Ec of the random variable in the universe C = R + M, and the chaotic factor of the resultant complex random vector Z can be computed; consequently, the resultant probability in C is Pc = 1.

Case 1: A General Distribution

Let us now determine the other characteristics of a general discrete probability distribution, which are the real, imaginary and complex variances of the random variables. For this purpose, let us consider a general probability distribution of N random variables x_j with real probabilities p_j and imaginary probabilities P_mj = iq_j = i(1 − p_j).

The expectation corresponding to the imaginary part of the random variables x_j is defined by:

E_m(x) = ∑_{j=1}^{N} x_j P_mj = x_1 P_m1 + x_2 P_m2 + … + x_N P_mN = x_1 iq_1 + x_2 iq_2 + … + x_N iq_N

The expectation corresponding to the complex probability of the random variables x_j can be computed by:

Ec(x) = E_r(x) + E_m(x) = (x_1 p_1 + … + x_N p_N) + (x_1 iq_1 + … + x_N iq_N) = x_1 (p_1 + iq_1) + … + x_N (p_N + iq_N) = x_1 z_1 + x_2 z_2 + … + x_N z_N

The variance of the real part of the random variables x_j is defined by V_r(x) = ∑_{j=1}^{N} (x_j − E_r(x))² p_j, which is the ordinary variance definition that we know. The variance of the imaginary part of the random variables x_j is defined analogously with the imaginary probabilities P_mj, and similarly the variance of the complex probability of the random variables x_j is defined with the complex random vectors z_j. Therefore, we can directly see from Equations 1 and 2 above that Vc(x) = V_r(x) + V_m(x), as was proven in the general case of a probability distribution with N random variables.

Cheney and Kincaid (2004); Deitel and Deitel (2003); Gentle (2003); Gerald and Wheatley (1999); Liu (2001) and Christian and Casella (2005): numerical simulations verify what has been found earlier. We use the Monte Carlo simulation method with the help of the programming language C++ and its predefined pseudorandom function rand(), which generates random numbers with a uniform distribution. Tables 1-3 are simulations of a Bernoulli distribution where the complex random vectors are chosen randomly by C++. Tables 4-6 are simulations of a uniform distribution with three random variables whose complex random vectors are also chosen randomly by C++. Table 7 is a simulation that confirms the direct relation between the resultant complex vector Z and the law of large numbers.
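The described simulation is straightforward to re-create. The sketch below is my reconstruction of the stated procedure (random real probabilities normalized to sum to one, as the rand()-based tables suggest), not the paper's actual program:

#include <complex>
#include <cstdio>
#include <cstdlib>

int main() {
    // Draw N random real probabilities summing to 1, build the complex
    // random vectors z_j = p_j + i(1 - p_j), and verify that the resultant
    // is Z = 1 + i(N-1) with Pc^2 = |Z/N|^2 - Chf = 1.
    const int N = 3;                      // e.g., three random variables
    double p[N], sum = 0.0;
    for (int j = 0; j < N; ++j) { p[j] = rand() / (double)RAND_MAX; sum += p[j]; }
    for (int j = 0; j < N; ++j) p[j] /= sum;  // normalize: sum of p_j = 1

    std::complex<double> Z(0.0, 0.0);
    for (int j = 0; j < N; ++j) Z += std::complex<double>(p[j], 1.0 - p[j]);

    double DOK = (Z.real() * Z.real() + Z.imag() * Z.imag()) / (N * N);
    double Chf = -2.0 * Z.real() * Z.imag() / (N * N);
    std::printf("Z = %g + %gi (expected 1 + %di), Pc^2 = %g\n",
                Z.real(), Z.imag(), N - 1, DOK - Chf);
    return 0;
}

Whatever probabilities are drawn, Z = 1 + 2i here (N = 3, matching Table 4) and Pc² = 1, which is the invariance the tables verify.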
CONCLUSION

In this study I have elaborated the new field of "Complex Statistics", an original paradigm that was initiated in my first paper on the expansion of Kolmogorov's system of axioms. I have defined in this study a new powerful tool, the concept of the complex random vector, which is a vector representing the real and the imaginary probabilities of an outcome, identified in the added axioms as the term z = P_r + P_m. I have then defined and expressed the resultant complex random vector as the vector which is the sum of all the complex random vectors, representing the whole distribution and system in the complex space C. I have illustrated this methodology by considering a Bernoulli distribution, then a discrete distribution with N random variables as a general case. Afterward, I have determined the characteristics (expectation and variance) of discrete distributions corresponding to the imaginary probabilities and to the complex random vectors. Thus, I have shown that there is a correspondence among the real, imaginary and complex expectations, as well as among the real, imaginary and complex variances, for any Bernoulli distribution as well as for any probability distribution. Moreover, I have proven that there is a direct relation between the concept of the resultant complex vector and the very well known law of large numbers. Using this new concept and tool, I have succeeded in demonstrating the law of large numbers in a new way. Additional development of this new complex paradigm will be done in subsequent work. Hence, the first and second papers on complex probabilities, written after extending Kolmogorov's axioms, establish so far a new field in mathematics which can verily be called "Complex Statistics".

Fig. 2. The degree of our knowledge, the chaotic factor and the Pc of Z (1 ≤ N ≤ 40); the value 0.5 was obtained in the study above of a probability distribution with two random variables.

NOMENCLATURE

C = the complex set of numbers = the real set R + the imaginary set M
P_r = probability in the real set R
P_m = probability in the imaginary set M corresponding to the real probability in the set R
Pc = probability of an event A in R with its associated event in M = probability in the complex set C = always 1
z = complex number = sum of P_r and P_m = complex random vector
|z|² = the degree of our knowledge of the random experiment; it is the square of the norm of z
Chf = the chaotic factor of z
i = the imaginary number, where i² = −1
E_r = expectation in the real set R
E_m = expectation in the imaginary set M
Ec = expectation in the complex set C
V_r = variance in the real set R
V_m = variance in the imaginary set M
Vc = variance in the complex set C

Table 1. Computation of Pc for different values of z_1 and z_2, which are the complex random vectors of a Bernoulli distribution and which are chosen at random. In this case, the resultant complex random vector is Z = z_1 + z_2 and is always equal to 1 + i. The corresponding probability of Z in C is always 1, just as expected.

Table 2. Computation of the real, imaginary and complex expectations for different values of z_1 and z_2, which are chosen at random, and the verification that we always have Ec(x) = E_r(x) + E_m(x).

Table 3. Computation of the real, imaginary and complex variances for different values of z_1 and z_2, which are chosen at random, and the verification that we always have Vc(x) = V_r(x) + V_m(x).

Table 4. Computation of Pc for different values of z_1, z_2, z_3, which are the complex random vectors of the distribution and which are chosen at random. In this case, the resultant complex random vector is Z = z_1 + z_2 + z_3 and is always equal to 1 + 2i.

Table 5. Computation of the real, imaginary and complex expectations for different values of z_1, z_2, z_3, which are chosen at random, and the verification that we always have Ec(x) = E_r(x) + E_m(x).

Table 6. Computation of the real, imaginary and complex variances for different values of z_1, z_2, z_3, which are chosen at random, and the verification that we always have Vc(x) = V_r(x) + V_m(x).

Table 7. The resultant complex random vector Z = z_1 + z_2 + … + z_j + … + z_N, with 1 ≤ j ≤ N, and the verification of the law of large numbers.
5,951.4
2013-10-09T00:00:00.000
[ "Mathematics" ]
Carbon Dioxide Activation at Metal Centers: Evolution of Charge Transfer from Mg·+ to CO2 in [MgCO2(H2O)n]·+, n=0–8 Abstract We investigate activation of carbon dioxide by singly charged hydrated magnesium cations Mg·+(H2O)n through infrared multiple photon dissociation (IRMPD) spectroscopy combined with quantum chemical calculations. The spectra of [MgCO2(H2O)n]·+ in the 1250–4000 cm−1 region show a sharp transition from n=2 to n=3 for the position of the CO2 antisymmetric stretching mode. This is evidence for the activation of CO2 via charge transfer from Mg·+ to CO2 for n≥3, while smaller clusters feature linear CO2 coordinated end-on to the metal center. Starting with n=5, we see a further conformational change, with CO2·− coordination to Mg2+ gradually shifting from bidentate to monodentate, consistent with preferential hexa-coordination of Mg2+. Our results reveal in detail how hydration promotes CO2 activation by charge transfer at metal centers.

Energetics of dissociation channels. Dissociation energies of [MgCO2(H2O)n]·+ for the loss of H2O and CO2, respectively, calculated at the M06L/aug-cc-pVDZ level of theory. All energies are given in kJ/mol relative to the most stable isomer.

Experimental Setup Either a Continuum Surelite II (10 Hz) or a Litron Nano S60-30 (30 Hz) is used as the vaporization laser for an isotopically enriched 24Mg (99.9%) target. The pick-up gas, consisting of helium seeded with CO2, is supersonically expanded, passed through a skimmer and then through a set of electrostatic lenses which guide the ions through differential pumping stages 1 into the ICR infinity cell. 2 In the ICR cell, the ions are trapped in an electromagnetic field under ultra-high vacuum conditions (~10−10 mbar) in the center of a 4.7 T superconducting magnet, as explained in detail by Marshall et al. 3 The ions are then resonantly excited and their cyclotron frequency is measured. 3 For each data point, 20 to 50 spectra are accumulated and averaged to obtain a higher signal-to-noise ratio. For spectroscopy, ions are mass selected and irradiated by infrared (IR) radiation from a 1000 Hz diode-pumped EKSPLA NT273-XIR or EKSPLA NT277 laser system. Each data point in the absorption spectra corresponds to a full mass spectrum, measured after irradiation with a preset irradiation time (0.6–20 s). The IR/OPO laser system EKSPLA NT273-XIR operates between 4476 and 12000 nm, whereas the EKSPLA NT277 operates from 2500 to 4475 nm. The measurements for n ≥ 4 were recorded at 1250–2234 cm−1. Because the antisymmetric stretching mode of linear CO2 lies above 2234 cm−1, measurements were recorded at 1250–4000 cm−1 for n = 0–3. Immediately after each mass spectrum, the laser power is measured before the laser is tuned to the next wavelength. The wavelength is calibrated using a HighFinesse Laser Spectrum Analyzer IR-III. For the correction of the photon loss by the CaF2 window, the transmission curve provided by ThorLabs was used. 4 Photodissociation of the complex can occur via vibrational resonant excitation during laser irradiation and/or via BIRD. To account properly for the influence of BIRD, every tenth to thirtieth measurement is performed without irradiation to gain information on the relative abundance of the BIRD fragments and the precursor ion. These fragment ion signals due to BIRD, IBIRD, are subtracted from the mass spectra recorded with laser irradiation, I0: Ikorr = I0 − IBIRD. For larger clusters, BIRD has a stronger influence.
At room temperature, after a trapping time of 1 s, roughly 75% of the MgCO2(H2O)8+ clusters lose one water molecule, forming MgCO2(H2O)7+. To yield the corrected IR spectrum of n = 7 at room temperature, the cluster with n = 8 water molecules was isolated to maximize n = 7 as the precursor. The remaining depletion of n = 8 (due to radiation) is then added to the relative abundance of the n = 7 precursor ion.
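As a minimal illustration of this BIRD correction (a sketch with made-up intensity values, not data from the paper), the BIRD-only reference signal is subtracted channel by channel from the irradiated mass spectrum:

#include <cstdio>

int main() {
    // BIRD background correction I_korr = I_0 - I_BIRD: fragment intensities
    // measured without laser irradiation (BIRD only) are subtracted from the
    // intensities measured with irradiation. Values are hypothetical.
    const int nChannels = 4;                     // hypothetical m/z channels
    double I0[]    = {120.0, 35.0, 80.0, 10.0};  // with irradiation
    double IBIRD[] = { 20.0,  5.0, 15.0,  2.0};  // BIRD-only reference
    for (int k = 0; k < nChannels; ++k) {
        double Ikorr = I0[k] - IBIRD[k];         // corrected intensity
        std::printf("channel %d: Ikorr = %.1f\n", k, Ikorr);
    }
    return 0;
}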
889
2020-02-26T00:00:00.000
[ "Chemistry", "Physics" ]
Dissimilar Materials Welding with a Standoff-Free Vaporizing Foil Actuator between TRIP 1180 Steel Sheets and AA5052 Alloy This paper demonstrates an advanced type of the vaporizing foil actuator welding (VFAW) process between a GPa-grade steel (TRIP1180) and an aluminum alloy (AA5052-H32) without applying a standoff. To secure a flying distance during the VFAW process, a preformed target sheet shaped like a circular indentation has been utilized. It is necessary to optimize the process parameters together with the geometrical design of the preform, since the welding strength can decrease beyond the optimum input energy in the standoff-free VFAW process. The welded surface was evaluated by SEM-EDS, XRD, EBSD, and TEM to analyze the welding mechanism and the composition at the welding interface. A diffusion zone including the AlFe3 phase was observed at the welded interface; it has a high grain density due to the high-speed impact, which increases the welding strength and leads to perfect welding between the dissimilar materials. Introduction There is an increasing demand for lightweight design in the body structures of transportation vehicles, especially for electric and hybrid cars, to achieve high energy efficiency [1,2] and decrease gas emissions [3,4]. Joining and welding different grades of materials, such as GPa-grade steel and Al alloy, is one of the best strategies for lightening body weight, as it makes it possible to have a strength gradient within a single panel. In order to weld dissimilar materials, different types of welding processes, such as fusion and solid-state welding [5], have been developed, depending on whether they involve melting and subsequent solidification or not. Even though fusion welding such as arc welding [6,7], gas welding [8,9], and power beam welding [10,11] has been widely applied in various industries, there are several issues related to defect formation around the welded interface, since these processes tend to induce local melting with phase transition at the interface during the welding process, where large variations in mechanical and thermal properties such as strength, elongation, and thermal expansion ratio can cause residual stress along the interface. Solid-state welding applies a sufficient external force or pressure to induce plastic deformation at the interface, and it can be divided into hot pressure welding, including resistance spot welding [12,13], and cold pressure welding, such as friction stir welding [14-17], explosion welding [18], and self-piercing riveting (SPR) [19,20], according to the heat generation at the interface that facilitates bonding. Although solid-state welding has been considered a potential method due to its reduced susceptibility to thermal fractures, the welding strength is influenced by the quality of the welding interface at the preparation stage, in terms of the amount of oxide layer or the surface cleaning by degreasing or brushing [19]. Additionally, excessive physical deformation of the welded sheets can cause fracture or delamination at the interface [20,21]. Vaporizing foil actuator welding (VFAW) is one of the explosion welding processes; it does not involve the conventional full melting of the materials being joined. It applies substantially high pressure to the bottom of a flyer sheet, as depicted in Figure 1a, to make it collide with a target sheet, which tends to induce metallic bonding [22-24] at the interface for permanent welding.
When a high current is instantly applied to the aluminum foil, it is vaporized directly from solid to gas, which generates tremendously high pressure, as shown in Figure 1b. The generated explosive pressure sharply pushes the flyer sheet with substantially high velocity, and it distributes along the welding interface, which tends to induce an oblique angle between the target and flyer sheets, as shown in Figure 1c. This explosive pressure ejects oxides and other contaminants from the interface, leaving behind fresh metal surfaces, which leads to the formation of a metallic bond between the two sheets by joining clean metallic surfaces [25,26]. Chen et al. [26] and Liu et al. [27,28] have demonstrated the VFAW process for welding dissimilar materials, such as various grades of steel with lightweight alloys including aluminum and magnesium alloys. Vivek et al. [29,30] successfully carried out welding between a titanium-copper alloy pair and a BMG (Bulk Metallic Glass)-Cu110 alloy pair, in which they observed the morphology of the welded interface with respect to the impact velocity of the flyer sheet. Liu et al. [22] have investigated AlxFey phases at the welded interface, supported by EDS-SEM, EBSD, and TEM analyses. Research has also been conducted to optimize process parameters such as the impact angle, standoff distance, and input energy during the VFAW process. Vivek et al. [29] have examined the effect of impact angle and velocity on weldability by controlling the amount of input energy and the standoff distance. However, the usage of a standoff causes many problems from the practical aspect of the industrial process, even though it secures an allowable distance for the flyer sheet to guarantee the desired impact speed. It not only increases the weight of the welded part due to the redundant additional standoff, which has nothing to do with the two target materials, but also requires an undesirable secondary process to eliminate this part if necessary.
In addition, since a slight misalignment between the target sheet, flyer sheet, and standoff at the initial set-up, when stacking them up with each other, is able to induce strength variation in the welding interface, a high level of tolerance control is required in the process. Under these circumstances, it is necessary to eliminate the standoff during the VFAW process to enhance manufacturing efficiency. In this paper, we have proposed a standoff-free VFAW process in which the conventional standoff is replaced by a pre-deformed target sheet produced by a simple stamping process. To optimize the preform shape, FEM analysis has been carried out, which was confirmed by experiments for perfect welding between a TRIP1180 steel sheet and an AA5052 sheet. In order to validate sufficient welding strength in terms of mechanical properties and metallurgical aspects, a lap shear test and a microstructure investigation with SEM-EDS and TEM along the interface have been conducted.

Experimental Procedure

Figure 2 shows the initial set-up for the conventional VFAW process, which consists of a flyer sheet, standoff, and target sheet, sequentially stacked on the Al foil as depicted in Figure 2b. Figure 3 demonstrates the dimensional specification of the Al foil, in which the actuating area is designed to be narrow in order to concentrate a high current and activate local evaporation [23]. An actuating length of 2 mm has been applied in the experiment to increase the impact velocity, since it directly influences the amount of vaporizing pressure.
To guarantee uniform contact pressure between the specimens and experimental safety against the explosive pressure during the VFAW test, the top and bottom surfaces of the stacked specimens are tightened by a back-up die set. In addition, Kapton film has been utilized to insulate the Al foil from the die set and the other welding specimens, which prevents current loss [23]. After the initial set-up, a target current from the capacitor bank flows directly to the Al foil through the copper plate, as shown in Figure 4. The specifications of the capacitor bank for the VFAW test are a maximum voltage of 8 kV, a maximum energy of 12.8 kJ, and a capacitance of 200 µF.
To demonstrate practical welding between GPa-grade steel and Al alloy, TRIP1180 and AA5052-H32 sheets with length, width, and thickness of 100 mm × 50 mm × 1.2 mm and 200 mm × 50 mm × 1.0 mm, respectively, have been applied to the VFAW process as the target and flyer sheets, respectively. The tensile tests for the initial mechanical properties were conducted with the ASTM-E8 standard [31] specimen, as represented in Figure 5; they show ultimate tensile strengths of 1218.65 MPa and 221.51 MPa for TRIP1180 and AA5052-H32, respectively. To validate the welding strength, a lap shear test has been carried out in a universal testing machine (UTM) with a cross-head speed of 0.1 mm/s. The welded TRIP1180 and AA5052 specimens were gripped in the UTM and pulled in the test direction. Figure 6 demonstrates the schematic design of the lap shear test, in which the welded interface coincides with the centerline of the test set-up to induce pure shear deformation along the interface [25,29]. For the microscopic investigation at the welded interface, cross-sections of the welded surface were prepared, first by mechanical grinding using SiC grit paper (400 grit to 2400 grit (8 µm grain size)), then polished with suspensions of 1 and 0.25 µm diamond polishing solution, and finished with further polishing using 0.05 µm colloidal silica suspension to obtain the required surface finish. The sample surface after final polishing was cleaned under running water, followed by ethanol, and dried. After surface preparation, the microstructure and elemental composition of the joints were examined using scanning electron microscopy (SEM, JSM-5800 JEOL, Tokyo, Japan) coupled with energy dispersive spectroscopy (EDS) and electron backscatter diffraction (EBSD) operated at an accelerating voltage of 20 kV. The TSL OIM Analysis v8 software was used to process EBSD data and generate inverse pole figure (IPF), image quality (IQ), and phase maps. Microbeam X-ray diffraction (XRD) analysis was carried out using a Rigaku D/Max Rapid-S with CuKα radiation to identify the phases in the narrow joints. Transmission electron microscope (TEM) analyses were carried out on the cross-sections of the joint interface using a Cs-corrected scanning transmission electron microscope (STEM, Hitachi HF5000 instrument, Amsterdam, Netherlands) operated at 200 kV. The specimen used in the TEM analysis was prepared by lifting out the joint interface using a focused ion beam (FIB, FEI Helios NanoLab 600, Hillsboro, OR, USA).

Preform Design of the Target Sheet

It has been proposed to perform the VFAW test with the application of a preformed target sheet instead of utilizing a conventional standoff. To secure a sufficient flying distance, the TRIP1180 target sheet with a thickness of 1.2 mm has been stamped to have an indentation along the circular boundary, as depicted in Figure 7, which is applied to the initial stacking for the standoff-free VFAW test. The preform die set is designed to impress a circular indentation [32] on the target sheet with a diameter and height of 30 mm and 1.6 mm, since these values have been validated in conventional VFAW experiments [23-25,29,30] as guaranteeing the optimum welding area and flying distance.
The design variables in the specification of the die set are represented in Figure 8. For the various combinations of design variables shown in Table 1, the final dimensions for R1 and R2 have both been selected as 1.5 mm so as not to induce material failure during the stamping process. Figure 9 demonstrates the FE analysis results with ABAQUS/Standard [33], which show a uniform strain distribution without strain localization around the corner when R1 and R2 are 1.5 mm. Figure 10 demonstrates the preform die set based on the optimal design variables, which is installed in a 100-tonf servo press for carrying out the preforming process. It is able to produce the desired preform shape without inducing material failure, as depicted in Figure 10b.

Lap Shear Test for a Welded Specimen

With the preformed target sheet applied, the standoff-free VFAW test has been performed with the input energy increasing from 4 kJ to 12 kJ to examine the effect of the input energy on the welding strength between TRIP1180 and AA5052. Figure 11 shows the experimental results of the lap shear test, which has been conducted until final fracture occurs. Three experiments were conducted under the same input energy conditions for repeatability evaluation.
Since delamination occurs due to imperfect welding between the target and flyer sheets, the joint is not able to sustain sufficient reaction force compared with the flyer sheet made of AA5052, as shown in Figure 12, when an input energy of 4 or 6 kJ is applied. However, it is noted that the welded specimen with an input energy of 8 kJ shows an early fracture during the lap shear test in the vicinity of the welding zone, as depicted in Figure 12, since this energy tends to induce an undesirable thickness reduction around the corner of the circular welded region, even though delamination does not occur. When the input energy increases to 10 kJ, there is no early fracture or delamination, due to perfect welding at the interface as depicted in Figure 11d, which results in final fracture in the flyer sheet itself instead of the circular welded region, as shown in Figure 12. To confirm the thickness distribution with respect to the input energy, the welded specimen was sectioned in half by waterjet cutting, as shown in Figure 13. Since thickness reduction occurred in both sheets with similar ratios, 5-11.6% in the target sheet and 8-12% in the flyer sheet, during the VFAW, the early fracture is attributed to localization in the flyer sheet, which has relatively low strength, at the x-coordinate of 16.5 mm. It is interesting to note that the thickness of the flyer sheet increased substantially only in the case of 10 kJ, as depicted in Figure 14b, while it decreased compared with the initial sheet thickness when input energies of 6, 8, and 12 kJ were applied.
This is why the welded specimen with an applied input energy of 10 kJ tends to exhibit tensile strength comparable to that of AA5052 without showing delamination or early fracture at the interface. It is noteworthy that optimum process parameters must be considered together with the amount of input energy, since the weldability between dissimilar materials in the VFAW test is not influenced proportionally by the amount of input energy alone [24,26].

Microstructure Investigation in the Welding Interface

The welding interface between the TRIP1180 and AA5052 sheets has been investigated with SEM-EDS, micro-XRD, EBSD, and TEM. Four specimens were extracted from the perfectly welded specimen for the microstructure investigation by wire cutting, as shown in Figure 15, and polished using up to 4000 grit sandpaper. There is a compositional diffusion zone at the TRIP1180/AA5052 interface, which was generated by excessive plastic deformation from the high-speed impact, as depicted in Figure 16. Since there is no mechanical interlocking force due to the flat interface morphology, it appears that metallic bonding is formed at the welding interface. A detailed investigation of the metallic bonding in the diffusion zone was implemented using micro-XRD. Figure 17 shows the results of the micro-XRD measurements of the aluminum, steel, and interface. Even though the intensity peaks at the interface differ from those of the steel and aluminum phases, it is difficult to confirm the formation of an Al-Fe phase because the intensity peaks of the Al-Fe phase are very close to those of the Al and Fe phases. EBSD measurements were conducted further to identify the potential phase formation at the interface. Figure 18 shows the results of the EBSD measurements at the interface where the potential Al-Fe phase was observed. In the image quality (IQ) mapping, a very low confidence index was noted at the interface due to the very large plastic deformation, which results in lattice distortion. In the inverse pole figure (IPF) image, a random crystallographic orientation of the constituent crystals was noticed. The phase map demonstrated that the interface zone mostly consists of the Al-Fe phase, while the Al side was populated with FCC crystals and the Fe side with BCC crystals. In the kernel average misorientation (KAM) map, the misorientation levels at the interface were high compared with those of the parent material regions. From these results, a new Al-Fe phase was formed at the interface, which has a very dense grain structure because of the high-speed impact. In order to characterize the phase more precisely, TEM work has been performed around the interface region. Figure 19b,d shows typical FFT patterns of Al and Fe.
In contrast, an SAED pattern obtained at the interface revealed a ring pattern indicating amorphous formation or a very fine nano-grain structure. The SAED ring pattern was well indexed to AlFe3 with the space group Fm3m, which is thermodynamically stable [34,35]. Liu et al. [22,27] also observed complex intermetallic Al-Fe phases at the interface zone, such as AlFe, AlFe3, Al5Fe2, and Al3Fe. Perhaps meta-stable intermetallics such as Al2Fe with low symmetry would be transformed to a high-symmetry phase [36]. From these results, it seems that the driving force of the high weldability between the two materials resulted from the metallic bonding observed in the microstructural analysis. When the flyer sheet collides with the target sheet at high speed and high pressure, the local temperature of the surface around the impact region should be high enough to form metallic bonding, eventually resulting in the formation of the AlFe3 intermetallic phase, which is hard and brittle in nature. Generally, the existence of such intermetallic phases is not preferable for enhanced joint strength and ductility; the crack found in Figure 18 indicates the brittleness around this region. However, even with this AlFe3 intermetallic phase, very high joint strength could be achieved due to metallic bonding between the Al and Fe sheets.

Conclusions

In this paper, the VFAW welding of dissimilar materials between TRIP1180 and AA5052-H32 has been conducted by substituting the standoff with a preformed shape in the target sheet to increase the efficiency of the VFAW process. The design parameters of the preformed shape were optimized through FEM analysis considering the restriction conditions from the geometrical limit of the preformed shape, which makes it possible to achieve perfect welding between TRIP1180 and AA5052-H32 by applying an input energy of 10 kJ. It has been concluded that it is substantially necessary to optimize the process parameters together with the geometrical design of the preform, since the welding strength can decrease beyond a specific input energy due to the nonlinearity of the process parameters in the standoff-free VFAW process. Many microstructural observations have been conducted to identify the composition and phases at the welding interface. From the SEM-EDS and micro-XRD results, diffusion between the aluminum and steel was observed, but it was not confirmed that a new phase was formed at the interface. From the results of the EBSD and TEM analyses, the AlFe3 phase was observed at the interface, which has a very fine grain structure because of the high-speed impact.
It can be concluded that the metallic bonding occurred at the interface during the VFAW process by forming the AlFe3 phase, which results in high welding strength between the dissimilar materials.
7,432.6
2021-08-31T00:00:00.000
[ "Materials Science" ]
The requirement elicitation process of designing a collaborative environment – The<EMAIL_ADDRESS>case. The software development industry is under constant pressure to deliver products that are innovative, fast to market and better suited to customers' needs and expectations. In this context, traditional approaches to product development can become obsolete, as the market and the end-users are dynamic and rather unpredictable, and the latter's active participation can sometimes make the difference between success and failure. The Agile approach, on the other hand, considers extensive collaboration with all stakeholders paramount in all stages of the software development process, and for this, a variety of requirements elicitation techniques are used in order to obtain extensive input that leads to designing and developing enhanced products and applications. This paper presents the requirements elicitation process and the techniques used in the development of an open collaboration and innovation platform –<EMAIL_ADDRESS>– which is an experimental model dedicated to creating a community where all relevant stakeholders from three specific creative industries, namely design, fashion design and crafts, can meet, interact, gain access to and share information, knowledge and resources.

Requirements elicitation process. The Agile framework

The process of requirements elicitation (RE) is generally accepted as one of the critical activities in software development, and it refers to the specific manner used in developing, analyzing and documenting all the system's requirements from stakeholders. Getting the right requirements is considered a vital but difficult part of software development projects [1], as it deals with the significant problem of designing the right software for the customer [2]. Requirements elicitation is all about learning and understanding the needs of users and project stakeholders, with the ultimate aim of communicating these needs to the system developers. A substantial part of elicitation is dedicated to uncovering, extracting and surfacing the wants of the potential stakeholders [3]. The conventional approaches to the requirements elicitation process used to focus on gathering all the system requirements and preparing the requirements specification documents up front, before proceeding to the design phase, but in practice this approach proves to be detrimental to the development process, as it leaves no room to accommodate any kind of change, which is inevitable due to business dynamics and the ongoing modification of customers' needs and expectations. In this context, the agile approach to software development and requirements engineering [4] proves to be a better fit, as it welcomes changing requirements, even late in the development cycle, to accommodate the stakeholders' needs.
Agile Methods (AMs) are a family of software development processes designed to deliver products on time, on budget, and with high quality and customer satisfaction, which have become popular during the last few years [5][6][7]. The aim of these methodologies is to deliver higher-quality products faster and better fitted to customers' needs, and this is done through extensive collaboration with the project's stakeholders. This element is one of the pivotal points of the Agile Manifesto, which states that the main priority should be given to "individuals and interaction over processes and tools, working software over comprehensive documentation, customer collaboration over contract negotiation, and responding to changes over following a plan" [6]. Agile Methods enforce very little upfront requirements elicitation but instead advocate incremental and iterative discovery throughout, and integrated with, the software development lifecycle [8], by prioritizing and delivering the most important functionalities first (a minimal illustrative sketch of such prioritization is given at the end of this paper). In most cases the process of requirements elicitation is performed incrementally over multiple sessions, iteratively to increasing levels of detail, and at least partially in parallel with other system development activities. Typically, the result of this process is a detailed set of requirements in natural-language text and simple diagrammatic representations, with additional information including descriptions of the sources, priorities, and rationales [3]. Moreover, due to the complexity of most software systems, the technical and non-technical requirements can come from a variety of sources: problem owners, project documentation, stakeholders, and other existing systems. Many of the effective techniques used in the requirements elicitation process therefore do not originate from the traditional areas of software engineering or computer science research but from social sciences, organizational theory, group dynamics, knowledge engineering, and very often from practical experience [3], and are being embedded in iteration-based agile methods [9]. The right combination of elicitation technique and a clearly proposed objective produces quality software; a clear objective after requirements elicitation is essential to the success of software development projects [10]. 2 Short description of the<EMAIL_ADDRESS>project<EMAIL_ADDRESS>is a complex and innovative platform developed through a research project implemented by the Engineering and Management Department of "Gheorghe Asachi" Technical University of Iasi. The goal of the project is to develop an experimental model dedicated to creating a community where all relevant stakeholders from specific creative industries can meet, interact, gain access and share information, knowledge and resources. The platform has three main components: -A crowdsourcing component aimed at engaging relevant stakeholders from creative industries for a common goal, in terms of knowledge discovery & management, distributed human intelligence tasking, peer-vetted creative production and broadcast search. Through this platform section, users can address questions and find answers, share creative ideas in order to receive feedback, find solutions to different problems etc.
-A marketplace component, which is a brokerage section aimed at pairing offers from support-services and raw-materials providers with companies from creative industries. Through this platform section, users can find relevant providers of products or services and can buy and sell products and services related to creative industries. -An e-shop component that offers each company or individual with an account on the platform the opportunity to create its own virtual shop in order to present and sell products to end-users. The requirement elicitation process for<EMAIL_ADDRESS>According to the literature, the typical activities of the requirements elicitation process can be divided into five fundamental types [3]: understanding the application domain, identifying the sources of requirements, analyzing the stakeholders, selecting the techniques, approaches, and tools to use, and eliciting the requirements from stakeholders and other sources. Understanding the application domain – this phase refers to the detailed investigation and examination of the situation or "real world" in which the system will ultimately reside (sometimes called the application domain) [11]. In our case, the domain analysis, focused on identifying missing online solutions and instruments that entrepreneurs and freelancers from creative industries could benefit from, was conducted in 2017 as part of the research process to apply for UEFISCDI financing. The outcomes of this stage consisted of two types of documentation: an academically oriented documentation and a business-oriented documentation. The academically oriented documentation focused on studies and analyses conducted by the research team regarding business ecosystems, open innovation and crowdsourcing, as domain knowledge in the form of detailed descriptions and examples plays an important part in the process of requirements elicitation [3]. The business-oriented documentation consisted of: the mission statement of the project, the main goals of the project and implicitly of the platform (experimental model), a list of the main components of the platform, a list of basic functionalities of the platform, and a list and general description of the target groups and the main stakeholders. Another relevant aspect considered in this phase was the project team's experience in designing and developing innovative online instruments for business development, as it proved instrumental in every stage of the requirements elicitation process. Identifying the sources of requirements – due to the complexity of software development projects, requirements may be spread across many sources and exist in a variety of formats [12]. The process of investigating all the relevant sources of requirements for the<EMAIL_ADDRESS>platform development provided a variety of sources and documents to be used in the requirements elicitation process by the project team. The main sources identified were: platform stakeholders – they were further involved in the requirements elicitation process through requirements workshops, namely focus groups, questionnaires and prototype testing.
users and subject-matter experts: in this case PhD students researching creative-industries ecosystems, web designers and web programmers. The PhD students performed extensive research on their subjects of expertise and delivered reports. Web designers and web programmers participated in requirements workshops to identify, analyze and prioritize the most important features and tasks to be performed in each sprint, and later on in the process in prototype testing. At the same time, their input was relevant regarding the non-functional requirements of the platform, such as security, performance, maintainability etc. existing systems and platforms – the academic project team performed a benchmark analysis of different platforms covering the three main components of the<EMAIL_ADDRESS>project: crowdsourcing, marketplace and e-shop, to identify common functional and non-functional features. project and legal limitations were also taken into account, as they are relevant to the implementation of different features of the platform and are listed in the financing contract signed between the University and UEFISCDI. Analyzing the stakeholders – customer involvement and interaction have been declared the primary reasons for project success and limited failure [13], and the identification and description of relevant stakeholders who can appropriately define, clarify and prioritize requirements is paramount [14]. Stakeholders are people who have an interest in the system or are affected in some way by its development and implementation, and hence must be consulted during the requirements elicitation process [3]. Identifying and analyzing the stakeholders was an extensive and rather difficult process, due to project complexity, but in the end the stakeholder list included: companies and individuals active in the three target creative industries, namely design, fashion design and crafts; companies and individuals that can provide support services, raw materials or products for creative industries; companies and individuals interested in purchasing products from creative-industries companies; and others, namely individuals interested in participating in crowdsourcing competitions and contests. All stakeholders were considered to be users, since for interactive systems users play a central role in the elicitation process, because usability can only be defined in terms of the target user population [15]. The stakeholders' needs were analyzed and evaluated for a primary identification of the main requirements, and the stakeholders were then further involved in the requirements elicitation process through requirements workshops (focus groups), by completing questionnaires regarding the platform's functionalities and design, and in the prototype-testing phase. Selecting the techniques, approaches, and tools to use – in practice it is generally accepted that most projects employ more than one requirements elicitation technique or approach. However, the variety of techniques employed depends on the specific context of the project and the organization developing it, and is often a critical factor in the success of the elicitation process [15]. The techniques and tools employed throughout the requirements elicitation process were chosen taking into account the project team's previous experience in developing online applications and the specifics of the stakeholders involved, and consisted of: domain analysis, brainstorming, requirements workshops, task analysis, questionnaires and prototype testing.
Eliciting the requirements from stakeholders and other sources – once all the sources of requirements and the specific stakeholders had been identified, the actual elicitation of the core requirements began, using each of the techniques selected in the previous stage. Domain analysis – the first stage of the requirements elicitation process, which took place in 2017 when the project team started working on the research project, consisted of an extensive literature review meant to clarify the research problem and to identify the needs of entrepreneurs from creative industries and the existing online instruments/platforms companies can use for business development. The domain analysis was followed by several brainstorming sessions, whose goal was to develop the preliminary mission statement for the project and target system and to establish who the platform's users are, what the main characteristics of the platform are, and what problem(s) it will solve for the users. At this stage, the project team designed the main components and functionalities of the platform, based on the benchmarking analysis conducted. Once the project received financing, the project team extended the use of requirements elicitation techniques to involve more stakeholders in the process through requirements workshops, task analysis, questionnaires and prototype testing. The requirements workshops, namely focus groups, were organized by the project team taking into account the two main types of stakeholders/users: one of the focus groups was held with companies and individuals active in the three creative industries nominated in the project, and another with web designers and web programmers. Each focus group had a different motivation and emphasis, but the common goal was to improve platform usability and utility. The focus-group guide included questions divided into several subgroups covering: technical aspects, graphical design, contents, structure and navigation. Each focus group was conducted by a member of the project team who had the facilitator role and who, apart from asking the questions included in the focus-group guide and recording the answers, also guided and assisted the participants in addressing the most relevant issues in order to receive accurate and complete requirement information. The results of the focus groups were analyzed and transformed into a detailed list of processes, which were further decomposed. After examining the focus-group results, the project team conducted a task analysis which employed a top-down approach in which high-level tasks were decomposed into subtasks. The main objectives in this stage were to construct the hierarchy of the tasks performed by the users on the platform and to determine the knowledge required for users to carry them out. This was a relevant assignment, as the platform is an innovative instrument and the target groups may not fully understand how to use it in order to gain the most advantages. Furthermore, the details resulting from the task-analysis stage were included in the questionnaires. The questionnaires were designed following the QFD (quality function deployment) methodology, which is used to translate customer wants and needs into product or service design characteristics using a relationship matrix (a minimal sketch of this computation is given below). The questionnaires were sent to all the types of stakeholders identified.
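To make the QFD step concrete, the following is a minimal sketch of the relationship-matrix computation; the needs, weights, characteristics and matrix entries are all hypothetical values invented for illustration and are not taken from the<EMAIL_ADDRESS>study:

```python
# Minimal sketch of a QFD relationship-matrix computation. All names and
# numbers below are hypothetical, for illustration only.

customer_needs = ["easy navigation", "fast search", "secure payments"]
need_weights = [5, 3, 4]  # assumed stakeholder importance ratings (1-5)

design_characteristics = ["menu depth", "index/query engine", "TLS + auth flow"]

# relationship[i][j]: strength of the link between need i and characteristic j,
# on the conventional QFD scale (0 = none, 1 = weak, 3 = medium, 9 = strong).
relationship = [
    [9, 1, 0],
    [1, 9, 0],
    [0, 0, 9],
]

# Technical importance of each characteristic = weighted column sum.
scores = [
    sum(need_weights[i] * relationship[i][j] for i in range(len(customer_needs)))
    for j in range(len(design_characteristics))
]

for characteristic, score in sorted(
    zip(design_characteristics, scores), key=lambda pair: -pair[1]
):
    print(f"{characteristic}: {score}")
```

Characteristics with the highest weighted scores are the ones the questionnaires and subsequent design effort would concentrate on first.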
Once the platform prototype had been designed, it was tested with all types of stakeholders in order to receive relevant feedback regarding the functionalities developed and the overall platform usability. Conclusions The requirements elicitation process is increasingly recognized as a critical activity in any software development process, due to the constant demand for better, faster and more usable applications. In this context, the project team analyzed and selected the elicitation techniques that best suit the project's specific needs, the stakeholders involved and the team members' expertise. The goal was to make sure that the requirements elicitation process was properly conducted and that the results obtained would thoroughly cover all the relevant functional and non-functional requirements of the platform. To ensure this, the research team opted for a combination of domain analysis, brainstorming, requirements workshops (focus groups), task analysis, questionnaires and prototype testing, together with ongoing active involvement of all relevant stakeholders. At the same time, requirements elicitation in agile approaches is an ongoing iterative process, as any change in business dynamics can influence and produce adjustments in the development of the platform.
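As an aside, the iterative prioritization that Agile methods advocate (delivering the most important functionalities first, as discussed in the Agile framework section) can be sketched minimally as follows; the requirement names, scores and capacity are hypothetical and purely illustrative:

```python
# Minimal, purely illustrative sketch of Agile backlog prioritization:
# requirements ranked by a simple value/effort ratio, then planned into the
# next iteration up to a sprint capacity. All names and numbers are invented.

from dataclasses import dataclass

@dataclass
class Requirement:
    name: str
    business_value: int  # assumed stakeholder rating, 1-10
    effort: int          # assumed implementation cost, 1-10

backlog = [
    Requirement("user registration", business_value=9, effort=3),
    Requirement("crowdsourcing contest page", business_value=8, effort=6),
    Requirement("marketplace search", business_value=7, effort=5),
    Requirement("e-shop storefront", business_value=6, effort=8),
]

# Highest value-for-effort first; plan as much as fits the sprint capacity.
backlog.sort(key=lambda r: r.business_value / r.effort, reverse=True)

capacity, sprint = 10, []
for req in backlog:
    if req.effort <= capacity:
        sprint.append(req.name)
        capacity -= req.effort

print("next sprint:", sprint)
```

Real projects weigh far more than a single ratio (risk, dependencies, stakeholder commitments), but the sketch captures the core idea of re-ranking the backlog each iteration rather than fixing all requirements up front.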
3,479
2018-01-01T00:00:00.000
[ "Computer Science" ]
John Locke's Ethics: Reflection on the Concept of Property Rights Happiness is the goal of mankind and an eternal philosophical discourse. The main values for achieving happiness have been explored since the era when logos replaced mythology, and they remain an important theme of philosophy because they are relevant to this day. Several philosophers have contributed to developing teachings about the main values for realizing happiness in people's lives. This article aims to explore the ethical thinking of John Locke, who is known as one of the founders of empiricism, in particular his recognition of individual property as something that supports the human goal of living a happy life. This concept later became the basis for the development of the market economy and global capitalism, following a logic of capital accumulation that has no end and that blurs the rational boundaries between individual rights and collective rights. The research method used is a literature study, tracing the works of John Locke in the field of ethics, especially on human rights and property rights. Works on John Locke's thought are also a source for this research. The results of the study show that not all of John Locke's thoughts on ethics are relevant to Indonesian conditions. As a philosophical discussion, John Locke's ethics can enrich concepts and theories about citizens' property rights that need to be protected by the state, but as policy it needs modifications in line with the main values and the needs of society. Introduction In the past, the Greeks were known to value the harmony of the universe. Aristotle built his ethical teachings to realize the ideal of "eudaimonia", happiness and success in life. One's life is eudaimon if one achieves success and a glory admired by society. Eudaimonia is the best actualization, attained when humans use their reason well (Ross, 1999). Aristotle's ethics, written in his main book, the Nicomachean Ethics, became the basic reference for ethics for centuries, even to this day. As moral philosophy, ethics is a human discourse about oneself, others, and one's social environment. In Aristotle's terms, happiness is the highest good, not just temporary pleasure. Someone unemployed for years will be happy on getting the desired job. A husband and wife, after trying all kinds of methods and treatments to conceive, will be happy when the wife begins to carry a child, the fruit of their love. A political party that wins an election after several periods in opposition gives happiness to its politicians and constituents. We could make this list as long as we want, covering the happiness felt by individuals, small families, and larger groups of people. Then what is real happiness? Are there main teachings, guidelines, or values that can serve as a guide to achieving it? This article examines the ethics of John Locke (1632–1704), a philosopher known as the founder of empiricism. The ideas of ethics and the values for achieving the virtues of life taught by Locke have received much support and much criticism. In the Indonesian context, the question is how far John Locke's ethical teachings can be accepted and influence policies on the protection of intellectual property rights. In the first part, we briefly examine ethical teachings as moral philosophy, from Ancient Greece to the era of John Locke, especially those related to happiness.
Next, we briefly describe the biography of John Locke and his career as an intellectual. The discussion is directed at John Locke's ethical teachings, particularly regarding property rights and their relevance to the current condition of society. The following section offers notes on and critique of John Locke's teachings, and the article ends with conclusions. Methods This research was conducted using a descriptive approach through library research on John Locke's ethics, particularly the notion of property rights. The research data take the form of information from several works of literature relevant to the discussion. The literature used consists of books, scientific journals, proceedings, theses, and newspapers, both in print and in digital form from the internet. The information is then reviewed and analyzed. A literature-study approach is used to collect relevant and in-depth information; the information is then critically analyzed and reflected upon to answer the research problems and draw conclusions. This literature study is directed at the ethical thought of philosophers, especially the ethics of happiness, and at the ethical thought of John Locke. One of Locke's ethical ideas is the recognition of private property, which is a prerequisite for happiness; this concept also underlies the recognition of intellectual property rights. Ethics as Moral Philosophy Etymologically, the word "ethics" comes from the Greek "ethos", which in the singular means habit, custom, character, feeling, attitude, or way of thinking. In the plural form, ethics can be interpreted as customs (Bertens, 2011, p. 4). Within these limits, ethics can be understood as a science that studies what may be done, or knowledge about customs. In the Big Indonesian Dictionary (KBBI), ethics is explained as the science of what is good and what is bad, or of rights and obligations. As a philosophical study, ethics always accompanies human life as that of a social being. Before Aristotle wrote down the teachings of ethics, philosophers already applied certain social conventions about what was considered good and bad, lived out in the form of attitudes, actions, and speech that society accepted as good morality. In the age of mythology, these good moral practices were directed not only to fellow humans but also to the gods. At the beginning of its development, ethical truth was not universal but directed at certain groups in society (Downs, 2012). For the Sophists, for example, an ethical question might be: "What is justice, goodness, and happiness to the Athenians?" From this question it can be understood that what is considered good for one community may differ for other communities. What was considered fair in Athens may have been different in Sparta, or in Andros. Likewise with bad words and actions, which would differ in each community at that time, so ethical issues were rather relative and subjective. The universal meaning of ethical truth was sought by Socrates (470-399 BC), through the search for the essences of things that apply to all and transcend the particularities of space and time. Ethical truth must be universal and objective truth, not relative and subjective in the style of the Sophists. Socrates believed that all moral virtues are forms of knowledge and can therefore be taught to every human being.
Through dialectical conversation (affirmation, refutation, debate), Socrates seeks clarity and depth in order to arrive at the authenticity of meaning, as well as its general nature. This method, according to Socrates, is the art of midwifery (Tjahjadi, 2004, p. 40). Socrates' ethical thinking examines the fundamentals of happiness. For Socrates, what can lead humans to happiness is virtue. Thus, ethics is a way of life for achieving happiness. However, happiness here is not a matter of satisfying all needs or ideals but lies in rational happiness (rational eudaemonism), which is the main happiness and the driver of human action. A good life is more important than life itself (Reshotko, 2012, pp. 1-25). Unlike his teacher, Plato (427-347 BC) had his own concept of ethics. According to Plato, if humans achieve purification of the soul, they will be happy. Plato's ethics seeks to liberate the soul from the confines of the body's prison. Therefore, sensory pleasure negates moral values; sensory pleasure is the antithesis of goodness. The path to right living is a path of purification in which man strives to attain pure wisdom and reach a higher level. For Plato, the human goal is eudaimonia, namely prosperity or a good life, a happy life. A good life is impossible without a polis, or city-state, which is a prerequisite for happiness because humans are social creatures. A well-managed polis will produce happiness for its citizens, whereas a poorly managed polis will find it difficult to achieve happiness (Mackenzie, 1985, p. 88-91). People live in the polis to meet their needs. Therefore, after the working class, the state also needs musicians, poets, artists, and teachers, followed by the presence of philosophers, namely the people fit to govern the state. Plato's ethics exerted a strong influence on Christian theologians. After Plato, Aristotle (384-322 BC) appeared as the major ethical thinker of Greek history, reaching his peak with the publication of the "Nicomachean Ethics" and the "Eudemian Ethics", both of which present practical knowledge of how to achieve happiness (Kucukuysal and Beyhan, 2011, p. 43-51). Like his two predecessors, Aristotle recognized that the final human goal is happiness; when happiness is achieved, humans need nothing else. Aristotle rejected the notion that the ultimate goal of human life is wealth, honor, or enjoyment. Wealth is not a goal, but a means to a higher goal. Honor is just an instrument that follows one's quality of life; that is, without a superior quality of life there will be no honor. Pleasure, too, is not the goal: the feeling of pleasure is felt not only by humans but also by animals, and for a rational being it is inappropriate to make pleasure the ultimate goal (Aristotle, 1999, p. 116). In addition, as social beings (homo socius), humans must cultivate both personal and social virtues to arrive at the desired goodness or happiness, which is not limited to meeting physical sensory needs. After Aristotle, Epicurus and the followers of Stoicism contributed their thoughts on ethics. For Epicurus, ethics follows sensory characteristics: since humans are composed of material atoms, the highest happiness must lie in sensual pleasures. In contrast to Epicurus, who prioritized bodily pleasures, for the adherents of Stoicism happiness is a consequence of living in harmony with nature and in accordance with the guidance of reason.
This philosophical view of Stoicism then gave birth to ideas about natural law as the measure of universal reason. Virtue is nothing but the perfection of universal reason. Within himself, every human being carries a law of truth (conscience) as the main guide toward the good. For the Stoics, the priority is the perfection of the human mind, each unique and following its nature. Happiness is achieved through living in harmony with reason and nature. The world of philosophical thought then entered the era of theocentrism, suffused with the concept of Divine revelation, in the field of Christian ethics, with Thomas Aquinas (1225–1274) as its main figure. Christian ethics cannot be separated from the Greek philosophical tradition. If Thomas Aquinas found his ethical basis in Aristotle, then Augustine (354-430 AD) found his ethical footing in Plato and in Plotinus' Neo-Platonism (MacIntyre, 1996, p. 107). However, the source of Christian morals or ethics is not Greek philosophy, but the Scriptures. For Augustine, happiness is not to be found in the enjoyment of food and drink, honor, wealth and sex, but in a dedication of the mind to discovering the truth. In De Trinitate XIII, Augustine points to a universal human desire, namely to achieve happiness, which is a natural human hope (Djung, 2014, p. 1-20). One must know what one wants and know how to make it happen. It is also necessary to mention the role of Arab philosophers in transferring the treasures of Greek philosophy to the Middle Ages. Thomas Aquinas and his teacher, Albertus Magnus (c. 1200–1280), read Aristotle's texts through the works and commentaries of philosophers such as Al-Kindi (801-873 AD), Al-Farabi (872-951 AD), Ibn Sina (980-1037 AD), Al-Ghazali (1058-1111 AD), and Ibn Rushd (1126-1198). In general, Arabic philosophy is Greek philosophy read through the lens of Islamic monotheism, just as the Latin philosophers read it through the framework of Christian monotheism. Al-Kindi's ethical thought can be considered representative of the other Arab philosophers. Referring to Plotinus, Al-Kindi spoke about the fate of the soul after death: the soul, once separated from the body, can find a more perfect truth than when it was still united with the body. For Al-Kindi, the substance of the soul comes from the Creator himself, like the rays that emanate from the sun; after death the soul returns to the Creator (Druart, 1993, p. 329-357). From the arguments about how to achieve happiness and avoid suffering, some scholars conclude that Al-Kindi's ethical teachings derive from the Stoics. Stoic philosophy holds that we should not base our happiness on temporary worldly things; instead, the basis of happiness should be something permanent and of fixed value, namely the intellectual world with its immaterial universal forms. According to Al-Kindi, philosophical activity is the highest activity, because it can lead people to happiness. Besides Augustine and Thomas Aquinas in Christian ethics, Immanuel Kant (1724-1804) was a leading figure in ethical teaching, who later became the pinnacle of the German Enlightenment (Aufklarung). His work on moral theory, the "Critique of Practical Reason", is one of his monumental works, alongside the "Critique of Pure Reason" on the theory of knowledge. These two works are considered phenomenal works of the Enlightenment era to this day. The basic ethical question for Kant is "What should I do?", which is a primordial question (Kant, 1952, p. 236).
Through this question, Kant means to say that the main focus is not "the good" that arises from an understanding of goodness. It thus becomes clear that Kant's ethics is an ethics of obligation (the categorical imperative), whereas the ethics of Aristotle and Thomas Aquinas are ethics of goodness and happiness, as emphasized in the Nicomachean Ethics. Kant bases ethics solely on the autonomy of reason and the good will. The pinnacle of the German Aufklarung is Georg Wilhelm Friedrich Hegel (1770-1831), with his "Dialectic of the Spirit". His theme of ethics is outlined in his work "Elements of the Philosophy of Right". Hegel criticizes Kant's concept of ethics as a modern bourgeois morality, detached from the general public because it prioritizes personal morality. For Hegel, ethics must be based on the concept of goodness as the actualization of human nature. The ethical life is described as a condition of the human will in which reason and will are in harmony; people of virtue are people whose desires and inclinations, built by nature and education, are in harmony with their minds. Hegel's work became the starting point for Karl Marx (1818–1883). For Marx, freedom is the most essential feature of human beings. The goals of freedom and human needs are goals implicit in the struggles of the working class in bourgeois or capitalist societies. Freedom would dissolve the social-class divide between the rich, or owners of capital, and the working class, even though eliminating class absolutely is impossible because there are always different roles in society according to one's abilities. Within social classes, moral appeals are useless; the use of the word "moral" always presupposes a divided form of society. Admittedly, Marx reduced ethics or morals to the wants and needs of the working class set against the social order of bourgeois society. Søren Kierkegaard (1813-1855) became the focal point of existentialist philosophy. As a militant Protestant, Kierkegaard held that rational arguments suggest that, in the end, individual choice should rule. For Kierkegaard, the fundamental question is: "How do I live?" People can choose a life that honors the law or one that pursues enjoyment. Reason can distinguish between good and evil, and in this choosing, God's will becomes the criterion. In contrast to Kierkegaard, Friedrich Nietzsche (1844-1900) considered Christianity a source of modern disease because it has caused a systematic devaluation of this world in favor of a world to come. Nietzsche's vocation was to build an entirely new morality, purifying the world by denying all the concepts of God that are the product of monotheism. Nietzsche's basic thesis is "the death of God" (Nietzsche, 2009, p. 69). With the death of God, Nietzsche promotes the purification of the world by denying all concepts of God, and promotes the ancient Greek god Dionysus as a symbol of human freedom. After God dies, there is no morality promoted by religion (nihilism); instead, freedom in the true sense drives human activity. Nihilism promotes the destruction of meaning and understanding. All the old systems of religion and morality still exist, but they have become marginalized. The ultimate goal of nihilism is happiness and enjoyment, resembling the ideas of Epicureanism; Nietzsche interprets happiness in its sensual or material form. The next ethical figure is John Stuart Mill (1806-1873).
Mill's main thesis is that all human beings desire pleasure, and that pleasure is the ultimate goal of universal human activity. As the originator of utilitarianism, Mill transforms pleasure into happiness experienced by the greatest number of people; the greatest happiness for most people is pleasure (Suseno, 2000, p. 91). Ethical utilitarianism is a target of criticism because it can sacrifice the happiness of some people for the sake of the happiness of large groups. The concept of utilitarianism has been held responsible for major tragic events in human history, for example the killing of the Jews by the Nazis or the bombing of the two major Japanese cities of Nagasaki and Hiroshima: the notion of general happiness becomes an instrument for legitimizing crimes against small groups of people. Contemporary ethics cannot be separated from the philosopher Emmanuel Levinas (1905–1995). Levinas' ethical thinking was influenced by his background as a descendant of Jews living in Europe during World War II, as well as by his association with colleagues such as Edmund Husserl (1859-1938), Martin Heidegger (1889-1976), Maurice Merleau-Ponty (1908-1961), Franz Rosenzweig (1886-1929), and Martin Buber (1878-1965). This association gave Levinas the opportunity to inherit the intellectual heritage of Europe, namely Western philosophy. Through his work "Totality and Infinity: An Essay on Exteriority", Levinas seeks to dismantle the totality that stems from the ego: the "I" must get out of the prison of the ego to meet the other. The appearance of the other constrains the freedom of the "I", so that the other may not be treated as one pleases; I must be the other's guardian and not harm him (Levinas, 1979: 194). For Levinas, ethics is a relational encounter with others. Given the diversity of societal characteristics, the actuality of Levinas' ethics deserves to be promoted, because it can erode egocentrism and increase social responsibility. Contemporary ethical thinkers are interested in the idea that ethics can provide answers to modern problems, such as injustice, abortion, crime, socio-economic inequality, and so on. Sketches of the Life History, Political Activities, and Intellectual Career of John Locke John Locke was born on August 29, 1632, in Wrington, United Kingdom. His father was also named John Locke, and his mother was named Agnes. Under the tutelage of his father, who sided with the pro-parliament Roundheads, young Locke already showed political passions opposed to the power of the British Monarchy. During his studies at the University of Oxford, Locke was actively involved in the liberation movements (the Glorious Revolution) that confronted the authority of King Charles II (Hardiman, 2019, p. 74-75). While developing his interest in medicine, Locke was also fond of the liberal teachings of his lecturer, John Owen, who was a strong influence on his later works. As a student, Locke studied philosophy, natural science, and medicine at the prestigious University of Oxford. The closeness of Locke's family to the monarchy's opposition led to his exile in the Netherlands. During that quiet period of exile his book "An Essay Concerning Human Understanding" was published, followed by the "Two Treatises of Government" after his return to England upon the fall of King James II.
The two books became important references for the movement to limit the king's absolutism, and they established Locke as a prominent figure of liberalism, influencing the Declaration of Independence and the United States (US) constitution. Together with Thomas Hobbes, John Locke initiated the concept of the social contract, a reference point for constitutional democratic countries, which require state power to rest on a social agreement (Wijaya, 2016, p. 183-193). Locke began his career as a teacher at Christ Church, where he had studied, teaching Greek and Latin. After publishing his important works, Locke worked for the government, especially in the field of economics, while still pursuing the life of an intellectual, engaging in polemics with colleagues such as Edward Stillingfleet (1635-1699). In the last four years of his life, Locke still managed to finish his "Paraphrase and Notes on the Epistles of St Paul", which shows the depth of his religious thought. On October 28, 1704, Locke died, and he was buried at High Laver, a village in Essex about 4 miles from Harlow, England. His tombstone carries a message he wrote while still alive: "Walkers, pause! Here lies John Locke. If you ask him what kind of person he is, he will answer: someone whose life is content with simple things. He was indeed brought up by science, but what he carried out his whole life is devotion to the truth" (Tjahjadi, 2004, p. 236). Acknowledgment of Property Rights Naturally, everyone has the right to protect their property. According to Locke, "The first and foremost aim of those who unite in a commonwealth, and place themselves under a government, is the safeguarding of their property, a safeguard which is not often found in the natural state" (Russell, 2016, p. 820). The ethics of this philosopher, the founder of empiricism, is practical in character: a person has legal rights over his property. In its absolute form, the doctrine that individuals have inalienable rights is inconsistent with the principle of utility, namely the doctrine that right actions are those that support the realization of shared happiness. The idea of property rights places Locke as a forerunner of capitalism. In the pre-industrial era, urban production was handled by craftsmen who owned their equipment and sold their products; for Locke, the system in which the farmer owns what he works was the best system. The position of the workers was to be restored through agreements over the land (the commons), which enabled them to improve their livelihood. Locke's theory of ownership emphasizes the results of human work on nature, or on land, granting ownership to those who cultivate it. This concept is part of "The Second Treatise of Government", which outlines several aspects: the labor factor, the prohibition of taking more than what is needed, and availability for others (Locke, 2012). To obtain property rights, humans work with their bodies and hands, processing natural objects so that they produce something of greater value than the original object. Work thus makes a clear distinction regarding what counts as human possession: if humans do not work, they are not entitled to property rights. Through work and creativity, humans use their energy and thought to process the products of nature.
Nevertheless, Locke put forward two conditions for someone to obtain property rights: a) work can ground someone's ownership only if a sufficient and good supply is left for others; b) there are restrictions on how much a person may appropriate. As a major thinker of liberalism and individual freedom, Locke was under the influence of the natural law theory developed by Hugo Grotius (1583-1645) and Samuel Pufendorf (1632-1694). According to Locke, God has bestowed rational abilities on humans, which enable them to consider the actions they want to take. In line with this, Locke emphasized that the state must protect individuals and their property rights. In summary, Drahos (1996, p. 43) condenses Locke's theory of ownership into several main points: 1) God has given this world to humankind in common. 2) Everyone can have things of his own. 3) The results of a person's work are owned by that person. 4) When humans mix their work with something from nature, the result of the work is theirs. 5) The right to ownership is conditional: a person must leave enough of the resource, and of good quality, for the needs of others. 6) One cannot take from the commons something that can be used for the common good. According to Snyder, Locke's basis for ownership is thoroughly natural: it is not based on an agreement or on (rational) human consciousness, and it precedes human contracts. The use of natural law theory must be seen as part of the doctrine of innate knowledge. By grounding it in the natural law of thought, the original condition that Locke envisages is a condition of equality, part of God's normative commandment to maintain the peace and the preservation of the world (Snyder, 1986: 730). The Concept of Intellectual Property Rights and Its Problems John Locke's concept of property rights forms the basis for the protection of Intellectual Property Rights (IPR), including in the current digital era, and animates the idea of IPR as property rights. This concept of rights shows that the law protects a person who works and the results of his work, so that a climate supporting creativity is created in society. Everyone will compete without hesitation to develop their talents and creativity, not only from the motive of economic benefit but also from the certainty that the results of their creativity are recognized and protected. The basis of this argument stems from the thought of Jeremy Bentham (1748-1832) and John Stuart Mill (1806-1873). For Mill, every human action is ethical if it is in line with the principle of expediency, creating happiness. In line with this, according to Bentham, every action is good if it brings happiness to the majority of society (the greatest happiness for the greatest number) (Dua, 2008, p. 61). In legal terminology, moral rights are rights inherent in a creator (author, creator, inventor); "inherent" means that the right cannot be removed even after the right has ended. Law No. 28 of 2014 concerning Copyright states that moral rights are rights that are eternally attached to the creator. This norm is in accordance with the Berne Convention (The Berne Convention for the Protection of Literary and Artistic Works), 1886. Because moral rights are eternally attached to creators, these moral rights apply indefinitely.
Copyright itself is a material right, namely an absolute right to an object that gives direct power over the object and can be defended against anyone (Sofwan, 198, p. 24). According to Badrulzaman, material rights are divided into two kinds: perfect material rights and limited material rights. Perfect material rights provide full enjoyment for the owner, whereas limited material rights provide incomplete enjoyment of an object compared with property rights; that is, limited material rights are not full or perfect (Badrulzaman, 1983, p. 43). The current construction of the Copyright Law is the result of a reconfiguration of legal politics through the political choice of legal concordance and transplantation carried out for nearly a century in the archipelago. The embryo of the Copyright Law dates to 1912, when Auteurswet 1912 Stb. No. 600 was enforced in the Dutch East Indies. The political choice of translating foreign laws into the National Copyright Law stems mainly from international agreements. If not approached with critical consideration, these international legal instruments can be used as a new model of imperialist tools. Indonesia has ratified international trade agreements within the framework of the General Agreement on Tariffs and Trade/World Trade Organization (GATT/WTO), which contain provisions on intellectual property rights (Trade-Related Aspects of Intellectual Property Rights/TRIPS). As a consequence, apart from having to adapt its intellectual property regulations to the TRIPS Agreement, Indonesia must also prepare legal instruments to comply with international legal instruments. These international agreements have juridical consequences: Indonesia is required to comply with the agreements it has made, and the international relations carried out by Indonesia become institutionally bound to the international organizational bodies that manage copyright protection. The interests of developed countries can thus be imposed without being seen as intervention in the internal affairs of a country. Meanwhile, developed countries have no interest in changing the laws of their own countries, because their law has been the reference for intellectual-property law reform agreements since the GATT/WTO. This condition can be seen as intervention by developed countries in developing countries, which are forced into it through pragmatic political choices. The flow of globalization is a wave that cannot be avoided, as the progress of human civilization is followed by social change. However, simply following the flow of these changes, without trying to filter out the parts that must be followed, is a haphazard course of action. The national copyright law is a regulation loaded with the interests of developed countries and carrying a liberal capitalist ideology. Transplanting foreign law into national intellectual property law erases the nation's socio-cultural values, which in turn has the potential to erode the creativity of the Indonesian people. Indonesia has repeatedly been labeled a haven for intellectual property pirates, especially of copyrights, placing it on developed countries' black lists and exposing it to economic sanctions, for example the refusal of garment exports by the United States if it does not immediately carry out effective intellectual-property law enforcement.
Criticism and Debate of the Concept of Property Rights The debate over property rights stems from the boundary between shared rights and individual rights. The universe is bestowed by God to be enjoyed by humans: in what part, and to what extent, can a person claim property rights that cannot be claimed by another person? Grotius (1583-1645), a Dutch legal expert who later influenced the teachings of John Locke, argued that humans need to agree, under a contract, on the boundaries between common property and the parts that can be claimed individually. This view was challenged by Robert Filmer (1588-1653) for two reasons. First, if property was originally shared, then no human being can lose it. Second, it is questionable whether an agreement on the boundaries of private and individual property made by past generations remains valid for future generations. For Filmer, the distribution of property boundaries must be an expression of God's will, not of human agreements that can change: once the agreement rests on human commitment, any right is open to unlimited revision. If political authority does not come directly from God but rests on human choice, the idea of property rights appears fragile. Locke, meanwhile, held that truth, originating in human reason and revelation, confirms that the universe belongs to God and is the inheritance of mankind, to be enjoyed together: "Earth and everything in it was given to humans to support and please their lives". Nevertheless, Locke also thought about limiting individual property rights for the sake of the common good. For Locke, that limit is that "at least there are enough resources that are good for others as common property". Taking excessively and not using things optimally are acts to be avoided; Locke said, "no creation of God for the sake of human beings should be damaged or rotten". Humans have the right to property for several reasons and under several prerequisites. First, humans have worked with their energy and creativity, and thus are entitled to obtain or own what they have processed from nature; when humans have mixed their work with natural products, the results of this work are theirs, which is known as the labor theory of ownership. Second, resources must remain available to others in sufficient quantity: the condition that there are sufficient resources, in good condition, for the benefit of others is a prerequisite for individual ownership. Third, individual property rights are restricted: a human being should not take more than what is needed, which relates to the aspect of sufficient availability for others. Although Locke's property theory provides limits on individual property rights, it does not necessarily bring happiness to mankind, especially when the property right is applied to intangible property such as copyright, an exclusive right that arises automatically. As explained in the previous section, in copyright matters the potential for developing countries' dependence on developed countries is opened up through globally binding international conventions. Recognition of property rights encourages the process of industrialization through the accumulation of capital and resources, the main supporting forces of modern societies pursuing economic prosperity, which then affect social, cultural, lifestyle, and other aspects of life.
Through industrialization, humans are able to utilize parts of the universe for their own purposes, far beyond the achievements of previous generations. Much of the work of human hands has been replaced by machines, robots, and various forms of artificial intelligence that allow work to be done faster and more easily, with multiplied results. This is in fact the heart of the capitalist economy: the aim of production is not consumption but the accumulation of capital, which keeps the gap between the owners of capital and the working class wide, if not widening. The owner of a shoe factory, for example, does not make shoes solely for himself and his family to use, but rather to earn a profit from trade. It is this kind of profit-seeking and capital accumulation that makes the natural boundaries of individual property ownership disappear within the rationality of a capitalist economy. In the view of Friedrich Wilhelm Nietzsche (1844-1900), Locke's ethical teachings are inadequate and superficial, because they do not consider values beyond enjoyment, for example altruism, sacrifice, or the deferral of individual enjoyment (Budiman, 2019, p. 80). In Snyder's language, the root of all evil is not merely having something, but the desire to have more than is rightfully one's own (Snyder, 1986, p. 723-750). Economic domination in the current mode of globalization takes the form of an ever greater capacity to transfer the results of production, created through the labor and sweat of so many workers, into the hands of a few owners of capital. For the purposes of accumulation, capital owners are not bound by regulations regarding production location, sources of capital, technology, or the participation of local residents. Since the investors' bargaining position is strong, they can resist labor demands, and even government regulations, by boycotting investment or moving from one country to another that offers softer terms and higher profit incentives. Conclusions Ethics as a teaching of moral philosophy has roots long before the era of John Locke. Philosophers since the era of Socrates have attempted to find the main values in life that can achieve happiness (eudaimonia), and this struggle of ideas will probably continue until the last generation of mankind. In the late 17th century, John Locke emerged as a spokesman for liberalism and the principles of human rights, with his ideas on liberty, the recognition of individual property rights, and the boundaries of common property, which drew support as well as criticism. Locke's liberal thinking became the basis for the development of the ideology of the market economy and global capitalism, which became the mainstay of modern society, born originally in Western and Northern Europe and then spread throughout the world. Nevertheless, the natural boundaries of property ownership are difficult to realize in modern society, because capital can be accumulated without limit, especially when this is reinforced through legal instruments, both within the national legal framework and within the international legal frameworks for global economic transactions. Under these conditions, reckless deregulation is a shortcut for transferring various resources into the hands of the owners of financial and capital assets.
As a philosophical study, the article reflects on the concept of property rights as part of the most basic human rights.
8,475.6
2023-01-01T00:00:00.000
[ "Philosophy" ]
Definitive Endoderm Formation from Plucked Human Hair-Derived Induced Pluripotent Stem Cells and SK Channel Regulation Pluripotent stem cells present an extraordinarily powerful tool to investigate embryonic development in humans. Essentially, they provide a unique platform for dissecting the distinct mechanisms underlying pluripotency and subsequent lineage commitment. Modest information currently exists about the expression and the role of ion channels during human embryogenesis, organ development, and cell fate determination. Of note, small- and intermediate-conductance, calcium-activated potassium channels have been reported to modify stem cell behaviour and differentiation. These channels are broadly expressed throughout human tissues and are involved in various cellular processes, such as the after-hyperpolarization in excitable cells, and also in differentiation processes. To this end, human induced pluripotent stem cells (hiPSCs) generated from plucked human hair keratinocytes have been exploited in vitro to recapitulate endoderm formation and, concomitantly, used to map the expression of the SK channel (SKCa) subtypes over time. Thus, we report the successful generation of definitive endoderm from hiPSCs of ectodermal origin using a highly reproducible and robust differentiation system. Furthermore, we provide the first evidence that SKCa subtypes are dynamically regulated in the transition from a pluripotent stem cell to a more lineage-restricted, endodermal progeny. Introduction Mammalian development is a tightly regulated process, with considerable biochemical and physiological changes occurring from the time of fertilization to the onset of gastrulation and further differentiation towards fully formed organisms. Understanding early fate decision events, such as the segregation of the three germ layers, is a prerequisite for regenerative medicine [1][2][3][4][5]. The advent of induced pluripotent stem cells and their unique features of unlimited self-renewal and unrestricted differentiation capacity marked a milestone in the battle to dissect such processes directly in the context of human development [6][7][8]. Given the remarkable accordance between embryonic development in vivo and its respective model system in vitro, it is not surprising that most of the currently available pluripotent stem cell differentiation protocols make use of physiological, stage-specific signalling cues in order to recapitulate the development of all three germ layers: ectoderm, mesoderm, and endoderm. Further differentiation towards more specialized cell types has also been achieved, for example formation of primitive gut tube endoderm (SOX17/Hnf1b positive [9,10]), pancreatic progenitor cells (Pdx1/Cpa1 positive [11,12]), and hepatic progenitor cells (AFP/HNF4a positive [13]) from definitive endoderm progenitor cells. Nevertheless, the precise mechanisms governing such complex processes are not completely understood. Another limitation lies in achieving highly homogeneous, reproducible cell type-specific yields. As a result, the current use of hiPSCs for disease modelling, where the aim is to use in vitro differentiated patient-specific pluripotent stem cells to replace the patients' damaged cells, is massively hindered. In consequence, critically defined, efficient, and robust differentiation protocols are highly anticipated.
Endoderm comprises the innermost of the primary germ layers of an animal embryo, and its primary role is to provide the epithelial lining of two major tubes within the body. The first tube, which extends the entire length of the body, is known as the digestive tube; it undergoes budding during embryogenesis to form the liver, gallbladder, and pancreas. The second tube, the respiratory tube, forms as an outgrowth of the digestive tube and gives rise to the lungs. Notably, two distinct sets of endoderm can be distinguished in the developing embryo: visceral endoderm, arising directly from the inner cell mass, and definitive endoderm (DE), derived from mesendoderm within the anterior primitive streak in close proximity to the cardiovascular progenitors [1,[14][15][16]. The visceral endoderm forms the epithelial lining of the yolk sac [17], while the DE is responsible for the internal (mucosal) lining of the embryonic gut and is governed by the expression of key transcription factors such as SOX17 [18], Foxa2, or Hex1 [19]. To date, a large group of proteins has been broadly neglected concerning its role during developmental processes, namely ion channels. In addition to modulating the membrane potential in various tissues and cell populations, ion channels have been identified as being involved in a number of biological processes, such as proliferation, cell differentiation, and cell morphogenesis. Since these mechanisms are abundant in the transition of stem or progenitor cell populations to more defined cell types of different origin and potency, a role for ion channels in developmental processes can be hypothesized [20][21][22][23]. In particular, the absorptive tissues derived from the DE are often rich in ion channels, and defects in these channels are responsible for some harmful diseases. One prominent example is cystic fibrosis (CF), a common, autosomal recessive disorder due to mutations in a chloride channel known as the CFTR. Located on the plasma membrane of many epithelial cells, the mutated channel gives rise to abnormalities of salt and fluid transport in many endoderm-derived tissues including lung, pancreas, and liver [24]. However, the contribution of other ion channel families to diseases within the foregut has been poorly studied.
Indeed, in pluripotent stem cells, activation of small- and intermediate-conductance calcium-activated potassium channels (SK channels; SKCas) triggers the MAPK/ERK pathway following RAS/RAF activation, ultimately giving rise to cytoskeletal rearrangement, cardiogenesis, and cardiac subtype specification [2,3,5,25]. The group consists of four members, namely, SK1 (KCa2.1, KCNN1), SK2 (KCa2.2, KCNN2), SK3 (KCa2.3, KCNN3), and SK4 (KCa3.1, KCNN4). The functional channel pore is formed by the assembly of four subunits. Additionally, widely distributed functional splice variants of SKCas have been found throughout the organism in several tissues [26-28]. Functional SKCas are constructed not only as homo- but also as hetero-tetrameric channel proteins, most probably serving cellular and functional specificity [26,29]. The pore is opened following subtle elevation of intracellular calcium levels. Calmodulin, attached in a Ca2+-dependent manner to the C-terminus of the channel subunits, specifically binds Ca2+ ions and mediates a conformational change of the channel protein, leading to the opening of the pore [30,31]. Calcium is the only known physiological activator of SKCas, and channel opening occurs within a few milliseconds [31]. SK1-3 are highly expressed in the nervous system, where they modify the membrane potential; that is, they crucially contribute to the after-hyperpolarization and therefore regulate the firing pattern, frequency, and length of action potentials in different neuronal networks [32-35]. On the other hand, SKCas play important roles in multiple other cellular functions, namely in cerebral and peripheral blood vessel smooth muscle, the functional myocardium, and neural progenitor cells [21,36,37].

In the current study, we present a robust and efficient differentiation protocol to drive plucked human hair-derived hiPSCs towards definitive endoderm. Furthermore, we analyse changes in protein and mRNA expression of the SKCa family of ion channels in the transition from a pluripotent cell state to a definitive endoderm-committed cell type.

Materials and Methods
2.1. Keratinocyte Culture from Plucked Human Hair. Outgrowth of keratinocytes from plucked human hair was induced as described previously [25,38]. Keratinocytes were split onto 20 µg/mL collagen IV-coated dishes and cultured in EpiLife medium with HKGS supplement (both Invitrogen, USA). The use of human material in this study was approved by the ethical committee of Ulm University (Nr. 0148/2009) and is in compliance with the guidelines of the Federal Government of Germany and the Declaration of Helsinki concerning Ethical Principles for Medical Research Involving Human Subjects.

Rat Embryonic Fibroblast (REF) Culture. REFs were isolated from day E14 Sprague Dawley rat embryos as described previously [38] and cultured in DMEM supplemented with 15% FCS, 2 mM GlutaMAX, 100 µM nonessential amino acids, and 1% Antibiotic-Antimycotic (all Invitrogen). Cells were passaged using 0.125% trypsin digestion upon reaching confluence, for up to 5 passages. All animal experiments were performed in compliance with the guidelines for the welfare of experimental animals issued by the Federal Government of Germany, the National Institutes of Health, and the Max Planck Society (Nr. O.103).
Reprogramming Keratinocytes. Keratinocytes at 75% confluence were infected with 5 × 10^5 proviral genome copies in EpiLife medium supplemented with 8 µg/mL polybrene on two subsequent days. On the third day, keratinocytes were transferred onto irradiated REF feeder cells (2.5 × 10^5 cells per well, irradiated with 30 Gy). Cells were cultured in hiPSC medium in a 5% O2 incubator, and medium was changed daily. After 3-5 days, small colonies appeared, showing a typical hiPSC-like morphology. Around 14 days later, hiPSC colonies had reached the appropriate size for mechanical passaging and were transferred onto irradiated MEFs or onto Matrigel-coated (BD, USA) dishes for further passaging.

Quantitative One-Step Real-Time RT-PCR (qPCR). Analysis was performed as previously described. Briefly, one-step real-time qPCR was carried out with the LightCycler System (Roche, Mannheim, Germany) using the QuantiTect SYBR Green RT-PCR kit (Qiagen, Hilden, Germany). Relative transcript expression was expressed as the ratio of the target gene concentration to that of the housekeeping gene hydroxymethylbilane synthase (HMBS) [47,48].

2.9. FACS Analysis. For flow cytometry, cells were harvested with TrypLE (Invitrogen) for 7 min at 37°C to obtain a single-cell suspension. Next, cells were washed twice with PBS and blocked with 5% HSA solution (in PBS) to avoid unspecific binding of the antibodies to the Fc receptor. Cells were washed again with PBS and incubated for 40 min at 4°C with CXCR4-PE (Invitrogen); subsequently, c-Kit-APC (Invitrogen) was added for an additional 10 min at 4°C in FACS buffer (2% FCS in PBS), according to the manufacturer's instructions. Cells were washed with FACS buffer, 50 ng/mL DAPI was added to exclude dead cells from analysis, and the samples were directly analysed on an LSRII flow cytometer (BD).

For intracellular SOX17 staining, cells were washed twice with PBS and blocked with 5% HSA solution (in PBS) to avoid unspecific binding of the antibodies to the Fc receptor. Cells were washed again with PBS, and the pellet was resuspended in 4% PFA and incubated for 15 min at 37°C for fixation. Subsequently, the cell pellet was resuspended in 0.5% saponin in FACS buffer (saponin buffer) and incubated for 30 min on ice. Cells were pelleted and stained with SOX17 (1:100, R&D Systems) at 4°C for one hour. Cells were washed with saponin buffer and afterwards incubated for 30 min at 4°C with anti-goat Alexa Fluor 647. Finally, cells were washed with FACS buffer and directly analysed on an LSRII flow cytometer (BD).

Reprogramming Human Hair-Derived Keratinocytes to hiPSCs. For the depicted studies, we utilized keratinocyte cultures from plucked human hair of healthy individuals (Figure 1(a)). With the use of a lentiviral, multicistronic four-factor reprogramming system, keratinocytes were successfully reprogrammed to human induced pluripotent stem cells displaying embryonic stem cell-like morphology (Figure 1(a)) as well as hallmarks of pluripotency, tested via immunohistochemistry and qRT-PCR for the expression of embryonic stem cell markers. Several lines from more than 5 individuals (data not shown or reported in [3,25,38]) were tested for their proliferation and differentiation capacity, and subsequently two lines were selected, named "hiPSC 1" and "hiPSC 2".
" Both lines were additionally tested for the protein expression of OCT4, SOX2, NANOG, SSEA4, TRA1-60, and TRA1-81 (Figure 1 .Taken together, our established hiPSC lines display an embryonic stem cell like phenotype, proven by morphology and expression of pluripotency markers as well as absent mRNA for endodermal markers.approach [43][44][45].Small molecule-based assays are less biased by batch-to-batch variations and are usually more cost effective.Upon extensive testing of different combinations, our protocol led to the following replacements of established growth factors being known to drive definitive endoderm differentiation: CHIR90021 replaced Wnt3a [40], IDE1 replaced Activin A [41], and LY294002 inhibited the AKT signalling pathway to abolish pluripotency [42].Figure 2(a) represents a detailed scheme of the differentiation conditions used for the formation of DE from day 0 (undifferentiated pluripotent hiPSCs) to day 6 (definitive endodermal cells).In vitro differentiated hiPSCs became positive for endodermal markers, confirmed by positive immunostaining of cells on day 5 for FOXA2 and SOX17 (Figure 2(b)).To analyse and characterize the SOX17 expression more objectively, we quantified SOX17 expression via intracellular FACS analysis in a time course from day 3 to day 6 of the applied protocol.Figure 2(c) shows representative FACS plots from both lines, representing SOX17 positive cells on day 3 and day 5.We did not observe differences in the differentiation capacity of virus-free hiPSCs after excision of the reprogramming cassette in comparison to virus-containing cells, making further analyses of silencing of exogenous factors unnecessary (Figure 2(d)).In summary, SOX17 expression is increasing from approximately 45% at day 3 to nearly 80% of SOX17 positive cells on day 6.Recent publications depict CXCR4 and c-KIT positive cells as definitive endoderm progenitors, that give rise to self-renewing endodermal progenitor cells (EPCs) [49].To confirm our protocol, we did time course analysis by flow cytometry for CXCR4 and c-KIT positive cells during differentiation.To further confirm the definitive endodermal identity of the differentiated lines, we measured mRNA levels using qRT-PCR analysis for OCT4, SOX17, and FOXA2.From day 1 to day 5 mRNA levels for the pluripotency marker OCT4 decrease continuously (Figure 2(g), summarized for hiPSC 1 and hiPSC 2).SOX17 and FOXA2 levels were tested in the two established hiPSC lines during differentiation and displayed increasing mRNA levels from day 1 to day 5 (Figure 2(g)).This data clearly indicates that the investigated hiPSC lines can be differentiated into DE, loosing markers of pluripotency and up regulating the expression of endodermal markers during endoderm formation. Expression of Calcium-Activated Potassium Channels (SKCas) during DE Differentiation. 
Next, we took a closer look at the expression of the different SKCa subtypes during DE differentiation. hiPSCs were differentiated into DE cells, and expression of SKCas was investigated after 5 days of differentiation. On day 5, SOX17 is strongly expressed, indicating differentiation into DE cells (Figure 3(a)). To analyse the expression of the SKCas, DE cells were stained for the different SKCa subtypes. Immunofluorescence analyses show a quite strong and stable expression of SK1, SK2, and SK3, whereas SK4 seems to be expressed at a lower level (Figure 3(b)). SK1, 2, and 4 are localized in the cytoplasm and the cell membrane. However, SK3 is localized not only at the cell membrane but also as puncta in the nuclei (Figure 3(c)), a finding that needs to be analysed in further studies. Double immunofluorescence stainings for SOX17 and the respective channel proteins are shown in Supplementary Figure 1, available online at http://dx.doi.org/10.1155/2013/360573. mRNA expression analysis via quantitative RT-PCR (qRT-PCR) shows a relatively constant expression of SK1 and SK2 during DE differentiation (Figure 3(d)). In contrast, transcript levels of SK3 clearly increased after 4 days of differentiation (Figure 3(d)). SK4 mRNA levels marginally increased during the first days of differentiation and peaked on day 3, followed by a sharp decline up to day 5 (Figure 3(d)).

Of note, all four SKCa subtypes are expressed during DE differentiation. SK1 and SK2 are constantly expressed, whereas SK3 seems to be upregulated during ongoing DE differentiation. The transcript levels of the different SKCa subtypes on day 5 reflect our observations from the immunofluorescence analysis. In sum, all four SK subtypes are differentially expressed during DE differentiation of human induced pluripotent stem cells, with a yet undescribed localization of SK3 in the nucleus.

Discussion
In the current study, we provide proof of concept that plucked human hair-derived iPSCs are highly potent in their capacity to commit not only towards mesoderm [3] and neuroectoderm [25] but also towards the endodermal germ layer, particularly definitive endoderm. To this end, a newly adapted protocol based on previously published studies was applied, and the resulting cells were extensively characterized by gene expression analysis, immunofluorescence microscopy, and FACS staining for intracellular and surface markers defining the definitive endoderm signature.

As induced pluripotent cells are currently considered to resemble human embryonic stem cells, a state-of-the-art assay for hiPSC generation is required. Such an assay requires the following prerequisites: (i) noninvasive harvest of the cell type of origin, (ii) broad applicability in terms of guided differentiation to all three germ layers, (iii) usefulness for large-scale hiPSC biobanking, (iv) high efficiency, and (v) fast reprogramming to the hiPSC stage. Keratinocytes from the outer root sheath of plucked human hair represent such a cell source and thus point towards the generation of patient-specific human induced pluripotent stem cells as a new paradigm for modelling human disease and for individualizing drug testing. Previously, we have further optimized this method in terms of efficiency and speed by using rat embryonic fibroblasts as a feeder layer for keratinocyte reprogramming [38]. The arising hiPSCs fulfilled all the prerequisites of pluripotency, including teratoma formation and spontaneous three-germ layer differentiation.
In further studies, we have applied plucked hair-derived hiPSCs to guide differentiation towards motoneurons [25] and cardiac pacemaker cells [3], both representing highly specialized cell types of either ectodermal or mesodermal origin. However, their capacity to give rise to definitive and primitive gut tube endoderm remained elusive. While forming, definitive endoderm is incorporated by morphogenetic movements into a primitive gut tube stage. This in turn is patterned into foregut, midgut, and hindgut to form the functional epithelial compartment of multiple internal organs: liver, intestines, lungs, and the pancreas [50]. Nowadays, virtually every cell population arising from the primitive gut tube has been generated using guided differentiation of pluripotent cells towards liver, intestines, lungs, and the pancreas [51-53]. Thus, the induction of DE cells marks a prerequisite for the entire process of pluripotent stem cell differentiation into, for example, pancreatic or hepatic progenitor cells [54,55]. Several protocols have been developed and modified to increase the efficiency of DE commitment. All these protocols are strongly dependent on high doses of TGFβ signalling mediated by Activin A as the major driving force of the process. However, large-scale differentiation experiments should be cost-effective, making a small molecule-based assay more desirable. To this end, we combined several previously described strategies. First, we replaced Activin A by IDE1, a compound shown to display similar and in part superior characteristics compared to Nodal or Activin A [41]. Similarly, we substituted Wnt3a by the small molecule CHIR99021, which inhibits GSK-3 kinase to mimic Wnt signalling [40]. The third small molecule, LY294002, inhibited the AKT signalling pathway by repressing PI3K to promote the exit from pluripotency [42]. In consequence, a robust and reproducible assay was developed that proved effective in several human plucked hair-derived iPSC lines. As the formation of definitive endoderm is a prerequisite to obtain, for example, relevant numbers of pancreatic β-cells, our data in combination with the presented reprogramming strategy are highly relevant for human disease modelling approaches.
However, several studies have suggested that β-cells generated from human pluripotent stem cells lack adult maturity and at most reach fetal maturity, as particularly expressed by their polyhormonality. This observation reinforces the notion that establishing culture conditions that promote appropriate maturation represents a significant obstacle for the generation of functional β-cells in vitro [56]. A recent landmark paper identified self-renewing definitive endodermal progenitor cells as a potential cell source to bypass this limitation. β-cells generated from these cells showed features of adult maturity, as demonstrated even by functional assays [57]. Given the fact that all protocols published so far lack this feature, the quality of the definitive endodermal intermediate seems to have an impact on the final maturity. The generation of definitive endodermal progenitor cells was characterized by high positivity for c-KIT and CXCR4 [57]. Thus, we included a FACS-based tool in our current DE analysis and indeed succeeded in obtaining a pattern likely to allow the isolation of this distinct cell type. The similar differentiation capacity of all our analysed plucked human hair-derived iPSC lines is relevant to the field of disease modelling using patient-specific material. Plucked hair keratinocytes are more or less the only cell type that matches the above criteria. Nevertheless, a potential ectodermally biased epigenetic memory could limit their utility [58]. Our findings argue against such a bias, at least based on the number of different cell lines tested and the reproducible endodermal commitment pattern.

The development of in vitro models of embryonic development is a prerequisite to build new knowledge and to develop new strategies targeting various genetic diseases. The development and investigation of endoderm-derived cells, such as pancreatic cells, are of high importance for the field of developmental biology and for clinical applications. Induced pluripotent stem cells (iPSCs), with their unique features of unlimited self-renewal and unrestricted differentiation capacity, are a highly promising tool for regenerative medicine as well as for studies on developmental biology. iPSCs have been generated from a variety of different cell types originating from all three germ layers [38,58,59]. Finally, this setup has been used to determine the expression pattern of a certain ion channel family previously shown to be differentially regulated in embryonic stem cells and involved in differentiation processes, namely, small- and intermediate-conductance calcium-activated potassium channels [2,3,5,60]. Thus, our study gives novel insights into guided pluripotent stem cell differentiation towards definitive endoderm and a potentially involved protein family.
SKCas exhibit either small (SK1, KCa2.1, Kcnn1; SK2, KCa2.2, Kcnn2; and SK3, KCa2.3, Kcnn3) or intermediate (SK4, IK, KCa3.1, Kcnn4) unitary conductance for K+ ions. Important roles in multiple cellular functions, for example, cell cycle regulation in cancer cells [20,61], smooth muscle relaxation [23,62], mesenchymal stem cell proliferation [22], and cytoskeleton reorganization in neural progenitors [21], have been reported. SKCas are widely expressed throughout different tissues. While SK1 is exclusively expressed in the central nervous system, SK2 is more widely expressed in organs arising from different germ layers such as brain, liver, or heart. SK3 is the most widespread isoform, with a predominant expression pattern in the central nervous system but also in smooth muscle-rich tissue. SK4 can be detected in inflammatory cell-rich, surface-rich, and secretory tissues such as the pancreas [63]. In the pancreas, for example, SK4 regulates glucose homeostasis and enzyme secretion of acinar cells [64,65]. Moreover, SKCas are overexpressed in a variety of cancers, including pancreatic cancer [20], and SK3, for example, was shown to be involved in cancer cell migration [66]. Nevertheless, the role of SKCas in developmental processes remains enigmatic, though it is well accepted that cell differentiation and maturation affect the expression patterns of ion channels. Our group has shed light for the first time on their role in differentiating pluripotent stem cells derived from mouse and man [2,3,5,25]. A potential role of SKCas was already suggested by their differentially regulated expression pattern. In fact, it temporally coincides with the commitment of the cardiovascular progenitor, showing an expression peak of the respective isoform around day 5 [5]. In consequence, we aimed to address the expression pattern of SKCas in the developing endoderm using plucked human hair-derived iPSCs as a bona fide modelling system. Interestingly, the differential regulation of most SKCa isoforms was relatively modest. Although SK2 and SK4 show a slight expression peak around day 2/3, the only clearly regulated isoform seems to be SK3, showing a continuously increasing expression with ongoing DE formation. Interestingly, reports of SK3 expression in DE-derived organs are restricted to a handful of studies showing SK3 expression in epithelial cancer cells and a liver-specific splice variant [67,68]. Further studies, including gain- and loss-of-function approaches within the same assay, have to clarify the respective functions of SKCa isoforms in DE formation and later maturation processes towards liver and pancreas.

To summarize, we present an efficient, novel, and robust DE formation assay suitable for ectoderm-derived plucked human hair iPSCs. Given the prerequisites for reprogramming fulfilled by plucked human hair, a robust DE assay for this particular iPSC type is highly relevant for disease modelling approaches. Subsequently, we have identified dynamic expression of the SKCa family of proteins during DE formation.
Figure 1: (b) Protein expression of pluripotency markers and mRNA levels of three pluripotency markers (OCT4, SOX2, NANOG). At the pluripotent stage, definitive endoderm markers (FOXA2, SOX17) and markers for pancreatic progenitors (PTF1A, PDX1) were negative (Figure 1(d)). Additionally, all lines were capable of differentiating into cells of all three germ layers, as shown by βIII-tubulin (neurons, ectoderm), α-actinin (muscle cells, mesoderm), and α-fetoprotein (liver cells, endoderm) (Figure 1(c)). One line was further treated with recombinant Cre protein to excise the reprogramming STEMCCA cassette, which is flanked by loxP sites. To test for successful excision, PCR amplification of the STEMCCA cassette was performed in Cre- and non-treated iPS cell clones from this respective line, showing the band only in controls (Figure 1(e)).

Figure 2: (e) Representative FACS plots of CXCR4 and c-KIT positive cells of the two hiPSC lines on day 3 and day 5. Two independent experiments for each line were summarized and are shown from day 2 to day 7 of endodermal differentiation. From day 2 on, the double-positive population (CXCR4 and c-KIT) steadily increases in both lines. The maximum is reached with almost 90% double-positive cells for hiPSC 1 and almost 80% for hiPSC 2 (Figure 2(d)). Again, there was no relevant difference upon excision of the reprogramming cassette (Figure 2(f)).

Figure 3: Expression of calcium-activated potassium channels during formation of definitive endoderm. (a) Expression of SOX17 (green) after 5 days of DE differentiation. (b) Immunofluorescence analysis of SKCa proteins in DE cells; indicated SKCa subtype (red). Scale bars as indicated. (c) Higher magnifications of the indicated SKCa subtype (red). Scale bars as indicated. (d) Transcript levels of SK1 and SK2 remained relatively low during DE differentiation. In contrast, mRNA levels of SK3 increased after 4 days of differentiation. SK4 mRNA levels slightly increased during the first days of differentiation and peaked on day 3, followed by a sharp decrease until day 5. Expression levels are shown relative to the housekeeping gene HMBS (n = 4, two different hiPSC lines).
5,742.8
2013-04-16T00:00:00.000
[ "Biology" ]
Automatic Tracking Method of Basketball Flight Trajectory Based on Data Fusion and Sparse Representation Model

The appearance model of the flying basketball obtained by traditional basketball flight trajectory tracking methods is not accurate, which makes the anti-interference performance of trajectory tracking less than ideal. Based on data fusion and a sparse representation model, a new automatic trajectory tracking method is proposed. Firstly, the relevant technologies of basketball flight trajectory automatic tracking are studied and summarized, and then the method itself is developed. The specific implementation steps of this method are as follows: the features of flying basketball images are extracted by a target feature extraction algorithm, and the appearance model of the flying basketball is built based on sparse representation. Data fusion technology and the particle filter algorithm are then combined to realize automatic tracking of the basketball flight path. Automatic tracking tests along the three axes of the 3D world coordinate system, together with noise tests, verify that the designed method achieves accurate tracking along the X, Y, and Z axes; after measurement-signal noise is applied, the automatic trajectory tracking results are affected to some extent, but trajectory tracking is still achieved.

Introduction
The current research on sports video mainly focuses on the following aspects: first, extracting highlight clips, that is, summarizing and collecting the most interesting clips, so as to serve the audience who cannot watch the live broadcast, letting them save time and enjoy the fun of watching the game to the greatest extent [1]. The second is the technical statistical analysis of the event [2], that is, the analysis of the competition situation. Through computer assistance, the competition video can be automatically analysed and counted so as to obtain detailed competition technical statistics, reduce errors, and save human resources. The last is identifying athletes' behaviour, which is not only the most widely applied but also the ultimate goal of most sports video analysis work. Through the recognition of athletes' competition actions, more advanced applications can be realized, such as simulated sports competition. In basketball, most of the content of the game is the interaction between the basketball and the players. Tracking the basketball is the basic work of game situation analysis and other applications [3,4]. Research on sports video analysis began abroad in the late 1980s. This field has potential economic value and broad application prospects. For the flight trajectory tracking problem of the ball, some classical research results have been obtained at home and abroad. Literature [5] proposed an image fusion scheme based on image cartoon-texture decomposition and sparse representation. The proposed image fusion method decomposes the source multimodal image into a cartoon image and a texture image. For cartoon components, a suitable space-based morphological structure preservation method is proposed, and energy-based fusion rules are used to maintain the structure information of the source image. For texture components, a method based on sparse representation is proposed; for this fusion method, a dictionary with strong representation ability is trained. Finally, according to the texture enhancement fusion rules, the fused cartoon and texture components are merged.
Reference [6] proposed a twin (Siamese) neural network target tracking algorithm integrating a disturbance perception model. The low-level structural features extracted by the twin neural network are effectively fused with high-level semantic features to improve the representation ability of the features. A disturbance perception model based on colour histogram features is introduced into the algorithm. The target response score map is obtained by weighted fusion to estimate the target position, and the optimal target scale is estimated by using an adjacent-frame scale-adaptive strategy. However, the target appearance models obtained by the above traditional target tracking methods are not accurate enough, resulting in unsatisfactory anti-interference performance of trajectory tracking. Referring to the previous research results, this paper introduces data fusion and a sparse representation model to realize the automatic tracking of basketball flight trajectory.

Problems in Basketball Flight Trajectory Tracking
In the research of basketball flight trajectory tracking, there are still many problems, such as the difficulty of shooting clear images when the basketball is moving at high speed, the distortion [7] phenomenon of camera imaging, and the air resistance acting on the ball in flight.

Motion Blur Problem. Motion blur arises when the relative motion speed of the basketball and camera is too high, so that different scene points are exposed onto the same point of the photosensitive device at the same time, resulting in image quality degradation [8]. There is a serious motion blur problem for a basketball moving at high speed, which will affect the accuracy of trajectory tracking.

Lens Distortion. Lens distortion is the general term for the inherent perspective distortion of an optical lens, that is, the distortion caused by perspective, which is very unfavourable to the imaging quality of photos [9]. In reality, there are distortion factors in the camera lens that are difficult to describe with an ideal distortion model.

Air Resistance. When flying at high speed in the air, the basketball is disturbed by the Magnus force, air resistance, and so on, which makes its real flight trajectory deviate from its ideal flight trajectory.

Data Fusion Technology
3.1.1. Overview of Data Fusion. Data fusion refers to the realization of estimation tasks and decision-making through multisource information and integrated multisensor data, which specifically includes concepts such as information fusion and data fusion [10,11].

Data Fusion Classification. Based on the abstraction level, data fusion can be divided into three categories: feature level, decision level, and pixel level.

Key Points of Data Fusion. In data fusion, both traditional theories and new technologies are applied. The traditional theories include optimization theory and decision theory, while the newer techniques include the weighted average method and D-S evidence theory [12].

Fundamentals of the Basketball Flight Trajectory Automatic Tracking Algorithm
3.2.1. Target Feature Extraction. Target feature extraction algorithms mainly include locality preserving mapping and independent component analysis.

Particle Filter Algorithm. The particle filter algorithm is a Monte Carlo filtering method [13] based mainly on Bayesian theory. The basic idea is to represent the posterior probability distribution of the system's random variables through a group of random particles with weights, which can solve non-Gaussian and nonlinear problems [14].
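To illustrate this Monte Carlo idea, the following is a minimal bootstrap particle filter sketch in Python for a one-dimensional position estimate. The constant-velocity motion model, the Gaussian measurement likelihood, and all noise levels are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

# Minimal bootstrap particle filter sketch for a 1-D position estimate.
rng = np.random.default_rng(0)
n = 500
particles = rng.normal(0.0, 1.0, n)          # initial position hypotheses
weights = np.full(n, 1.0 / n)

def pf_step(particles, weights, z, v=1.0, dt=0.1, q=0.05, r=0.2):
    # Predict: propagate every particle through the motion model.
    particles = particles + v * dt + rng.normal(0.0, q, particles.size)
    # Update: reweight by the Gaussian measurement likelihood of z.
    weights = weights * np.exp(-0.5 * ((z - particles) / r) ** 2)
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < particles.size / 2:
        idx = rng.choice(particles.size, particles.size, p=weights)
        particles = particles[idx]
        weights = np.full(particles.size, 1.0 / particles.size)
    return particles, weights

for t in range(1, 11):                        # ten simulated noisy measurements
    z = 0.1 * t + rng.normal(0.0, 0.2)
    particles, weights = pf_step(particles, weights, z)
print("estimated position:", float(np.sum(particles * weights)))
```

The weighted particle mean at the end is the posterior estimate of the target position; the same predict-reweight-resample cycle generalizes directly to the 3-D coordinates used for basketball tracking.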
Sparse Representation Theory. The sparse representation model includes the synthetic sparse model and the analytical sparse model. The analytical sparse model is an extension of the field of signal sparse modelling, which mainly uses the position and number of nonzero elements in the sparse coefficients to characterize the spatial dimension of a signal X. The synthetic sparse model uses an overcomplete redundant dictionary to sparsely decompose the image signal.

Appearance Model Based on Sparse Representation. In image target tracking, features such as contour, texture, and colour are often used to describe the target. However, in some images the target contour is not obvious and the background is complex, so tracking algorithms based on traditional observation models often lose the target [15]. Therefore, a new appearance model of the flying basketball based on the sparse representation model is proposed in this paper. It is assumed that the target in the image sequence is located in a low-dimensional subspace G = {g_1, ..., g_n}; that is, the target can be sparsely represented by this subspace, where the subspace is called the target subspace a, composed of the eigenvectors of the target observation vector matrix in the previous B frame images; C stands for the overcomplete dictionary; D represents the representation error caused by noise and occlusion; and E represents the sparsity [16].

The features of the basketball flight image are extracted by the target feature extraction algorithm, and the appearance model of the flying basketball is constructed based on the sparse representation model. When extracting the features of the basketball flight image, the algorithm used is the locality preserving mapping algorithm [17]; that is, the dimension of the basketball flight image data is reduced while retaining the features of the original data. In the feature extraction of the basketball flight image, the k-nearest neighbour method is used to construct the interclass adjacency graph and intraclass adjacency graph [18]. Then, the weight on each edge is determined according to formula (2), in which S_{i,j} represents the weight on the edge and i and j index the basketball flight image features connected by that edge. Taking the edge weights as the weight matrix of the adjacency graph, it can be found that the matrix is sparse and symmetric. The basketball flight image data set is transformed and projected through the weight matrix of the adjacency graph to obtain the basketball flight image features [19,20]. In the projection, an objective function needs to be minimized to rationalize the projection criterion. The minimization objective function T is

T = Σ_{i,j=1}^{N} (y_i − y_j)² W_{ij},  (3)

where W_{ij} represents the weight matrix of the adjacency graph; y_i and y_j represent two adjacent points in the projected basketball flight image data set; and N represents the number of data points in the basketball flight image data set (a Python sketch of this feature-extraction step is given after the two-step tracking description below). Then, the appearance model of the flying basketball is constructed based on the sparse representation model, that is, by a sparse reconstruction algorithm. The specific construction process of the appearance model is as follows: (1) The basketball flight image features are input [21] and trained to obtain dictionary pairs with high resolution and low resolution, represented by D_l and D_h.
In equation (3), X_i represents the reconstructed high-resolution basketball flight image of the reconstructed area. (6) The sparse reconstruction problem [23] is solved by minimizing an objective function P (equation (4)) [24,25], in which B_i^T represents the sparse representation matrix, F stands for the linear feature extraction operator, and λ is the fidelity balance parameter. (7) The image block of the high-resolution basketball flight appearance model is then constructed according to formula (7), in which X_i represents the constructed high-resolution basketball flight appearance model image block and T represents the sparsity balance parameter. The high-resolution basketball flight appearance model [26] is constructed by repeated partial weighted averaging over the image blocks of the high-resolution model X_0. Finally, the high-resolution basketball flight appearance model X_0 is output.

Establishment of the Algorithm Model. Then, the model of the basketball flight trajectory automatic tracking algorithm is constructed by using data fusion technology and the particle filter algorithm. Firstly, the data of the high-resolution basketball flight appearance model are processed by data fusion. The data fusion method used is D-S evidence theory [27]: the evidence set is divided, the divided parts are used to make independent judgments on the identification framework, and the Dempster rule is then used to recombine the previously divided parts. In the combination rule (formula (8)), A_i and B_j respectively represent two independent sources of evidence; A represents a proposition; m(Ø) represents the mass assigned to the empty set over the frame of discernment; m represents the belief (mass) function; and k quantifies the conflict between the propositions.

Then, the automatic tracking algorithm model of basketball flight trajectory is constructed with the particle filter algorithm. In the model construction, the measurement covariance has an important impact on the output of the final filter. To limit the influence of the measurement covariance, a dynamic correction is introduced (formula (9)), in which R_k represents the measurement covariance, f(·) represents the dynamic correction function, and d_k represents the distance between the binocular camera and the basketball at time k in three-dimensional space. This yields the automatic tracking algorithm model of the basketball flight trajectory (equation (10)), in which x_c, y_c, and z_c respectively represent the midpoint position of the line connecting the two optical centres of the left and right cameras, and x_k, y_k, and z_k respectively represent the coordinate measurement values of the basketball flight in the three-dimensional world coordinate system.

Two-Step Tracking. The designed basketball flight trajectory automatic tracking algorithm determines the value range and growth schedule of the dynamic correction function according to the actual calibration results of the camera. In the initial tracking stage, a small measurement covariance is set so that the algorithm can quickly lock onto the basketball flight. Then, the measurement covariance is gradually increased so as to improve the output stability of the particle filter algorithm.
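As promised above, here is a small Python sketch of the locality-preserving feature-extraction step: it builds a k-nearest-neighbour adjacency graph over feature vectors and evaluates the objective T = Σ_ij (y_i − y_j)² W_ij for a candidate projection. The heat-kernel edge weight is a common choice for this construction and is an assumption here, as are all toy dimensions.

```python
import numpy as np

def knn_weights(X, k=3, sigma=1.0):
    # Pairwise squared distances between all feature vectors.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros_like(d2)
    for i in range(X.shape[0]):
        nbrs = np.argsort(d2[i])[1:k + 1]          # skip self (distance 0)
        W[i, nbrs] = np.exp(-d2[i, nbrs] / sigma)  # heat-kernel weight (assumed)
    return np.maximum(W, W.T)                      # sparse, symmetric

def lpp_objective(y, W):
    """T = sum_ij (y_i - y_j)^2 W_ij for a 1-D embedding y."""
    return float(((y[:, None] - y[None, :]) ** 2 * W).sum())

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 8))   # 20 hypothetical image feature vectors
a = rng.normal(size=8)         # candidate projection vector
print("objective T:", lpp_objective(X @ a, knn_weights(X)))
```

The D-S fusion step can be sketched similarly. Below is a minimal implementation of Dempster's rule of combination for two independent evidence sources, with masses assigned over subsets of a small frame of discernment; the frame ({'L', 'R'}) and the mass values are hypothetical, chosen only to show the mechanics.

```python
from itertools import product

def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for (a, pa), (b, pb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + pa * pb
        else:
            conflict += pa * pb                 # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict; sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two hypothetical sources of evidence about the ball's image region.
m1 = {frozenset("L"): 0.7, frozenset("LR"): 0.3}
m2 = {frozenset("L"): 0.6, frozenset("R"): 0.3, frozenset("LR"): 0.1}
print(dempster_combine(m1, m2))
```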
In the process of two-step tracking, the measurement covariance is defined by equation (11), in which d_0 represents the initial distance between the optical centre of the basketball and the midpoint of the connecting line of the binocular camera, and R_0 represents the initial measurement covariance.

Algorithm Steps. The steps of the designed basketball flight trajectory automatic tracking algorithm are summarized, divided into three steps, as follows: (i) The most difficult problem of basketball target tracking is that the appearance model of the flying basketball is not accurate enough, which is still an unsolved problem in traditional basketball flight trajectory tracking methods. Therefore, based on the feature extraction of basketball flight images by the target feature extraction algorithm, the sparse-representation appearance model of the flying basketball is constructed, which lays the foundation for high-precision tracking of basketball targets. (ii) Using data fusion technology and the particle filter algorithm, the model of the basketball flight trajectory automatic tracking algorithm is constructed. D-S evidence theory data fusion is used to complete the information processing of the high-resolution basketball flight appearance model data. (iii) Finally, the basketball flight appearance model is tracked by using the constructed basketball flight trajectory automatic tracking algorithm model.

Case Tracking Test. In the experiment, a binocular vision system is used to collect basketball flight images in robot basketball. Under the conditions of an i5-7300HQ processor and 10 GB memory, the moving image sequence is processed in MATLAB 2020b for automatic tracking of the basketball flight trajectory. The experimental image is shown in Figure 1, and the visual results of basketball target tracking are shown in Figure 2. In the experiment, the average value of M in Y_i is 32, k is 6, and the initial value of T_k is 10 mm. With the reduction of the distance between the binocular camera and the basketball at time k in three-dimensional space, R_k gradually increases to 40 mm.

Three-Axis Tracking Results. Because basketball flight takes place in three-dimensional space, the test of the tracking effect is divided into three directions: x-axis, y-axis, and z-axis. The basketball motion images are processed in MATLAB 2020b to obtain the trajectory coordinates of the basketball flight. The basketball flight trajectory is tracked by using the method proposed in this paper, and the tracking results are compared with the actual coordinates; if they are close, the tracking results of the proposed method are ideal.

X-Axis Tracking Results. The designed basketball flight trajectory automatic tracking method based on data fusion and the sparse representation model is used to track the trajectory of a basketball thrown by the experimental basketball robot. The experimental results of x-axis basketball flight trajectory automatic tracking are shown in Figure 2. From the experimental results in Figure 3 and Table 1, it can be seen that there is almost no obvious difference between the output of the basketball flight trajectory automatic tracking algorithm model and the actual basketball flight trajectory.
At 400 ms into the experiment, there is a small deviation between the original flight speed and the tracking speed, with a deviation value of 0.1 m/s, which shows that, as the experimental time increases, the tracking result of the designed method remains relatively stable and the output noise is small. This proves that the automatic tracking effect of the designed method on the x-axis basketball flight trajectory is good.

Y-Axis Tracking Results. Next, the automatic tracking of the y-axis basketball flight trajectory by the designed method is tested; the specific test results are shown in Figure 3. According to the y-axis tracking results in Figure 4 and Table 2, when the experimental time is 300 ms and 350 ms, the proposed method shows two tracking errors, 3 mm/ms and 2 mm/ms, respectively, but these errors do not affect the overall tracking effect on the y-axis. It can be seen that the automatic tracking method of basketball flight trajectory based on data fusion and the sparse representation model can track the y-axis accurately in the three-dimensional world coordinate system. That is, the output of the tracking algorithm model during y-axis tracking is very close to reality, which proves that the designed method can accurately track the y-axis trajectory.

Z-Axis Tracking Results. Finally, the automatic tracking of the z-axis basketball flight trajectory by the designed method is tested; the specific test results are shown in Figure 4. It can be seen from the z-axis tracking results in Figure 5 and Table 3 that the automatic tracking accuracy for the z-axis is lower than that for the x-axis and y-axis. There is a certain error in the overall tracking trajectory, but the error value is always less than 33 mm/ms, which shows that the method still maintains high accuracy. Based on the three-axis tracking results, it can be found that the output of the basketball flight trajectory automatic tracking algorithm model is very close to reality; that is, the designed method based on data fusion and the sparse representation model can track the basketball flight trajectory accurately.

Error and Deviation Correction Test. After the tracking tests, an error and correction test of the designed method is carried out; that is, a certain amount of noise is added during tracking to test the anti-interference performance of the designed method in the automatic tracking of basketball flight trajectory. For an automatic flight trajectory tracking method, anti-interference performance is an important index of its tracking performance. Taking the automatic tracking performance test of the x-axis basketball flight trajectory as an example, a certain amount of noise is applied during the test, the automatic tracking of the x-axis trajectory is tested under the noise interference, and the test results are compared with the tracking results in Section 5.2.1 to observe the anti-interference ability of the method. In the test, the noise is mainly applied at 200 ms, and the applied noise is measurement signal noise.
The test results of the x-axis basketball flight trajectory automatic tracking performance of the designed method after adding noise are shown in Figure 5. According to Figure 6 and Table 4, the comparison results of the x-axis tracking test at the point where the noise is added (200 ms) show that after the measurement signal noise is applied, the automatic tracking result at 200 ms is affected to a certain extent. However, the trajectory tracking is still successfully realized, which proves that the designed method has good anti-interference performance.

Conclusion
In order to solve the problems of poor anti-interference and low tracking accuracy of traditional basketball flight trajectory tracking methods, a new automatic trajectory tracking method is proposed. Combining the sparse representation model, data fusion, target feature extraction, and the particle filter algorithm, basketball flight trajectory tracking is studied. To sum up, the following achievements have been made in this study: (1) The sparse representation model, data fusion, target feature extraction, and particle filter algorithm are studied in depth and comprehensively classified. (2) Taking basketball as the research object, through the comprehensive application of various algorithms, this paper constructs the basketball flight trajectory automatic tracking algorithm model and realizes automatic tracking of the basketball flight trajectory with high precision and strong anti-interference, with broad application prospects. (3) In order to verify the effectiveness of the proposed method, simulation experiments are designed. Through the trajectory tracking of the x-axis, y-axis, and z-axis, it is proved that the proposed method can track the basketball flight trajectory with high precision, with tracking accuracy maintained above 95%. In order to verify its anti-interference performance, noise was added at 200 ms of experimental time. The experimental results show that the velocity tracking error of the proposed method is only 0.04 m·s⁻¹ under the noise interference, which shows that the trajectory tracking accuracy can still be maintained at a high level under noise interference.

Data Availability
We use simulation data, and our model and related hyperparameters are provided in our paper.

Conflicts of Interest
The authors declare that they have no conflicts of interest.
4,905.6
2021-09-29T00:00:00.000
[ "Computer Science" ]
Real-time energy management system for public laundries with demand charge tariff

A building energy management system can be defined as a building automation system that can facilitate demand response by controlling building-side energy resources. Here, a real-time Energy Management System (EMS) is developed to reduce the energy costs charged under a demand charge tariff for a specific type of commercial building: a public laundry. The authors model the public laundry energy management problem as a multi-task scheduling problem and design a set of algorithms to heuristically control the operations of washing machines and clothes dryers, taking into account the customer-specified task requirements and the laundry owner's budget. Numerical case studies are conducted to validate the proposed EMS.

INTRODUCTION
Thanks to the Advanced Metering Infrastructure (AMI), modern buildings possess the capability of interacting with the external power grid and performing Demand Response (DR) [1]. Through the bilateral communication channels of AMI, buildings can receive energy pricing and incentive signals from the utility and, based on these, re-shape their energy consumption profiles.

Related works on building energy management systems
As part of building automation technology, Building Energy Management Systems (BEMSs) [2] have drawn increasing attention in recent years. Acting as the agent of the user, BEMSs interact with both the user and the grid to plan the operation of building-side energy resources. In the literature, the development of BEMSs has been widely studied, as briefly outlined in the following. Some works [3-6] focus on developing BEMSs for residential buildings. For example, [3] optimally schedules plug-in hybrid vehicles and household appliances in a residential building to minimize the household's energy cost. [4] proposes a multi-stage home energy management system that minimizes the household's energy cost by considering prediction errors of renewable energy and ambient temperature. [5] proposes a home energy management model based on vehicle-to-home integration and occupant indoor thermal comfort modelling. [6] proposes a load commitment model that determines the schedules of household appliances by taking into account both time-varying tariffs and the utility's load reduction instructions. Other research works [7-10] design BEMSs for commercial and industrial buildings. [7] proposes a BEMS to improve energy efficiency and reduce energy cost for a commercial building with heating, cooling, and electrical energy zones. [8] proposes an energy management scheme for an industrial building to reduce electricity cost by scheduling its cooling loads and local PV power sources. In [9], a predictive control algorithm is implemented in a commercial BEMS through neural networks, which maintains comfortable thermal levels for the building. [10] proposes an adaptive strategy based on model predictive control for a commercial building with renewable energy generation, with the objective of reducing the building's energy cost.
FIGURE 1 Snapshot of a self-service public laundry [11]

The Demand Charge Tariff (DCT) levies a charge based on the maximum power recorded during an entire billing cycle. The typical billing cycle of DCT is usually a calendar month. This means that a customer would be charged at the DCT rate on the maximum power consumed over the whole calendar month. Some studies have been conducted on designing demand-side energy management strategies under DCT. [13] develops an economic dispatch model for power generation resources in a microgrid with DCT penetration. [14] proposes a finite-horizon dynamic optimization model for scheduling the charging/discharging power of a battery energy storage system to control the customer's peak power and minimize the expected DCT cost. In the second and third authors' recent work [15], a heuristic home energy management scheme is proposed to manage appliances' operations, aiming at mitigating the risk of a high DCT penalty.

Contribution of this paper
The main contribution of this paper is to investigate the energy management problem under DCT for a specific type of commercial building that is commonly seen in modern cities: the public laundry (Figure 1). To the best of the authors' knowledge, energy management of laundries has not been previously studied in the literature. A public laundry is a business entity that offers clothes washing and drying services to users through multiple on-site washing machines and clothes dryers (referred to as "machines" in this paper). This paper develops a BEMS that manages the operation of multiple washing machines and clothes dryers in a public laundry, so as to mitigate the high energy cost for the laundry owner while ensuring the quality of the laundry services delivered to the customers. In the proposed BEMS, the daily operation environment of a public laundry is modelled as two coupled multi-task queues, that is, a clothes washing task queue and a clothes drying task queue. A washing/drying task is defined as an electricity demand that lasts for a time period and requires a certain power level. Based on this, we develop a set of real-time control algorithms to manage the machines, with the aim of keeping the DCT cost within the laundry owner's budget. By properly managing the public laundry's power consumption, the proposed method can enable the public laundry to contribute to the external grid's peak load control. The rest of this paper is organized as follows: Section 2 presents the modelling methodology of a public laundry's operational environment; Section 3 introduces the proposed laundry energy management system; Section 4 reports the simulation results; and the conclusion and future work are presented in Section 5.

MODELLING OF PUBLIC LAUNDRIES' OPERATIONAL ENVIRONMENT
The operation environment of a public laundry can be depicted as shown in Figure 2. The laundry consists of multiple washing machines and clothes dryers that provide services to public users. Users put clothes into an idle machine, set washing/drying task parameters and task deadlines, and monitor the tasks' progress through a smartphone app. A Laundry Energy Management System (LEMS) manages the operation of the machines through a wireless communication network (e.g. WiFi or Zigbee), aiming at optimizing the laundry's energy usage to mitigate the risk of a high DCT cost while satisfying the users' task requirements. In this context, the energy management problem of a public laundry can be modelled as a multi-task control problem.
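As a quick worked illustration of how this tariff scales with the monthly peak (using the rate and benchmark peak from the case study of Section 4, and assuming no other demand-related charges), a DCT rate of $8.03/kW/month applied to a monthly peak of 24.55 kW yields a demand charge of 8.03 × 24.55 ≈ $197.1, which matches the benchmark cost reported later; shaving only the single highest-power interval of the month therefore directly reduces the bill.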
In this section, the modelling methodology of the public laundry's operation environment is presented.

Modelling of public laundry
Consider a laundry with N machines. Denote the numbers of washing machines and clothes dryers in the laundry as N1 and N2, respectively; therefore, N = N1 + N2. Each machine is assigned an index n, that is, n = 1, 2, …, N. Denote the index and total number of the laundry's control time intervals as t and T, respectively, that is, t = 1, 2, …, T, and denote the duration of each time interval as Δt (hours). The laundry is considered to be charged under the demand charge tariff, which charges for the peak power consumption over a whole calendar month. That is, at the end of the billing month, the DCT cost (C_dct) is determined as:

C_dct = λ · P̂,  (1)

where λ and P̂ represent the DCT rate ($/kW) and the peak power of the month (kW), respectively. Like many BEMSs, the LEMS performs energy management on a daily basis. Since the DCT is charged on the peak power over the period of a month, it would be hard for the LEMS to estimate whether the peak power of the current day will define the DCT cost that ends up being charged at the end of the month. Some works [13-15] perform energy management with one-day-ahead predictions of renewable power and load. From our viewpoint, this strategy can hardly be applied to the public laundry's operation environment, because a public laundry's load, which depends on the public users' tasks, is highly stochastic and hard to predict on day-ahead time scales. In this study, we adopt a budget-based strategy to cope with DCT, in which the laundry owner is assumed to work against a budget (denoted as C_budg) for the monthly DCT payment. Based on this, the peak power threshold of the laundry (denoted as P_lim) subject to the owner's budget is:

P_lim = C_budg / λ.  (2)

The objective of the LEMS is then to keep the laundry's peak power within P_lim as much as possible.

Modelling of laundry machines
Both the washing machines and clothes dryers are considered to be interruptible, which means the machines can be paused and resumed at a later time. A machine can be turned off only after the task running on it completes. Each machine has three operational states, denoted as: 0-OFF, 1-PAUSED, and 2-RUN. Denote the state of machine i at time interval t as s_{i,t}; then s_{i,t} ∈ {0, 1, 2}. Each machine is assumed to consume its rated power P_i^rate (kW) when it is running and the base power P_i^base when it is paused; the base power consumption is usually small (several watts). A machine consumes zero power when it is turned off. Each washing machine is assumed to have three operation modes: (1) normal wash mode, lasting 30 min; (2) super wash mode, lasting 40 min; (3) super plus wash mode, lasting 1 h. These three mode settings are obtained from the washing machines in the public laundry of the Hillington Hospital, London. For each clothes dryer, the drying task's duration is set by the user and is an integer in the range [20, 120] (in minutes). Each machine is subject to a minimum continuous online time constraint, that is, it cannot be paused/resumed too frequently, in order to protect the rotor of the machine:

on_{i,t−1} ≥ on_i^min,

where on_{i,t−1} is the accumulated running time of the ith machine at time t−1 and on_i^min is the minimum online time threshold for the ith machine.

Modelling of laundry tasks
There are two types of tasks in a public laundry's business: washing tasks and drying tasks. It is assumed that there are a total of K tasks over the T time intervals, which form a task queue. A tuple can be defined for the kth task (k = 1, …, K) with the following attributes: 1. task type (τ_k), a binary variable: τ_k = 0 means it is a washing task; τ_k = 1 means it is a drying task; 2. machine index (μ_k), an integer variable with 1 ≤ μ_k ≤ N indicating the machine that executes the task; 3. task duration (D_k), an integer variable indicating the duration of the task, measured as a number of time intervals: D_k ∈ {3, 4, 6} when τ_k = 0, and 2 ≤ D_k ≤ 12 when τ_k = 1; 4. task submission time (t_k^1), an integer variable with 1 ≤ t_k^1 ≤ T; and 5. task deadline (t_k^2), an integer variable indicating the time interval before which the task must complete, with 1 ≤ t_k^2 ≤ T and t_k^1 + D_k ≤ t_k^2 ≤ T. Based on the above task properties, the scheduling margin of a task k at time interval t (denoted as g_{k,t}) is defined and calculated as:

g_{k,t} = (t_k^2 − t) / (D_k − d_{k,t}),  (4)

where d_{k,t} is the number of intervals for which the task has already been executed up to time interval t. In Equation (4), the numerator indicates how much time is left before the deadline, while the denominator indicates how many subtasks are left before completion.
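As a small executable illustration of Equations (2) and (4), the following Python sketch computes the budget-derived peak-power threshold and the scheduling margin of a task. The budget and DCT rate mirror the Section 4 case study, while the task values are hypothetical.

```python
# Executable illustration of Equations (2) and (4).

def peak_power_threshold(c_budg: float, dct_rate: float) -> float:
    """P_lim = C_budg / lambda (Equation (2))."""
    return c_budg / dct_rate

def scheduling_margin(t2: int, t: int, duration: int, done: int) -> float:
    """g_{k,t} = (t2_k - t) / (D_k - d_{k,t}) (Equation (4))."""
    remaining = duration - done
    if remaining <= 0:
        raise ValueError("task already complete")
    return (t2 - t) / remaining

print(round(peak_power_threshold(165.0, 8.03), 2))                   # ~20.55 kW
# A drying task due at interval 30, now at interval 20, 6 of 9 slots done:
print(round(scheduling_margin(t2=30, t=20, duration=9, done=6), 2))  # 3.33
```

A margin well above 1 means the task has slack and its machine is a safe candidate for pausing; a margin of exactly 1 means every remaining interval is needed to meet the deadline.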
It is assumed that there are a total of K tasks over the T time intervals, which form a task queue denoted can be defined for the kth task (k = 1:K) and it includes the following attributes: 1. task type ( k ), which is a binary variable. k = 0 means it is a washing task; k = 1 means it is a drying task; 2. machine index ( k ), which is an integer variable 1 ≤ k ≤ N indicating the machine that executes the task; 3. task duration (D k ), which is an integer variable indicating the duration of the task, measured as the number of time intervals. D k = {3, 4, 6} when k = 0; 2 ≤ D k ≤ 12 when k = 1; 4. task submission time (t 1 k ), which is an integer variable1 ≤ t 1 k ≤ T ; and 5. task deadline (t 2 k ), which is an integer variable indicating the time interval before which the task must complete. 1 ≤ t 2 k ≤ T and t 1 k + D k ≤ t 2 k ≤ T . Based on the above task properties, the Scheduling Margin of a task k at time interval t (denoted as g k ,t ) is defined and calculated as: where d k,t is the duration that the task has already been executed up to time interval t. In Equation (4), the numerator indicates how much time is left to reach the deadline, while the denominator indicates how much subtasks are left before completion. REAL-TIME ENERGY MANAGEMENT SCHEME FOR PUBLIC LAUNDRY WITH DEMAND CHARGE TARIFF In this section, the energy management scheme for the laundry is presented based on the modelling methodology of the laundry's operational environment. The overall objective of the LEMS is to decide the operational status of each machine at each time interval. More conveniently, for N machines, these decision variables can be expressed as a N × T -dimensional matrix S, where the entry s i,t represents the ith machine's state at time interval t. The LEMS determines S to make the total power consumption of the laundry below the threshold: where P total t is the total power consumption of the laundry at time t (kW), which is calculated as Equations (5) and (6). Equation (3) means that the EMS needs to ensure that in every time interval, the DCT charge cannot exceed the laundry owner's budget; Equation (4) ensures that all the washing/drying tasks can be completed before the task deadline. Algorithm 1: Main workflow The workflow of the proposed real-time energy management scheme for public laundry is shown in Algorithm 1. In the beginning. the algorithm initializes the laundry's operation environment (Lines 1-3). Then at each time interval, the algorithm checks if there are new tasks arrived (Line 5). If so, the algorithm handles the tasks with the ascending order of the task deadline (Lines 6-9). For each newly arrived task, the algorithm firstly checks if the host machine is turned on, whether the laundry's total power consumption would exceed the threshold P lim (Lines 10-12); if so, the algorithm executes a sub-routine (Algorithm 2) to temporarily pause one or more machines that are currently running tasks to maintain the laundry's total below the power threshold (Line 13); otherwise, the algorithm turns on the host machine and proceeds to handle the next task (Lines 15-17). When all the waiting tasks at the current time interval are handled, the algorithm moves to the next time interval and invokes a sub-routine (Algorithm 3) to update the machines' states. Above control logics repeat until the end of the control horizon is arrived (Lines 19-24). 
Algorithm 2: Sub-routine for pausing machines

As shown in Algorithm 1, if turning on the host machine of a new task would lead to P_lim being exceeded, a machine-pausing sub-routine is invoked, which attempts to pause one or more running machines to regulate the total power consumption of the laundry. This sub-routine is presented in Algorithm 2.

The algorithm first accepts the inputs passed from Algorithm 1 (Line 1); then, it selects the machines in the running state, that is, those with s_i,t = 2 (Line 2). For each of these machines, the algorithm calculates the scheduling margin of the task that is running on it (Lines 3-5); then, the running machines are sorted in descending order of the calculated scheduling margins (Line 6). For each of the ordered machines, the algorithm checks whether the machine's accumulated online time has reached the minimum required value; if so, the algorithm pauses the machine (Lines 7-12). Then, the algorithm checks whether the total power consumption of the laundry is less than P_lim; if so, the algorithm outputs the machines' state matrix S (Lines 13-15); otherwise, it proceeds to the next running machine (Lines 16 and 17).

Algorithm 3: Sub-routine for updating the machine states

In the main workflow, after the task queue has been handled in each time interval, a sub-routine is invoked to update the machines' states. The logic of this sub-routine is presented in Algorithm 3. First, the algorithm accepts the inputs (Line 1); then, it sequentially checks the current state and task execution progress of each machine (Lines 2 and 3). For each machine that was paused in previous time intervals, if the task scheduling margin equals 1 (which means that the machine cannot be paused any further if the task deadline is to be met), the machine state is set to 2 (Lines 4-7); if the task scheduling margin is larger than 1, the algorithm further checks whether the total power consumption of the laundry is larger than the power threshold; if so, the algorithm keeps the machine paused and sets its state to 1 (Lines 8-14). For each machine that is running a task, the algorithm checks whether its task has been completed. If so, the algorithm turns off the machine (Lines 15-17); otherwise, it lets the machine continue running the task (Lines 18-20).

By combining the three algorithms, the overall procedure of the proposed energy management scheme for a public laundry is obtained, as illustrated in Figure 3.

SIMULATION STUDY

Numerical simulations are conducted to validate the proposed LEMS, and the results are reported in this section.

Simulation setup

The operating environment of a public laundry is simulated, comprising four commercial 9 kg heavy-duty washing machines and four 10 kg heavy-duty clothes dryers. The rated power of each washing machine and each clothes dryer is 2.05 and 4.6 kW, respectively. The major specifications of the machines are shown in Table 1. A normal business day is considered, in which the energy management horizon is set between 8 a.m. and 8 p.m. The duration of a control time interval is set to 10 min; there are thus 72 time intervals in total over the whole energy management horizon. One day of laundry tasks is simulated based on information from a real-world public laundry [16]; it is a queue consisting of multiple pairs of clothes washing and drying tasks. The DCT rate is set to $8.03/kW/month.
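The core of Algorithm 2, pause the least urgent running machines first and stop as soon as the threshold is met, can be sketched as follows; the data-frame columns are hypothetical stand-ins for the paper's state variables.

# Sketch of Algorithm 2's pausing logic; column names are hypothetical.
pause_machines <- function(machines, p_lim) {
  running <- which(machines$state == 2)
  # Pause the least urgent machines first: descending scheduling margin.
  for (i in running[order(machines$margin[running], decreasing = TRUE)]) {
    if (machines$online_time[i] >= machines$min_online[i]) {   # rotor protection
      machines$state[i] <- 1                                   # pause machine i
      total <- with(machines, sum(p_rate[state == 2]) + sum(p_base[state == 1]))
      if (total < p_lim) break                                 # threshold met
    }
  }
  machines   # updated state column = one column of the decision matrix S
}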
The laundry owner's DCT budget (C_budg) is set to $165, which corresponds to a power threshold of 20.5 kW. To demonstrate the effectiveness of the proposed method, we compare the LEMS with a benchmark case without energy management; that is, in the benchmark case, each laundry machine starts to operate at the submission time of its hosted task and keeps running until the task is completed. In the following, we denote the benchmark case and the proposed LEMS as Cases 1 and 2, respectively.

Scenario 1: Energy management with moderate tasks

A total of 62 laundry tasks are simulated in this scenario, consisting of 31 washing tasks and 31 drying tasks. The operation preference settings of the laundry washers and dryers are shown in Tables 2 and 3, respectively. By applying the proposed LEMS to these simulation settings, the power consumption and load profile are obtained. The laundry's numerical operation results are shown in Table 4. It can be seen that with the proposed LEMS, the laundry's peak power consumption is reduced substantially, from 24.55 to 20.03 kW (an 18.41% reduction) compared to the benchmark case. As a result, the DCT cost is reduced from $197.14 to $163.01, meaning that it is successfully controlled within the budget (i.e., $165). The total power consumption profiles of the laundry under both cases are shown in Figure 5. The details of the task operations and machine states are shown in Figure 4, where it can be seen that many new tasks are submitted at 10:20 a.m. To avoid exceeding the power threshold, Dryer 4 is paused at 10:20 a.m. and resumed after 20 min. The effect of this pausing action is reflected in Figure 5, where the laundry's power consumption during the period between 10:20 a.m. and 10:40 a.m. is reduced from 24.55 to 20.5 kW. During the following 20 min, Dryer 2 is paused because the scheduling margin of the task on this machine is larger than those of the tasks on the other machines. Similar task scheduling actions are performed on other machines at different instants in time.

Scenario 2: Energy management with intensive tasks

To further test the effectiveness of the proposed LEMS, we apply it to the same laundry settings but with highly intensive laundry tasks. A total of 70 tasks are simulated, comprising 35 washing tasks and 35 drying tasks. The machines' operation details are shown in Figure 6 and the laundry's numerical operation results are summarized in Table 5. From Figure 6(a), it can be seen that all washing machines and clothes dryers are either operating or paused for most of the hours. At 2:30 p.m., multiple tasks are submitted to Dryers 1, 2 and 4 and to Washers 1 and 3. Meanwhile, Washers 2 and 4 cannot be paused, as their hosted tasks are approaching their deadlines. With the task scheduling of Figure 6(b), although Dryer 3 is paused, the laundry's total power consumption at that time is still larger than the power threshold. Overall, in this scenario, because there are too many laundry tasks to execute, the LEMS cannot keep the DCT cost within the budget; nevertheless, it still reduces the DCT cost as much as possible. Figure 7 compares the laundry's total power consumption profiles for both cases. Again, with the LEMS, the laundry's power profile is much smoother than that observed without energy management.
One-month evaluation

One-month operation of the proposed LEMS is simulated with the same DCT rate and budget settings as introduced previously. We duplicate the one-day task settings from the simulation in Section 4.3 across the other days with some modifications, including randomly changing the number, arrival times and deadlines of tasks and reducing the number of tasks on weekends. We also compare the one-month operation results with the scenario without the energy management system. The peak power consumption of each day is shown in Figure 8, and the numerical one-month optimization results are reported in Table 6. The results show that when there is no energy management, the peak power consumption of the month is 26.6 kW, leading to a high DCT cost ($213.6). With the proposed LEMS, the peak power consumption of the month is significantly reduced to 22.51 kW, and the corresponding DCT cost is reduced to $180.75. These results confirm the effectiveness of the proposed LEMS.

Evaluation of LEMS on laundries with different scales

We apply the proposed LEMS to three public laundry models of different scales, that is, with 5, 6 and 7 pairs of washers and dryers, respectively. The configuration of each public laundry is shown in Table 7. To simulate the diversity of the laundry load, different numbers of laundry tasks with diverse submission times and deadlines are generated and assigned to each laundry. The numerical optimization results are shown in Table 8. Without the LEMS, the one-day DCT cost of public laundry 1 is $234.07; by applying the LEMS, the cost is significantly reduced to $197.78, which is below the budget threshold of $200. Correspondingly, the peak power of Laundry 1 is reduced from 29.15 to 24.63 kW. Similar trends can be found for Laundries 2 and 3.

CONCLUSION AND FUTURE WORK

This paper proposes a real-time energy management system for public laundries operating under a demand charge tariff. Based on the instant power consumption of the laundry and a dynamically formed queue of laundry tasks, the proposed laundry energy management system schedules and controls the laundry tasks to keep the monthly demand charge cost within a budget. The simulation results show that the developed laundry energy management system can effectively help the laundry owner to reduce the financial risk of the demand charge tariff while ensuring the quality of service delivered to laundry users. Even in the scenario with intensive laundry tasks, the proposed energy management scheme still helps to reduce the demand charge cost and peak power consumption. In future work, it is desirable to account for the penetration of renewable energy and other electricity tariffs in the energy management framework. Other future work can focus on the possible implementation of peer-to-peer (P2P) energy trading between laundry systems and other energy customers.
Pre-Service Physics Teachers' Difficulties in Understanding Special Relativity Topics

The aim of this study is to identify the reasons why pre-service physics teachers have difficulties with special relativity topics. In this study, conducted with 25 pre-service physics teachers, the case study method, a qualitative research method, was used. Interviews were held with the participants about their reasons for difficulties in understanding special relativity topics. We applied content analysis to the interview data and created eight categories, through which we tried to identify the causes of the difficulties experienced by the participants. As a result, it can be said that students are biased against relativity subjects and consider them to be difficult. Although the students found the subject interesting, problems such as mathematical difficulties, problems related to determining the reference system and the transition from classical physics to relativistic physics made the learning process difficult for them. Additionally, we identified positive and negative opinions about the teaching method.

INTRODUCTION

The theory of relativity is one of the most fundamental theories of physics. Recently, the teaching of this theory has been given more attention. At first, relativity was only taught at the level of higher education in Turkey, but it has been taught in high schools since 2008. Accordingly, the number of science education studies on these topics has increased (Özcan, 2011; Selçuk, 2011; Yıldız, 2012).

Relativity contains topics that are difficult to learn and teach. Previous studies show that students have difficulties in understanding relativity-related topics (Guisasola et al., 2009; Ireson, 1996; Scherr et al., 2001; Scherr et al., 2002; Selçuk, 2011). Dimitriadi and Halkia (2012) reported learning difficulties due to reference systems and students' tendency to blend the theory of relativity with classical physics. Another study shows that students believe that there is a preferred/privileged observer, and that time dilation and length contraction only occur according to the moving observer (Villani & Pacca, 1987). In a study performed with pre-service teachers from different academic levels, Selçuk (2011) identified significant learning difficulties with the concepts of proper time, time dilation, proper length, mass and relativistic density. Even when students take advanced-level lessons, they do not understand the implications of special relativity for interpreting the physical world (Scherr et al., 2002). From this perspective, it seems necessary to develop teaching approaches suited to special relativity. Some studies point out the positive contributions of using the history of science to teach relativity (Arriassecq & Greca, 2012; Villani & Arruda, 1998). Ogborn (2005) suggests a four-step sequence for teachers to teach relativity. Among studies on special relativity, those focusing on teaching by visualisation occupy an important place (Carr & Bossomaier, 2011; Henriksen et al., 2014; Kortemeyer et al., 2013; Kraus, 2008; McGrath et al., 2010; Savage et al., 2007; Smith, 2011; Wegener et al., 2012). Al-Khalili (2003) shares his ideas about teaching relativity using topics that most people find interesting, such as time travel. Some studies offer suggestions for laboratory experiments on relativity (Singh, Singh & Hareet, 2011; Singh, 2013).
In studies that investigate students' difficulties related to relativity, the difficulties were identified based on students' answers to relativity-related questions. Although there are several studies in the literature in which difficulties experienced by students in physics are investigated, there is no study that investigates the reasons why students have relativity-related difficulties by asking the students themselves about these difficulties. Therefore, this study differs from other studies in terms of the method used to determine the reasons why students have difficulties with relativity topics. The aim of this study is to identify the reasons why pre-service physics teachers have difficulties related to relativity topics. The research question is: "Why do pre-service physics teachers have difficulties related to relativity topics?"

METHOD

This is a qualitative study. A qualitative approach was chosen because the intention was to achieve analytic generalization rather than to generalize the results to a population. Analytic generalization aims to reach certain conclusions or theories through a limited number of participants or information sources (Altunışık et al., 2002). The research method of the study is similar to a case study. A case study is defined as an "in-depth review focusing on a current case, event, situation or set" (Yin, 1994). In other words, a case study is an in-depth study seeking answers to "how" and "why" questions (Yıldırım & Şimşek, 2006). A case study involves an interest in the process rather than the results, in the context rather than a specific variable, and in reviewing and finding rather than proving (Merriam, 1998). How the research was conducted is described below.

The study was performed during a special relativity course. The subject of special relativity was taught by dividing it into the topics of Relativity of Time, Relativity of Length, Lorentz Transformation Equations, Lorentz Velocity Transformation Equations, Relative Momentum and Relative Energy.

Participant Selection

This study was conducted with pre-service teachers taking the Special Relativity course. It was conducted during the Spring Semester of 2014 with 25 pre-service physics teachers. Pre-service physics teachers take this course in the 6th semester. In the semester in which this study was conducted, four of the participants were in the 6th semester of their university education, nine were in the 8th semester, nine were in the 10th semester, and five were in the 12th semester or above. The high number of pre-service teachers repeating the course, which is supposed to be taken in the 6th semester, can be seen as an indicator of difficulties experienced in relation to the course. The aim of this study is to identify the reasons behind these difficulties. Therefore, purposive sampling was preferred and all students taking the course in the relevant semester were included in the study.

Data Collection Method

Data collection was carried out in two stages. In the first stage, the participants were asked to write down why they thought each topic was easy or difficult for them. There was no time limit at this stage.
In the second stage, interviews were held with the participants to further investigate their difficulties and to better understand the papers written by the participants. During these interviews, some participants changed or made additions to some of their statements and explained them in detail. Thus, attempts were made to elicit the opinions of the participants more clearly and deeply. There was no time limit for the interviews either.

Data Analysis Method

The description papers and interview notes, which are the qualitative data sources, were considered the raw data. The raw data were evaluated using the content analysis method. The main purpose of content analysis is to find the concepts and relations that explain the data obtained (Yıldırım & Şimşek, 2006). The content analysis method was used in order to identify the data, bring similar data together within the framework of certain concepts and themes, and reveal the truth that might be hidden in the data (Aslan, 2009). To this end, the raw data were coded. Examples of how the codes were identified are as follows:

For Participant A11's statement "I had trouble because they are abstract concepts, I couldn't imagine them", the code "RE-3: It was difficult for me because the concepts were abstract" was assigned.

For Participant A24's statement "I have difficulties because relativity of time is abstract. For example, the twins paradox. I can't associate it with daily life", the code "RT-18: I cannot adapt to daily life" was assigned.

For Participant A11's statement "The formulas are not difficult. It is easy to solve the problems when you determine in which reference system the quantities were measured", the two codes "LTE-1: It's difficult to identify quantities in Reference Systems" and "LTE-6: The formulas are easy" were assigned.

The codes were divided into categories and grouped according to their similar features. The principle of "coding according to concepts concluded from the data" suggested by Strauss and Corbin (1990) was used for coding. The codes were divided into eight categories in total. These categories are shown in Table 1 with examples. The examples given in Table 1 were taken from the written statements of the participants or used during the interviews as a description of the category. The participants were numbered from A1 to A25. The categorized codes, categories and relativity topics were evaluated comparatively, and we tried to understand the reasons why the participants had difficulties in understanding.

Validity and Reliability

According to Lincoln and Guba (1985), it is more appropriate in qualitative studies to use the concepts of trustworthiness instead of internal validity, transferability instead of external validity, dependability instead of internal reliability and confirmability instead of external reliability. Trustworthiness was ensured by using the two different data sources mentioned above. Purposive sampling was used in order to increase the external validity of the study. Also, the validity of the data obtained from the participants was increased by quoting the participants directly. The data obtained from the participants using different data collection tools were repeatedly compared and tested for consistency. In the same way, the consistency of the interpretations was also tested. The findings were repeatedly compared with the raw data in an attempt to increase confirmability. Also, the data analyses and conclusions were evaluated separately by two researchers, whose evaluations were then reconciled.
RESULTS

The raw data obtained in the study were coded and categorized. Some of the codes were positive and some were negative. For example, codes stating that the student had no difficulties or explaining the reasons that made it easier to learn the subject were considered "positive"; codes stating that the student had difficulties and explaining the reasons that made it difficult to understand the subject were considered "negative". These codes and categories were evaluated both separately for each topic and as a whole. A total of 96 code types, 38 positive and 58 negative, were identified. These codes were repeated by the 25 participants a total of 296 times: 107 positive and 189 negative. Frequent repetition of a code indicates that the participants wanted to say something about the topic and attached importance to the idea the code expresses. In addition, the more code types a category contains, the more varied the difficulties experienced. The number of code types and the distribution of categories according to the topics are given in Table 2. As seen in Table 2, a minimum of 11 and a maximum of 20 codes were determined for the Special Relativity topics. The minimum number of codes was found for the Lorentz Velocity Transformation Equations topic and the maximum number of codes for the Relativity of Time topic. The numbers shown in parentheses in the code number column are the numbers of "positive" and "negative" codes, respectively. Proportionately, the Relative Energy topic had the highest number of codes, while Relative Momentum had the lowest.

When the codes were divided into categories, the codes belonging to the Lorentz Transformation Equations and Lorentz Velocity Transformation Equations topics fell into the fewest categories (4), while the codes belonging to the Relative Energy topic fell into the most categories (8). The categories seen in all topics were C1, C3, C4 and C6. Category C8, which was seen in only two topics, was the least common category.

The numbers of codes were broken down by their positive or negative character and by Special Relativity topic. The resulting distribution is given in Table 3. Examining the distribution of codes according to categories as shown in Table 3, it is seen that negative codes were in the majority in most categories, except for C2 and C5. Examining the distribution by topic, the number of positive codes was slightly higher for the Relativity of Length and Relative Energy topics, while the number of negative codes was slightly higher for the Relative Momentum topic. In all other topics, the number of negative codes was significantly higher.
The C1 category had the highest number of codes in total, both positive and negative; the number of codes in this category was almost equal to half the total number of codes across all categories. Looking at the data in the C1 category as given in Table 3, it is seen that negative opinions were generally dominant. This is especially evident in the Lorentz Transformation Equations and Lorentz Velocity Transformation Equations topics. On the other hand, positive opinions were in the majority in the Relative Energy and Relative Momentum topics, and the numbers of positive and negative opinions were equal in the Relativity of Length topic. Examining the negative opinions, which dominated the majority of the topics, a considerable number were related to difficulties in determining the reference system, and this difficulty was reflected in problem solving and in using the relations. Participant A1's remark, "I'm having trouble with determining the reference system. I can't make out which reference system the quantity was measured with", is a good example that demonstrates the effect of difficulties in determining the reference system on problem solving.

In most of the topics, there were no positive or negative codes related to C2. Accordingly, it may be thought that there were no methodological problems affecting the learning process. It is noteworthy that most of the codes for Relativity of Time, Relativity of Length and Relative Energy were positive. Examining the content of the codes, it is seen that adjusting the explanation of the topic to the context, watching documentaries and using thought experiments had positive effects on teaching. For example, participant A5's remark, "...it's easier to understand the topic (time dilation) with the Twin Paradox", and participant A3's remark, "The topic (length shortening) becomes more understandable when we use examples with objects that we are used to (use in everyday life)", emphasize the importance of thought experiments and of adjusting the explanation of the topic to the context.
The C3 category presents a poor picture across all topics in terms of the number of opinions. The codes are generally negative, except for Relative Energy. Examining the content of the codes, it is seen that there was a bias about the difficulty of the topic. The participants indicated that they found the content of the topic to contradict their common sense and everyday experiences. Participant A24's remark, "I can't associate it (Relativity of Time) with daily life. It contradicts with all my experiences since childhood. It (Relativity of Time) is a situation that I have never felt/experienced before", is a good example of this. The remark of participant A9, "It is not strange for me anymore that the connection are different (from classical physics)", stands out among the positive opinions on Relative Energy. This may indicate that the students accepted the concepts and that the phase of finding the new ideas odd was overcome as they proceeded through the Special Relativity topics. Additionally, it may be concluded from the codes that the popularity of the E = mc² equation suggested by Einstein for relative energy was quite effective. The topic was interesting for the participants, and their high curiosity about the topic was reflected positively in the codes. For example, participant A6's remark, "I found E = mc² to be interesting because it's such a popular formula and...", and participant A21's remark, "...I think of Einstein when I think of physics and I think of E = mc² when I think of Einstein and this made me curious", exemplify this.

Examining the distribution of codes in the C4 category, which involved codes related to the transition from classical physics to relativistic physics, it is seen that negative codes were in the majority, and the number of codes was higher than in most other categories: C4 had the highest number of codes after C1. The concentration of negative opinions in the Relativity of Time and Relativity of Length topics may be due to the fact that they constitute the first step in understanding Special Relativity. Participant A2's remark, "I found the concepts and imagining them in my mind to be difficult, because it (Relativity of Time) is the first topic of the transition from classical physics to relativity", clearly shows this. However, there was also a concentration of negative opinions on Relative Energy and Relative Momentum. Examining the content of these codes, difficulties experienced with the momentum and energy topics in classical physics were also evident in the relative momentum and relative energy topics. Participant A24's remark, "I'm having difficulties with momentum in classical physics as well. It's not a topic that I can get a grasp of. That's why I'm having trouble with relative momentum too", emphasizes this situation. Additionally, the participants indicated that they were not able to distinguish when they were supposed to use the concepts of momentum and energy according to the classical approach and when according to the relativistic approach. It was found that the participants had trouble understanding why they needed the concepts of relative energy and relative momentum. Not being able to fully rationalize the mass-energy equivalence was also among the negative opinions.
The number of codes was not high in the C5 category, which involved codes related to relations between the relativity topics, and there were no codes in this category for the Lorentz Transformation Equations and Lorentz Velocity Transformation Equations topics. The codes for Relative Momentum and Relative Energy were all positive. Although the relations between the Special Relativity topics were not generally considered difficult, the only topic that attracted no positive opinions but only negative ones was Relativity of Time. Looking at the content of the negative opinions, the participants stated that they had difficulty in handling Relativity of Time together with the Lorentz Transformation Equations. It was emphasized that the opposite behaviour of time and length under relativistic conditions (length contracting while time dilates) caused confusion. Considering the positive codes for Relative Momentum and Relative Energy, on the other hand, the participants stated that, since they had already understood the logic of Special Relativity, they had no difficulty in associating Relative Momentum and Relative Energy with the other relativity topics.

C6 is another category with a high number of codes derived from the opinions of the participants, the majority of them negative. Although the numbers of positive and negative codes were almost equal for the Relativity of Time and Relativity of Length topics, negative codes were dominant in the other topics. Considering the content of the negative opinions, it was emphasized that the subject required effort and time. It is also noteworthy that the number of students repeating the course was quite high. The participants emphasized that they did not understand Special Relativity the first time, but were able to understand it after repeating the course. The content of the positive codes for Relativity of Time and Relativity of Length generally consisted of opinions related to the ease of understanding these topics.

There was a relatively low number of codes in the C7 category, which involved opinions related to the concreteness or abstractness of the topic. There were no positive or negative codes for the Lorentz Transformation Equations and Lorentz Velocity Transformation Equations topics. However, the number of negative codes was high, especially for Relativity of Time and Relative Energy. Participant A11's remark, "I had trouble because they are abstract concepts, I couldn't imagine them", is an example. The participants generally considered the concepts of time and energy to be abstract and difficult. In addition, participant A25's remark, "I easily understood length because it is a concrete quantity", demonstrates the ease of understanding associated with concrete objects. On the other hand, participant A13's remark, "I can accept the change easily because time is not concrete. But it is hard to accept length shortening because it is related to a concrete substance", represents a divergent opinion.
The codes that resulted from contradictions in the sources were collected in the C8 category, the category with the fewest codes. In this category, there were no codes for any topics other than Relative Momentum and Relative Energy, and all the codes were negative. Some sources mention that mass varies relativistically with velocity, while other sources indicate that this is wrong and that mass does not vary with velocity. The participants stated that they were confused because different sources contained different information. For example, participant A3 explains this situation clearly: "The fact that there are two different explanations for the relativity of mass leads to confusion." and "It seems as if there were two different masses in the (relative) kinetic energy formula. It is very difficult to understand this topic (Relative Energy)."

CONCLUSION AND DISCUSSION

Based on the study data, it can be concluded that the participants found the subject of Special Relativity interesting. According to Ogborn (2005), although students find Special Relativity very interesting when they first hear about the concept of time dilation and the mysterious formula E = mc², the mathematical difficulties they experience when they meet the Lorentz transformations cause them to lose interest. In the present study, mathematical difficulties stand out as one of the problems that the participants faced when learning about relativity, and these difficulties are seen to be most dominant in the Lorentz Transformation Equations. Chief among these mathematical problems were those related to determining the reference system. The participants mentioned that, when solving problems, they had difficulty in understanding which quantity was measured by which observer or in which reference system, even when they understood that the problem belonged to the Special Relativity topic. Aslanides and Savage (2013) found that students could not grasp correct relativistic thinking and could not identify the symmetry between two reference frames. In the present work, the participants stated that they had comprehended relativity; perhaps the problem was that they could not articulate correct relativistic thinking exactly. This could be investigated in more detail. Some studies emphasize challenges experienced by students with reference systems at various levels, similar to the challenges observed in this study (Dimitriadi & Halkia, 2012; Scherr et al., 2002). Additionally, difficulties related to visual and spatial skills, remembering the formulas, constructing the problems and applying mathematical skills in problems were observed. Taking the statements of the participants into account, it was seen that using storytelling and visualization in the presentation of problems was useful; the use of such methods in problem presentations could therefore be increased.

However, it was also found that the participants were biased about the difficulty of the topic. The many rumors about the very difficult nature of relativity caused students to approach the course with a bias. According to the participants, another reason why the topics of relativistic physics are so difficult to learn is that they require extra effort and time.
Another point where students have learning difficulties is the paradigm shift from classical physics to modern physics, because the events that the participants encounter in their everyday lives can generally be explained within the classical physics paradigm. Moreover, it is possible to conduct real experiments in classical physics, whereas relativistic physics is not encountered in everyday life and does not lend itself to real experiments. In addition, relativistic physics usually produces results that contradict everyday experience and perception. Scherr (2007) identified that it may be because of beliefs acquired in daily life that students find it difficult to learn the relativity of simultaneity; according to Scherr (2007), the experience we gain in everyday life leads us to believe that simultaneity is absolute. This situation is clearly evident among the difficulties that the participants faced when learning about relativity. Difficulties related to the abstractness of relativistic physics, problems associating it with everyday life, and problems imagining the concepts were clearly stated by the participants. In order to overcome these difficulties, some studies suggest that special relativity be taught through visualisation, using computer programs such as animations, simulations and games (Carr & Bossomaier, 2011; Henriksen, 2014; Kortemeyer et al., 2013; Kraus, 2008; McGrath et al., 2010; Savage et al., 2007; Wegener et al., 2012). The use of thought experiments as an effective tool in teaching special relativity is also common (Cacioppo & Gangopadhyaya, 2012; Cornier & Steinberg, 2010; Franklin, 2010). Although the statements of the participants underlined the importance of thought experiments, they also indicated difficulties in understanding them.

Some sources (Born, 1962; Feynman, 1997) state that mass varies with velocity, based on the experimental validation of the predictions of special relativity. In recent years, it has been argued that the concept of velocity-dependent mass is a misunderstanding and that this should be corrected in all books and curricula (Hecht, 2009; Okun, 1989). Some books featuring special relativity changed the parts about the concept of "relative mass" in later editions (Serway & Beichner, 2000; Ünlü et al., 2014). Thus, the contradictory explanations of "relative mass" in sources on special relativity caused confusion, and the pre-service teachers who participated in this study stated their difficulties in this regard. Selçuk (2011) addressed a similar situation in detail in his study.
Table 1. Categories and Descriptions (recoverable excerpt)

C5: Difficulties in associating the topics of relativity with each other. Examples: A4: "It is confusing that the length shortens while the time expands."; A6: "After learning the concept of relativity in the beginning, it was easier to understand (relative momentum)."; A24: "...I understand the length shortening. But I can't associate it with Lorentz transformation."; A13: "The greatest difficulty of relative momentum is that how the momentum can be relative, if the mass is not?"

Other example statements from the table: A5: "Understanding the thought experiments requires effort."; A15: "It was not hard for me to accept the relativity of time; I had read a book before because I had thought time travel was interesting."; A18: "It is easier to solve problems with pictures and images, but it is hard to imagine other problems."; A3: "...(in classical physics) we were only talking about a single time (compared to reference systems). It is very difficult to transit from the idea of classical time to relative time."; A17: "I have always found the topic of energy very confusing. ...I passed the mechanics course by memorizing."; A19: "I clearly understood the concept of relative momentum, because I had understood the concept of momentum in classical physics very clearly."

Table 2. Distribution of codes and categories in the topics of Special Relativity.

Table 3. Distribution of the number of codes according to Special Relativity topics and categories.
GPareto: An R Package for Gaussian-Process Based Multi-Objective Optimization and Analysis

The GPareto package for R provides multi-objective optimization algorithms for expensive black-box functions and an ensemble of dedicated uncertainty quantification methods. Popular methods such as Efficient Global Optimization in the mono-objective case rely on Gaussian processes or kriging to build surrogate models. Driven by the prediction uncertainty given by these models, several infill criteria have also been proposed in a multi-objective setup to select new points sequentially and efficiently cope with severely limited evaluation budgets. They are implemented in the package, along with Pareto front estimation and uncertainty quantification visualization in the design and objective spaces. Finally, the package attempts to fill the gap between expert use of the corresponding methods and user-friendliness, with substantial effort put into graphical post-processing, standard tuning and interactivity.

Introduction

Numerical modeling of complex systems is now an essential process in fields as diverse as the natural sciences, engineering, quality control and economics. Jointly with modeling efforts, methods have been developed for the exploration and analysis of the corresponding simulators, in particular when runs are time consuming. A popular approach in this case is to rely on surrogate models to alleviate the computational expense. Many surrogate models are used in practice: polynomials, splines, support vector regression, radial basis functions, random forests or Gaussian processes (GP). They may be integrated in various optimization strategies, see e.g., Wang and Shan (2007), Santana-Quintero, Montano, and Coello (2010), Tabatabaei, Hakanen, Hartikainen, Miettinen, and Sindhya (2015) and references therein. We focus here on GP-based strategies, which have been recognized as very well suited for sequential designs of experiments, in particular in an optimization context (Jones, Schonlau, and Welch 1998; Jones 2001).

The GPareto package proposes Gaussian-process based sequential strategies to solve multi-objective optimization (MOO) problems in a black-box, numerically expensive simulator context. More precisely, it considers the case of models with multiple outputs, y^(1)(x), …, y^(q)(x) (where y^(i): X ⊂ R^d → R), that are optimized simultaneously over a box-constrained domain X. Typically, the outputs (or objectives) are conflicting (e.g., quality versus quantity, etc.), so there exists no solution where all objectives are minimized at once. The goal is then to identify the set of optimal compromise solutions, called the Pareto set (Collette and Siarry 2003). Defining that a point x* dominates another point x if all its objectives are better (which we denote by x ⪯ x* in the following), the Pareto set X* is the subset of the non-dominated points in X: ∀x* ∈ X*, ∀x ∈ X, ∃k ∈ {1, …, q} such that y^(k)(x*) ≤ y^(k)(x). The image of the Pareto set in the objective space, y^(1)(X*), …, y^(q)(X*), is called the Pareto front, which is useful for practitioners to select solutions (see Figure 3 for an illustration). In practice, the Pareto set is usually not finite, and optimization strategies aim at providing a finite set that represents X* well.
In general, numerical optimization has motivated substantial activity in the R community: see for instance the CRAN Task View on Optimization and Mathematical Programming (Theussl and Borchers 2015) or the recent special Journal of Statistical Software issue (Varadhan 2014). However, most works are dedicated to mono-objective optimization with large budgets. For small budgets, the packages DiceOptim (Roustant, Ginsbourger, and Deville 2012; Ginsbourger, Picheny, and Roustant 2015) and tgp (Gramacy 2007; Gramacy and Taddy 2010) propose GP-based techniques, but for mono-objective problems only. There are a few packages on MOO in general: nsga2R (Tsou 2013), emoa (Mersmann 2012), mopsocd (Naval 2013), goalprog (Novomestky 2008) and mco (Mersmann 2014), which provide tools and algorithms such as NSGA-II (non-dominated sorting genetic algorithm II) implementations (Deb, Pratap, Agarwal, and Meyarivan 2002) or hypervolume computations (see Section 2.2). As for methods for expensive black-box function optimization, the package SPOT (Bartz-Beielstein and Zaefferer 2012) seems to be the only alternative to GPareto. On the other hand, GP-based MOO has recently generated substantial activity in the statistical and optimization communities, with a focus either on sampling strategies (Ponweiser, Wagner, Biermann, and Vincze 2008; Wagner, Emmerich, Deutz, and Ponweiser 2010; Svenson 2011; Emmerich, Deutz, and Klinkenberg 2011; Picheny 2015; Zuluaga, Sergent, Krause, and Püschel 2013) or on uncertainty quantification (Bhardwaj, Dasgupta, and Deb 2014; Calandra, Peters, and Deisenroth 2014; Binois, Ginsbourger, and Roustant 2015a). GPareto aims at filling this gap by making most of the recent approaches available in a unified implementation to both MOO experts and end-users. To this end, substantial effort has been devoted to graphical visualization and standard tuning, and many entry points ranging from high-level interfaces to specific method tuning have been made available.

GPareto is built upon the DiceKriging package (Roustant et al. 2012) dedicated to Gaussian process modeling. Several associated packages deal with related problems, in particular DiceOptim (mono-objective optimization) and KrigInv (algorithms for inversion problems) (Chevalier, Picheny, and Ginsbourger 2014a,b). GPareto shares many aspects with those packages. This document is also available as a vignette in the package, which can be obtained from the Comprehensive R Archive Network at https://CRAN.R-project.org/package=GPareto along with the full PDF documentation.

The remainder of this paper briefly reviews the methods available in the package, describes important implementation aspects and functionalities, and provides illustrations through a few examples.

Principles of Gaussian-process based optimization

We recall here very briefly the scheme common to most GP-based (mono- or multi-objective) optimization, as in the famous EGO (efficient global optimization) algorithm proposed in the seminal article of Jones et al. (1998).

The mono-objective case

Let y be the output of the numerical model of interest and x ∈ R^d the inputs to be optimized over. Considering for now that y is a scalar, it is assumed to be a realization of a Gaussian process F(x) with mean µ(x) and covariance c(x, x′) known up to some parameters.

Step 1 Generate an initial set of n observations: y_1 = y(x_1), …, y_n = y(x_n). Typically, {x_1, …, x_n} are chosen using a space-filling design. A classical rule of thumb is to use n = 10 × d.
Step 2 Fit the GP model to the data by estimating the mean µ(x) and covariance c(x, x′). Typically, a parametric form is assumed for those functions, whose parameters are adjusted, e.g., using maximum likelihood estimation. The GP model is the distribution of Y(x) conditional on the observations y_1, …, y_n, with the plugged-in mean µ and covariance c.

Step 3 A new point x_{n+1} is chosen as the maximizer of a so-called infill criterion based on the GP model. This step requires running an inner optimization loop to find the best point over R^d.

Step 4 A new observation y_{n+1} = y(x_{n+1}) is obtained by running the simulator, and the GP model is updated by conditioning on y_{n+1}. At this step, the estimates of µ and c might be updated.

Steps 3 and 4 are repeated until the simulation budget is exhausted or a stopping criterion is met.

There are many R packages to perform Step 1, see for instance planor (Kobilinsky, Bouvier, and Monod 2015), DiceDesign (Dupuy, Helbert, and Franco 2015), or lhs (Carnell 2016). For Step 2, GPareto relies on the DiceKriging package, which offers a choice of mean and covariance functions. The model parameter estimation is based on maximum likelihood, see Roustant et al. (2012) for details. Step 3 defines the sampling strategy, as the infill criterion determines the balance between exploration (search for new solutions) and exploitation (local improvement around existing observations). The EGO algorithm is based on the so-called expected improvement (EI) criterion. The improvement is defined as the difference between the current minimum of the observations and the new function value, so that for a GP model EI is the conditional expectation of the improvement provided by a new observation Y(x):

EI(x) = E[max(0, min(y_1, …, y_n) − Y(x)) | Y(x_1) = y_1, …, Y(x_n) = y_n],

which has a closed-form expression (see Jones et al. 1998, for calculations).

Noisy objectives

In many optimization problems, the objective cannot be evaluated exactly but only through a "noisy" procedure, that is, one only has access to measurements of the form f_i = y(x_i) + ε_i. A classical hypothesis, adopted here, is to assume independent centered Gaussian noise, that is, ε_i ∼ N(0, τ_i²) independently. GP modeling naturally adapts to this case (see for instance Ankenman, Nelson, and Staum 2010), and the package DiceKriging offers options to take noise into account.

However, the EGO algorithm may not be used directly; Picheny, Wagner, and Ginsbourger (2013) provide a review of the extensions that have been proposed to handle noisy objectives. Among those, the reinterpolation approach of Forrester, Keane, and Bressloff (2006) is attractive, since it amounts to building a secondary noiseless GP that can be used directly with EGO. As shown in Koch, Wagner, Emmerich, Bäck, and Konen (2015), this approach can be readily applied to the multi-objective case, and it is implemented in GPareto.

The multi-objective case

When multiple objectives are considered (y has values in R^q), Steps 2 and 3 need to be modified. Let us first remark that it is possible to go back to a scalar problem and apply standard methods, for instance by relying on objective aggregation (Knowles 2006; Zhang, Liu, Tsang, and Virginas 2010) or on modeling desirability functions (Henkenjohann and Kunert 2007). However, these have been found to be relatively poor solutions in practice (Henkenjohann and Kunert 2007; Svenson 2011).
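As a small aside before moving to the multi-objective setting, the closed-form EI above is easy to evaluate from a fitted DiceKriging model; the toy data and model below are ours, not part of GPareto's examples.

# Sketch of the closed-form expected improvement for a km model (toy data).
library(DiceKriging)

set.seed(1)
X <- data.frame(x = runif(12))
y <- apply(X, 1, function(x) sin(6 * x) + 0.3 * x)   # toy scalar objective
model <- km(design = X, response = y)                # Step 2: fit the GP model

EI <- function(xnew, model, ymin) {
  p <- predict(model, newdata = data.frame(x = xnew), type = "UK")
  z <- (ymin - p$mean) / p$sd
  # Closed form of Jones et al. (1998): exploitation term + exploration term.
  (ymin - p$mean) * pnorm(z) + p$sd * dnorm(z)
}
EI(0.5, model, ymin = min(y))   # candidate score for the Step 3 inner optimization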
GPareto focuses on approaches where GP models are fitted independently to each objective. Although it is possible to account for correlation between the different objectives, for instance using co-kriging models (e.g., Álvarez, Rosasco, and Lawrence 2011), experimental results in Svenson (2011) and Kleijnen and Mehdad (2014) suggest that this provides little benefit compared to the additional complexity.

Choosing infill points from a set of GP models is a complex question (see Section 2.2). Within GPareto, we focus on approaches that compute a single infill criterion from the list of models. Hence, Step 3 is identical to the mono-objective case, provided that an adequate infill criterion is used.

Review of surrogate-based and Bayesian multi-objective optimization

In the mono-objective case, the expected improvement criterion evaluates the potential gain of an additional point in terms of the expected decrease over the best observation so far. In a similar fashion, a multi-objective improvement function can be defined by estimating the expected "progress" brought by a new observation (relative to the current set of non-dominated observations P_n). This leaves room to put the focus either on good coverage, on the extremities, or on convergence toward the actual Pareto front, for which specific metrics, such as the hypervolume or epsilon indicators, have been proposed (see e.g., Svenson 2011; Emmerich et al. 2011). Specifically, the hypervolume improvement is the increment of the volume contained between the Pareto front and a reference point in the objective space when a non-dominated point is added. The epsilon increment is the smallest scalar that must be added to the components of a new point (in the objective space) such that it is dominated by the current Pareto front. An illustrative example is given in Figure 1: in terms of epsilon improvement, the green point is more interesting as it is farther away from the Pareto front, but the blue point is better in terms of volume increment.

These indicators, among others, have been used to define generalizations of the expected improvement. Empirical comparisons showed the clear superiority of some approaches over others (Svenson 2011; Wagner et al. 2010), but no global consensus on a particular improvement function. In GPareto, two infill criteria derived from this point of view are available: the expected hypervolume improvement (EHI, Emmerich et al. 2011) and the expected maximin improvement (EMI, Svenson and Santner 2016, related to the epsilon indicator). See the corresponding references for the technical details.

Two alternatives have been included in GPareto as well. First, in the SMS-EGO approach (S-metric selection EGO, Ponweiser et al. 2008; Wagner et al. 2010), the improvement is calculated as the hypervolume added to the current Pareto front by the lower confidence bound of the prediction at x; hence it is closely related, but not equal, to the EHI. To avoid large plateaus of zero improvement, an adaptive penalization is provided in regions where the lower confidence bound is dominated.
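To fix ideas on the hypervolume improvement underlying the EHI- and SMS-type criteria, here is a small sketch using the hypervolume routine of the emoa package mentioned above; the front, candidate and reference point are invented for the example.

# Hypervolume improvement of a candidate point (bi-objective toy example).
library(emoa)   # dominated_hypervolume() expects one point per column

pf <- matrix(c(0.2, 0.8,
               0.5, 0.5,
               0.9, 0.1),
             ncol = 2, byrow = TRUE)   # current non-dominated front, one point per row
ref <- c(1.1, 1.1)                     # reference point bounding the volume

hv_improvement <- function(candidate, pf, ref) {
  dominated_hypervolume(t(rbind(pf, candidate)), ref) -
    dominated_hypervolume(t(pf), ref)
}
hv_improvement(c(0.3, 0.4), pf, ref)   # volume added by the new point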
Finally, the stepwise uncertainty reduction (SUR) criterion of Picheny (2015) is concerned with the probability of non-domination (also called the probability of improvement), that is, the probability of a point not being dominated by the current Pareto set: P[x ̸⪯ X_n]. Intuitively, regions in the design space with non-null probabilities indicate a potential improvement of the Pareto front, and the improvement considered is the reduction of the average of this probability over the design space.

These sequential infill criteria share the common trait that they do not provide a continuous representation of the Pareto front but only consider the current set of non-dominated observations. This point is addressed in the following with a quantification of the uncertainty on both the Pareto set and front.

Uncertainty quantification

With limited evaluation budgets, the non-dominated solutions in the objective and variable spaces may not give a very precise or dense approximation of the Pareto front and set. However, the Gaussian process framework allows us to overcome this limitation by providing an uncertainty quantification of the optimization results.

Pareto front (objective space)

One straightforward idea is to use the surrogate models to give an estimate of the Pareto front, as is done e.g., in Calandra et al. (2014). While fast, this approach is very dependent on the quality of the surrogates and comes with no associated measure of uncertainty. In Binois et al. (2015a), an alternative relying on conditional simulations of the Gaussian process models is detailed, which provides an estimate of the Pareto front and an associated measure of uncertainty.

In short, it exploits the capacity of the GP models to generate different possible realizations S_1^(i), …, S_N^(i) (1 ≤ i ≤ q) of the outputs conditioned on the observations, i.e., conditional simulations, see Figure 2. For each set of paths, a Pareto front is obtained (say, P^(1), …, P^(N)). The set of fronts is then used to define an average set P estimating the true Pareto front, while the deviation from this set is used as a measure of uncertainty. Note that handling sets of conditional Pareto fronts as performed in Binois et al. (2015a) requires the use of random closed set theory (Molchanov 2005); in particular, the estimator and uncertainty measure used are the Vorob'ev expectation and the Vorob'ev deviation, respectively. Visually, representing the deviation for each random Pareto front directly illustrates which parts of the Pareto front are precisely known and which are not (see Figure 5). As described in Binois et al. (2015a), the current version of this approach requires conditional simulations on discrete sets of inputs (for instance, a grid or a space-filling sample, which is the solution adopted in GPareto, see Section 3.3). This set must be large to ensure that no important potential solution is missed, which makes the approach computationally intensive.

Pareto set (variable space)

In a similar fashion, returning a smooth estimate of the entire Pareto set X* may be useful to practitioners. We propose here to rely on two complementary approaches. First, conditional simulations can again be used: from each set of GP realizations, a Pareto set X*_i can be obtained. The sets X*_1, …, X*_N can then be used to estimate a density, e.g., using kernel density estimation.
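The conditional-simulation mechanism can be sketched as follows, assuming m1 and m2 are km models fitted to each objective (hypothetical here, e.g., built as in the EI sketch above); the grid size, number of paths and the naive Pareto filter are all illustrative.

# Conditional Pareto fronts from GP sample paths (bi-objective sketch).
grid  <- data.frame(x = seq(0, 1, length.out = 200))
sims1 <- simulate(m1, nsim = 30, newdata = grid, cond = TRUE)  # 30 paths, objective 1
sims2 <- simulate(m2, nsim = 30, newdata = grid, cond = TRUE)  # 30 paths, objective 2

nondominated <- function(Y) {   # rows of Y are points in the objective space
  keep <- rep(TRUE, nrow(Y))
  for (i in seq_len(nrow(Y)))
    keep[i] <- !any((Y[, 1] < Y[i, 1] & Y[, 2] <= Y[i, 2]) |
                    (Y[, 1] <= Y[i, 1] & Y[, 2] < Y[i, 2]))
  Y[keep, , drop = FALSE]
}

# One conditional Pareto front per simulated path; their spread across the 30
# paths is what the Vorob'ev expectation and deviation summarize.
fronts <- lapply(1:30, function(j) nondominated(cbind(sims1[j, ], sims2[j, ])))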
A complementary measure is the probability for a given point in the variable space to be non-dominated by the current set of observations, P[x ̸⪯ X_n]. This probability can be expressed in closed form (Keane 2006), so that it can be computed, for instance, on a grid to display the dominated and non-dominated regions in the variable space. The amount of intermediate probability values (not zero or one) quantifies the uncertainty on the Pareto set (Picheny 2015). Note that both approaches require extensive sampling over the design space, which makes them computationally intensive.

Architecture

The structure of the package reflects its main orientations: multi-objective optimization and the associated quantification of uncertainty. In particular, readers familiar with the DiceOptim and KrigInv packages will find a very similar set of functions, ranging from high-level interfaces to lower-level criteria. Additional helper functions are also provided, as well as test functions.

Functions related to sequential design of experiments

As described in Section 2.1, Gaussian-process based optimization can be separated into four steps. Depending on the characteristics of the problem at hand, several levels of control are available. For the sake of clarity, we start by describing the highest-level functionalities before detailing routines that give more control over the optimization process or that may be integrated in other procedures.

User-friendly wrapper: easyGParetoptim

This is a simple interface to multi-objective optimization that performs all the steps described in Section 2.1 and does not require much knowledge of the specificities of Gaussian-process based optimization. If no additional control parameters are set, Steps 1-4 are all performed according to default values. The minimal arguments of easyGParetoptim are the following, in common with many optimization methods in R such as optim:

• fn, the multi-objective function that returns the values of the objectives at a given design;
• budget, the maximal number of evaluations of the expensive black-box function fn;
• lower, upper, vectors giving the limits of the domain for optimization.

A design of experiments may be passed using the argument par, with the corresponding values provided through values; otherwise a maximin LHS design is constructed with DiceDesign. Noisy objectives can be handled with the argument noise.var, which stands for the noise variance; we assume here that the user has prior knowledge of the variance. The two main options are to provide a vector of size q (constant noise) or a function (with the same arguments as fn) if the noise depends on x. Additional tuning of the inner procedures is available through the control list, in particular the criterion (method) and the optimization routine for the acquisition function (inneroptim). By default, easyGParetoptim uses SMS as the criterion, with pso as the inner optimization routine. Both choices have been made to favor speed while ensuring robustness; they are also the defaults for the GParetoptim routine described next.
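A hypothetical call on a toy bi-objective problem, using only the arguments documented above, might look as follows; the objective function is invented, and we assume the returned list exposes fields named after the par/values inputs.

# Toy call to easyGParetoptim (illustrative problem and budget).
library(GPareto)

fn <- function(x) c(sum((x - 0.2)^2), sum((x - 0.8)^2))  # two conflicting objectives
res <- easyGParetoptim(fn = fn, budget = 40,
                       lower = c(0, 0), upper = c(1, 1))
res$par     # non-dominated designs found
res$value   # corresponding Pareto front approximation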
GParetoptim

This function handles Steps 3 and 4, hence assuming that users have performed a design of experiments and built surrogate models at their convenience, which they provide in the argument model. Besides fn, lower, upper and noise.var, shared with easyGParetoptim, more parameters are directly exposed, such as crit for selecting the infill criterion or cov.reestim to decide whether or not hyperparameters are updated after adding new observations. More flexibility is given via the control lists optimcontrol, for the optimization of the infill criterion, and critcontrol, for the parameters of the latter, which are also useful for the following crit_optimizer function.

crit_optimizer

Optimizing the criteria, a.k.a. acquisition functions, is quite complicated due to their multimodality: see Figure 5 for an illustration. Besides, in general, no derivative expressions are available and there are large plateaus. On top of that, the attraction basin of the global optimum of the infill criterion may have a very small volume in the variable space (see Roustant et al. 2012, for an illustration of this problem). Nonetheless, acquisition functions are typically much cheaper to evaluate than the objective functions, so intensive optimization can be carried out. Three solutions to perform this inner optimization are provided in GPareto:

1. the user can provide a set of candidate points with optimcontrol in crit_optimizer and GParetoptim (hence reducing the problem to a discrete search);
2. the default optimization routine is genoud (Mebane and Sekhon 2011), a genetic algorithm;
3. the psoptim optimization method (Bendtsen 2012), a particle swarm algorithm, is also provided.

The corresponding tuning parameters may be passed via optimcontrol. Passing any other optimization method is also possible, provided that it works like the standard optim method in R from package stats (R Core Team 2015).

Criteria functions

Four criteria are available in GPareto 1.1.6:

• crit_SMS for the SMS-EGO criterion (Ponweiser et al. 2008; Wagner et al. 2010) (based on the MATLAB source code of the authors);
• crit_EHI for the expected hypervolume improvement criterion (Emmerich et al. 2011) (based on the MATLAB source code of the authors for the bi-objective case);
• crit_EMI for the expected maximin improvement criterion (Svenson and Santner 2016; Svenson 2011);
• crit_SUR for the expected excursion volume reduction criterion (Picheny 2015).

The crit_SMS criterion has an analytical expression for any number of objectives, while the one for crit_EHI holds only for the bi-objective case. There is a semi-analytical formula for crit_EMI for two objectives. Note that the formula for crit_EHI is coded using Rcpp (Eddelbuettel and François 2011; Eddelbuettel 2013), which offers considerable speed-up over an R implementation.

With m > 2, computations of crit_EMI and crit_EHI rely on sample average approximation (SAA) (Shapiro 2003), as proposed, e.g., in Svenson (2011). The principle is to take samples $\mathbf{Y}(x)^{(1)}, \ldots, \mathbf{Y}(x)^{(p)}$ from the posterior distribution of $\mathbf{Y}(x)$ and take the average of the improvement function over these samples: $\frac{1}{p} \sum_{k=1}^{p} I\big(\mathbf{Y}(x)^{(k)}\big)$. Note that a large sample size p is often needed to obtain a good approximation, at the cost of computational time. By default, the number of SAA samples nb.samp is set to 50.

crit_SUR requires integrating some quantities over the design space X, which must be done numerically, making this criterion computationally intensive. Similarly to the KrigInv package (Chevalier et al.
2014a), several alternatives to select integration points are provided via the function integration_design_optim, including uniformly distributed random points, quasi-Monte Carlo sequences, as well as importance sampling schemes (as described in Picheny 2015). For now, crit_SUR is available for two and three objectives.

In terms of complexity, both crit_EHI with m > 2 and crit_SMS use the hypervolume computations provided in the emoa package (much more frequently for the former, which is thus slower). These have exponential complexity in the number of objectives and also depend on the number of points in the Pareto front. For crit_EMI, the complexity mainly depends on the number of sample points for the SAA approximation and scales linearly in the number of objectives; it is thus more affordable than crit_EHI for more than two objectives. For crit_SUR, the complexity is essentially related to the integration over the input domain, which can become cumbersome with many variables.

Importantly, except for crit_SUR, these criteria depend on the relative scaling of the objectives, i.e., multiplying one objective by a constant modifies the results. Scaling may be performed by the user, e.g., from the maximum and minimum values observed for each objective, as in Parr (2012) or Svenson (2011). In addition, crit_EHI and crit_SMS need a reference point for bounding hypervolume computations. If no reference point is given by the user with refPoint, we set it to the componentwise maximum of the observed objective values, $R_i = \max_j f_i(\mathbf{x}_j)$.

User-friendly wrapper: plotGPareto

Results given by easyGParetoptim or GParetoptim can be visualized using the plotGPareto function. The default output of this function is to display only the points visited during optimization along with the optimal points. Depending on the number of objectives, the Pareto front approximation is a simple plot (two objectives), a perspective view of the Pareto front (three) or a representation in parallel coordinates (Inselberg 2009) (more than three).

Then, three different outputs are possible to improve insight into the algorithm results. These can be obtained either by setting options of plotGPareto or directly by calling the corresponding functions:

• an estimation of the Vorob'ev expectation giving the expected location of the Pareto front along with a visualization of the corresponding uncertainty (option UQ_PF = TRUE, or with CPF and plotSymDevFun);
• an estimation of the density of Pareto optimal points in the variable space (option UQ_dens = TRUE, or with ParetoSetDensity);
• a visualization of the probability of non-domination in the variable space (option UQ_PS = TRUE, or with plot_uncertainty).

Uncertainty quantification on Pareto front

The entry function is the creator of the 'CPF' class (for conditional Pareto front), which deals with computing the probability for a target in the objective space to be dominated, also known as the attainment function, as well as the Vorob'ev expectation (VE) and Vorob'ev deviation (VD), from a grid discretization. It takes as main arguments:

• fun1sims, fun2sims, the sets of conditional simulations for both objectives, which can be computed for instance using the simulate function of DiceKriging;
• response, the known objective values.
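As a small sketch (reusing the conditional simulations sims1 and sims2 drawn earlier, and assuming response.init holds the observed objective values), a 'CPF' object could be built and inspected as follows:

R> cpf <- CPF(fun1sims = sims1, fun2sims = sims2,
+      response = response.init)
R> plot(cpf)            # attainment function in gray-scale, with the VE
R> plotSymDevFun(cpf)   # spread of simulated fronts around the VE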
The empirical attainment function is calculated on a grid in the objective space from the CPF sets given by the conditional simulations. Taking advantage of the regularity of the grid to compute volumes, the Vorob'ev expectation is computed quickly by dichotomy. The Vorob'ev deviation is then a sum of hypervolume indicator values. The plot method applied to CPF objects displays the attainment function in gray-scale, and possibly the VE. In addition, the plotSymDevFun function can be used to display the spread of conditional simulations of Pareto fronts around the Vorob'ev expectation. See Binois et al. (2015a) for details.

Uncertainty quantification on Pareto set

The function plot_uncertainty, based on the print_uncertainty_nd function of the KrigInv package (Chevalier et al. 2014a), draws contour lines of the probability of non-domination. In dimension larger than two, contour lines are drawn for each pair of variables, representing either the average, maximum or minimum of the probability over the other variables.

The function ParetoSetDensity relies on the one hand on conditional simulations of the objectives given by the simulate function of DiceKriging, and on the other hand on a kernel density estimation of the probability of belonging to the Pareto set. It returns an object of class 'kde' from the package ks (Duong 2016). This object can be displayed in low dimension (which is done by plotGPareto), or may be used to sample points.

Search for target designs

Finally, GPareto allows the user to search for additional points corresponding to a particular target in the objective space. Given a target point (for instance, a location along the estimated Pareto front based on the Vorob'ev expectation), the function getDesign returns the closest design, that is, the design that maximizes the probability of dominating the target in the variable space. This step requires running an optimization algorithm, which can be tuned similarly to crit_optimizer using an optimcontrol argument.

Fast objectives

Motivated by applications where some objective functions are computable at a negligible cost compared to other objectives, GPareto offers an option for MOO in the case of co-existing cheap- and expensive-to-evaluate objectives. As an example, in structural mechanics one objective is typically the mass (which is directly derived from the design variables) while the other depends on the response of the system, hence involving a finite element model. To ensure compatibility with the infill criteria, fast objectives are wrapped in the 'fastfun' class, which mimics the behavior of methods such as predict or update. Predicting the value at a new point then amounts to evaluating the fast function, which returns the corresponding value with a zero prediction variance, exactly like what happens for already evaluated points. Fast objectives may be used with the cheapfn argument in easyGParetoptim, GParetoptim and crit_optimizer.

Numerical stability

Another computational challenge with kriging, discussed, e.g., in Roustant et al.
(2012), is the numerical non-invertibility of covariance matrices. It usually happens whenever design points are too close. This is especially troublesome in optimization since, when converging, points are likely to be added close to each other. In GPareto, preventing this problem is achieved with the checkPredict function. Before evaluating the selected criterion, checkPredict tests whether the new point x is too close to existing ones, with a tunable threshold that can be passed as an argument. Three options are available to define when designs are considered "too close":

• the minimal Euclidean distance in the input space, $\min_i \|x - x_i\|$;
• the ratio of the predictive variance $s_n(x)^2$ over the variance parameter, for stationary kernels;
• a minimal canonical distance coupled with the covariance kernel $k_n$.

The first two options are also used in KrigInv and DiceOptim, respectively. The first one is less computationally demanding but also less robust.

Moreover, to improve the stability of the update of already existing models with new observations, the update is possibly attempted twice. First, an update with re-estimation of the hyperparameters is performed. Then, if it has failed, a new update is tested with the old hyperparameters. If this is still insufficient to train the model with all observations, the user may try to remove some points or apply the jitter technique, which consists of adding a small constant to the diagonal of the covariance matrix to improve its condition number, see, e.g., Roustant et al. (2012). Replacing two close observations by one observation and its estimated directional derivative, as proposed in Osborne (2010), is another appealing solution.

Illustrating examples using GPareto

This section shows the different functionalities of GPareto on three classical toy examples.

Two objectives, unidimensional example

We consider a simple 1-dimensional bi-objective optimization problem from the literature, the MOP2 function (see, e.g., Van Veldhuizen and Lamont 1999), re-scaled to [0, 1], to illustrate the different steps of the procedure and the key concepts of GP-based multi-objective optimization. We first define the initial design of experiments (design.init, six points evenly spaced between zero and one) and compute the corresponding set of observations response.init, which we use to build two kriging models with DiceKriging's km function and put them into a single list (model):

R> design.init <- matrix(seq(0, 1, length.out = 6), ncol = 1)
R> response.init <- MOP2(design.init)
R> mf1 <- km(~1, design = design.init, response = response.init[, 1])
R> mf2 <- km(~1, design = design.init, response = response.init[, 2])
R> model <- list(mf1, mf2)

Then, we call the main function GParetoptim to perform seven optimization steps using the EHI criterion. Note that EHI requires a reference point as a parameter, which corresponds to an upper bound for each objective (here [2, 2]; if not provided, it is estimated at each iteration, see Section 3.2.4). The other mandatory inputs are the GP models model, the objective function fn, the number of steps (nsteps) and the design bounds (lower and upper), so the call reads along the following lines:

R> res <- GParetoptim(model = model, fn = MOP2, crit = "EHI", nsteps = 7,
+      lower = 0, upper = 1, critcontrol = list(refPoint = c(2, 2)))

We use this example to show three important features of the package:

• the possibility to access different steps of the EGO strategy,
• the use of 'fastfun' objects, and
• the post-processing functionalities.

We now turn to a second analytical problem with two input variables, the test function P1 shipped with GPareto, for which it is possible to display the true Pareto front and set using the plotParetoGrid function:

R> plotParetoGrid(P1)

The graphical output is shown in Figure 4.
As in the previous example, we first build an initial set of (ten) observations and a list of two GP models. Now, we call directly the function crit_optimizer to choose the next point to evaluate using the SUR criterion. Here, the optimcontrol input is used to choose the genoud algorithm for the criterion optimization. The critcontrol input allows us to choose the integration points for the criterion, here a regular 21 × 21 grid.

In Figure 5, we show the initial set of observations and the next point to evaluate according to each setup. For illustration purposes, the contour lines of the criteria are also computed. We see that using the 'fastfun' object (hence, additional information), the SMS criterion points clearly to a narrower region, which is in addition quite different from the ones given by the other setup. In both cases, the inner optimization loops successfully find the global maxima of the criteria surfaces. The red crosses show the optimal sampling points according to the criteria, found using genoud (left) and pso (right), respectively.

Now, we apply two steps of SUR (only two, for vignette building speed), first with two regular objectives, then with the fastfun setting:

R> sol <- GParetoptim(model = model, fn = fun, crit = "SUR", nsteps = 2,
+      lower = c(0, 0), upper = c(1, 1), optimcontrol = list(method = "pso"),
+      critcontrol = list(SURcontrol = list(distrib = "SUR", n.points = 40)))
R> solFast <- GParetoptim(model = list(mf1), fn = fun1, cheapfn = fun2,
+      crit = "SUR", nsteps = 2, lower = c(0, 0), upper = c(1, 1),
+      optimcontrol = list(method = "pso"),
+      critcontrol = list(SURcontrol = list(distrib = "SUR", n.points = 40)))

Then, we generate the post-processing output using plotGPareto. The graphical outputs are given in Figure 6. The optional parameters f1lim and f2lim are used to fix bounds for the top graphs to allow better comparison. First, we see the interest of using the 'fastfun' class when some objectives are cheap to compute: the Pareto front obtained this way is much more accurate (Figure 6, top), in particular for low values of the second objective.

Interestingly, the two Vorob'ev expectations are similar and provide a very good prediction of the actual Pareto front (Figure 4), except for the lowest values of the first objective. However, the Vorob'ev deviations (gray areas) show a higher local uncertainty for this part of the front. Overall, the Vorob'ev deviation values (394 and 296, respectively) indicate a substantially better confidence in the predicted Pareto front using fastfun.

The probability and density plots (Figure 6, second and third rows, respectively) provide complementary information on the Pareto set (input space). The probability plots indicate interesting (white) and uninteresting (black) regions, as well as uncertain ones (gray), but do not provide a clear insight into the Pareto set. Here, in both cases, the large gray areas show that additional observations may be beneficial, which is consistent with the large difference between the current Pareto front and the Vorob'ev expectation (Figure 6, top). On the other hand, the densities provide rather accurate estimates of the Pareto set, in particular for the fastfun setup.
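For reference, a call along the following lines would produce such a set of panels (a sketch; the axis limits are illustrative, and the placement of f1lim and f2lim inside the control list follows plotGPareto's documented interface):

R> plotGPareto(sol, UQ_PF = TRUE, UQ_PS = TRUE, UQ_dens = TRUE,
+      lower = c(0, 0), upper = c(1, 1),
+      control = list(f1lim = c(0, 60), f2lim = c(-40, 0)))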
Finally, one may want to extract points from the Vorob'ev expectation of the Pareto front (that is, the input realizing a particular trade-off) that have not been observed yet. To this end, the getDesign function returns the most probable design given a target in the objective space, and can be called with the fitted models, a target point, and the design bounds. Here, we have chosen a target [55, -30] that is on the Vorob'ev expectation, where the uncertainty is small but where no observation is near (Figure 6, top left). The getDesign output is a list with the value of the design (par), the value of the criterion, i.e., the probability that the objectives of the new point are not dominated by the target (value, here 90%), and the GP prediction of each objective with the associated uncertainty (mean, sd and confidence intervals). Here, the value of the second objective reaches the target with large confidence, but the first objective value is quite uncertain.

Four variables, three objectives

Here we consider the DTLZ2 optimization problem (Deb et al. 2005) with four variables and three objectives. This time we simply use easyGParetoptim to solve the problem, without having to train or prepare models:

R> res <- easyGParetoptim(fn = DTLZ2, budget = 50, lower = rep(0, 4),
+      upper = rep(1, 4))

Then, we visualize the output using plotGPareto. Note that with dimensions larger than two and more than two objectives, only the Pareto front visualization and the probability plots are available. For the latter, we changed the grid size parameter (resolution) and the number of integration points (nintegpoints) to avoid overly costly figures. The graphical outputs are shown in Figure 7.

From the definition of DTLZ2, the optimal value for both $x_3$ and $x_4$ is 1/2. This is clearly visible in the probability of non-domination graphs: the $(x_3, x_4)$ surface (bottom right) is unimodal with its maximum at (0.5, 0.5), while the other graphs show a ridge at 0.5 for one of the variables. From this representation, optimal sets for $x_1$ and $x_2$ are more difficult to observe.

Figure 1: Comparison of additive-epsilon (left, arrows) and hypervolume (right, filled areas) improvements for two possible new observations (green and blue) with respect to the current Pareto front (red points). The reference point for hypervolume computations is the black crossed circle. In terms of epsilon improvement, the green point is more interesting as it is farther away from the Pareto front, but the blue point is better in terms of volume increment.

Figure 2: Left and center: three conditional (i.e., interpolating at observations) simulations of objectives $f_1$ and $f_2$, respectively, based on GP modeling. Right: corresponding images in the objective space. Pareto sets and fronts are shown in bold.

Figure 3: Summary of the optimization procedure on the 1-dimensional example. Top: objective functions are in black, with design points in blue. The red points show the Pareto set. The right figure shows the problem in the objective space ($f_1$ vs.
$f_2$ for all x). The red line shows all the Pareto-optimal solutions of the problem and the blue line is the current Pareto front based on the six observations. Middle: GP models corresponding to both objectives based on the initial observations, and the corresponding acquisition criterion (expected hypervolume improvement) that is maximized to select the next observation. Bottom: GP models at the end of the optimization process and the Pareto front returned by the method.

Figure 4: Actual Pareto front and set for (P1).

Figure 7: Perspective view of the Pareto front (left) and uncertainty in the variable space (right) for example 3.

Table 1: Summary of the characteristics of infill criteria available in GPareto. The computational costs are given for a bi-objective example; note that the cost of crit_EHI is low in this case but increases exponentially with the output dimension. SURcontrol is a list of parameters depending on the integration strategy chosen. The scaling and additional parameters are some of the drawbacks of multi-objective infill criteria, as discussed in Wagner et al. (2010) and Svenson (2011).
ANALYSIS OF ENGLISH FOR INTERNATIONAL COMMUNICATION (EIC) RESEARCH PROJECTS CONDUCTED IN THE INDEPENDENT STUDY (IS) COURSE UNDERPINNING THE THAILAND 20-YEAR NATIONAL STRATEGY PLAN

The main focuses of this study were to identify all research projects conducted by English for International Communication (EIC) students between 2017-2019 in relation to the Thailand 20-Year National Strategy Plan, to summarize and analyze the research projects, and to suggest the scope of the research projects in the IS course. A total of 109 research projects carried out by the EIC students were analyzed. The research instrument was the Research Topic Analysis Checklist (RTAC). The statistic used in data analysis was the percentage. The findings indicated that most of the research projects fell under the strategy of development and strengthening human capital (85%), followed by social cohesion and just society (5.45%) and national competitiveness enhancement (1.09%), whereas 8.21% of the projects were ungrouped. Noticeably, innovation-technology and area-based IS projects gradually increased, along with 21st century skills, in 2019. Additionally, understanding of both the university mission and the Thailand 20-Year National Strategy Plan was related to the variety of research projects. This study can provide a practical guideline for encouraging both teachers and students to explore more diverse and interesting topic areas by bridging the gaps between the university mission, provincial policy and strategy, and the Thailand 20-Year National Strategy Plan.

INTRODUCTION

On Saturday, the 8th of October 2017, the Thailand 20-Year National Strategy Plan was announced in the Royal Thai Government Gazette, with immediate effect. The objective of the plan is to transform Thailand into a developed country by 2037. The 72-page document was drafted by panels appointed by the National Council for Peace and Order and validated by the cabinet in early June. The National Legislative Assembly approved it a month later. As a consequence, the public and private sectors have endeavored to align the policies and activities of their respective organizations with the Thailand 20-Year National Strategy Plan, particularly government units and educational institutions. Policymakers must incorporate all educational levels into their programs to serve the aforementioned national strategy. Focusing on the Thai higher education system, which produces undergraduates and postgraduates adapted to the target markets, universities must mold Thai citizens into global citizens so that they can enter both local and global markets after graduation. In other words, Thai universities directly contribute to producing learning outcomes for the global market (Kangkha & Mungsiri, 2012). Educational administrators are now more aware of the importance of designing curricula that meet the requirements of students and respond to market demands. Hence, the Thailand 20-Year National Strategy Plan focuses primarily on achieving a balance between security, economic, social, and environmental development through the participation of all sectors in global civil modelling. Undergraduates are required to conduct studies and research in their major prior to completing their bachelor's degree (Al-Hawaj et al., 2015; Uppamaiathichai & Roueangrong, 2021).
Then, the Independent Study (IS) Course generally serves as partial fulfillment of a degree in various universities, specifically in all nine Rajamangala Universities of Technology (RMUT Thanyaburi, RMUT Krungthep, RMUT Lanna, RMUT Phra Nakhon, RMUT Tawan-ok, RMUT Rattanakosin, RMUT Isan, RMUT Srivijaya), as they have incorporated research skills, educational issues, and the skills of the 21st century into the Independent Study Course for EIC students in order to sharpen their learners' research competency. Undoubtedly, the EIC Programme conducted by the Department of Foreign Languages, Faculty of Liberal Arts, Rajamangala University of Technology Srivijaya (RUTS) has been designed with research components and 21st century educational issues and skills as an integral part of the EIC curriculum, through two subjects: the Preparation for Independent Study (Pre-IS: 01315301) and the Independent Study (IS: 01315402) courses. Thus, numerous research projects are conducted in fulfillment of the EIC degree. However, following the approval of the Thailand 20-Year National Strategy Plan, the research guidelines and soft skills provided in those two subjects, Pre-IS and IS, for training the EIC students have inevitably been updated with practical components by adopting and adapting the university mission, the provincial policy and strategy, and the objectives of the Thailand 20-Year National Strategy Plan. The following research question is addressed in this study:

RQ: What are the research projects conducted by English for International Communication (EIC) students between 2017-2019 based on the Thailand 20-Year National Strategy Plan?

LITERATURE REVIEW

The review is divided into six sections that cover the 12th Social and Economic Development Plan, the Thailand 20-Year National Strategy Plan, 21st Century Skills, the RUTS Policy, the Songkhla Policy and Strategy, and the Description of the Independent Study Course. These interconnected theoretical frameworks are elaborated in the following sub-sections.

The 12th Social and Economic Development Plan

The 12th Social and Economic Development Plan (2017-2021) was established to enhance the integration of national development agendas with other development plans such as the 20-Year National Strategy, the Thailand 4.0 Strategy, and the National Sustainable Development Goals (Vimolsiri, 2017). It concentrates on science, technology, innovation, and human resource development. Its intention is to develop technology-intensive production and the digital economy, and to strengthen local economic development. The plan has been implemented around three main concerns: sufficiency economy, sustainable development, and people-centered development. It addresses 10 development strategies, including human capital development; opportunity building and reducing social inequality; strengthening the economy; green growth development; enhancing national security; public administration development and good governance; infrastructure development; promoting research and innovation; development of the local economy; and international development cooperation (Kuhavichanun, 2017; OECD/UNESCO, 2016; Theparat, 2019; Vimolsiri, 2017). The development of Thailand according to the 12th Social and Economic Development Plan was reformed with an expectation of prosperity, security, and sustainable development over the next 20 years, in accordance with the sufficiency economy principle.
Its effort is presented in the Thailand 20-Year National Strategy Plan (2017-2036). Thailand Economic Outlook (2017) stated that the goal of this long-term national strategy is to establish the guidelines and benchmarks for the country's development in order to ensure that the formulated policy will be stable and can be easily and smoothly implemented in the Social and Economic Development Plans (OECD/UNESCO, 2016). As Thailand's economic model has been established and is undergoing continuous improvement, this is a crucial factor for putting the concepts of the 12th Social and Economic Development Plan into practice, particularly in the higher education system. This provides great opportunities for university academics to raise their students' and postgraduates' awareness of production in Thailand's competitive environment.

The Thailand 20-Year National Strategy Plan

The National Strategy Secretariat Office and The Office of the National Economic and Social Development Board (2017) define the National Strategy as an emphasis on balancing the development of security, economy, society, and environment through the incorporation of all sectors in the form of a "civil state", consisting of six strategies.

The national strategy for national security

The goal of the National Security Strategy is to ensure national security and public contentment through the promotion of security, safety, independence, sovereignty, peace, and orderliness at the national, social, and community levels (National Strategic Committee, The Prime Minister's Office, 2016; The National Defence College of Thailand, 2016). This security applies to all sectors of the social environment, such as society, education, the economy, and politics, so that Thai society can advance without anxiety or vulnerability.

The national strategy for national competitiveness enhancement

The National Strategy for National Competitiveness Enhancement aims to enhance national multidimensional capacity on the basis of three concepts. Firstly, it focuses on the origins of the national economy; local identity, culture, tradition, and lifestyle; maintaining the diversity of natural resources; and pursuing multidimensional comparative advantages. This knowledge will later be integrated with available technologies and innovations to accommodate global socioeconomic contexts in the 21st century. Secondly, it concerns "Adjusting the Present" to prepare for the future through national infrastructure development in terms of transport and logistics, science, technology, and advanced digital systems, as well as environmental adjustment to facilitate future industrial and service developments (Marin, Schymik, & Tscheke, 2015). Lastly, it moves to "Creating New Future Values" to enhance entrepreneurs' capacity; develop younger generations; adjust business models to meet fast-changing market demand; implement strategies to accommodate anticipated future contexts, with a focus on learning from the past and adjusting the present for further development; and leverage governmental support to help generate income and employment, expand trading and investment opportunities in global markets, enhance the income and general well-being of Thai people, increase the number of middle-class citizens, and reduce inequality (National Strategic Committee, The Prime Minister's Office, 2016).
The strategy for human capital development and strengthening

The objective of the Strategy for Human Capital Development and Strengthening is to develop Thai citizens of all ages to become virtuous, skillful, and exemplary members of society. Its scope covers the promotion of physical, mental and intellectual qualities, adequate multidimensional development, sustainable well-being at all stages of life, public-mindedness, and social responsibility. Citizens are also expected to be frugal, generous, disciplined, and ethical, equipped with logical reasoning and globalized 21st century skills, particularly in English and third-language communication. Furthermore, citizens are encouraged to preserve local languages while being encouraged to acquire practices of lifelong learning and development. The development of these strategies will assist the nation in fostering modern innovators, thinkers, entrepreneurs, farmers, etc., based on the development of personal skills and abilities (Marin et al., 2015; National Strategic Committee, The Prime Minister's Office, 2016; Poovarawan, 2017).

The strategy for social cohesion and just society

The Strategy for Social Cohesion and Just Society aims to foster collaboration among the private sector, the general public, and local communities for strategy implementation. The public will be encouraged to participate as a mechanism to enable society-wide cooperation. This will promote the decentralization of power and responsibilities among local administrative organizations, strengthen the independent management of local communities, and create a viable and healthy economic and social environment aimed at producing quality citizens who can contribute in perpetuity to families, communities, and society. Furthermore, the government is committed to ensuring equitable and inclusive access to high-quality public services and welfare practices (The National Defence College of Thailand, 2016).

The strategy for eco-friendly development and growth

The Strategy for Eco-Friendly Development and Growth aims to achieve sustainable development through the manifestation of a healthy society, economy, and environment; the implementation of good governance; and the establishment of integrated partnerships at both national and international levels. Strategic and operational plans will be executed in accordance with a design based on geographic region, and implementation will be facilitated by promoting the direct participation of all sectors involved. Implementation will focus on fostering economic, environmental, and quality-of-life development in parallel. The focus is to create balance among these three factors to promote sustainability for future generations (Lidskog & Elander, 2012; Potts, 2010).

The strategy for public sector rebalancing and development

The Strategy for Public Sector Rebalancing and Development aims to reform and enhance the country's governmental administrative services based on the principle of "government of the people for the people and the common good of the nation and the happiness of the general public". To achieve this objective, the scale of government agencies should be proportional to their roles and missions, with the roles of regulatory agencies clearly distinguished from those of operating agencies. Furthermore, in order to operate with sound governance and for the general public's benefit, all government agencies must be results-driven and fully adaptable.
Adapting big data and digital technologies judiciously will help improve the public sector's performance in accordance with international standards. Government agencies should be open to intersectoral operations and participation from all relevant parties to ensure quick and transparent responses to public needs. All sectors of society should value honesty, integrity and frugality while resisting all kinds of malfeasance. Moreover, laws should be up to date, precise and clear, and should be enacted only when necessary, in line with international legal practices, to minimize disparity and accommodate the country's development. The country's system of justice should be equitable and non-discriminating in its judicial process, and it should operate in accordance with the rule of law (Hsu et al., 2016; Omisore, 2018).

The 21st Century Skill

The term 21st century skill refers to a broad set of knowledge, skills, work habits, and character traits that educators, school reformers, college professors, employers, and others believe are crucial for success in today's world, particularly in collegiate programs and contemporary careers and workplaces. In general, 21st century skills can be applied across all academic disciplines and throughout a student's educational, professional, and civic life. Although the specific skills considered to be "21st century skills" may be defined, categorized, and determined differently from person to person, location to location, and school to school, the term does reflect a general, if somewhat loose and fluctuating, consensus.

In the 21st century, generally accepted to be the age of globalization, the information society and knowledge-based economy have had a significant impact on educational reform, so that the key theme of education is lifelong learning. Thus, the idea of learning-to-learn skills was widely adopted in educational policy all over the world. Consequently, traditional methods of teaching and learning, such as materialism and consumerism, gave way to anti-authoritarian concepts. This turned into the learner-centered approach, which promotes autonomous learning skills and engages learners in lifelong learning (Al-Hawaj et al., 2015). The majority of language instructors and researchers consider both the theory and practice of autonomous learning. Three main rationales underpin promoting autonomy in ELT (Warschauer, 2000). Firstly, learning is a lifelong process. It is clear that the teacher cannot teach learners everything they would like to know; therefore, the best way for a teacher to serve students is to equip them with self-learning strategies. In addition, the rapid pace of technological development in the modern world has necessitated modifications to outside-of-classroom learning strategies. Consequently, fostering learner autonomy in our classes can best prepare learners for the real world, where they can exercise their strategic competence to learn what they need to know through their own experience (Al-Hawaj et al., 2015). Secondly, promoting learner autonomy is a priority due to the global availability of a vast number of English sources that can be used as learning inputs. Learners can access numerous channels of information that equip them with tools and strategies which will empower them to benefit from the opportunities of extended classrooms (Hedgcock & Ferris, 2018). The last reason relates to the essence of learning: the most effective learning has a personal learning process at its core.
This occurs when learners recognize their desires and exercise their motivation to learn. Teachers can encourage them to be actively involved through positive learning activities and a positive environment. Not all responsibility for the learning and instructing process rests with teachers. Students are empowered to assume responsibility by determining their own requirements, objectives, and evaluation criteria (Kangkha & Mahadi, 2017). This leads them to shape their fundamental learner autonomy (Kangkha, 2012).

As teachers in the 21st century, we cannot ignore online education. Online education can be defined as computer-based support for the teaching and learning process, in which learning information and study materials are acquired primarily from a computer connected to the internet. Online education thus now plays a significant role in our careers. Teachers and students can now complete their studies online with the assistance of computers (Judson, 2006). For instance, it facilitates and supports teachers' and learners' time management: they can study at any time, including revising, reviewing, and preparing study materials in advance. Furthermore, online education improves an individual's career development, giving access to more professional opportunities, such as a higher salary, promotion to a higher rank, and more effective working skills. Therefore, electronic literacy (mastery of basic technology skills) has become a prerequisite for both teachers and learners in this era. Without the necessary electronic competency, we will be educationally and professionally disadvantaged (Prasongmanee et al., 2021).

People have effectively adopted 21st century skills as learning standards for ability or competency (commonly associated knowledge, skills, work habits, and character). These include: critical thinking, problem solving, reasoning, analysis, interpretation, synthesizing information; research skills and practices, interrogative questioning; creativity, artistry, curiosity, imagination, innovation, personal expression; perseverance, self-direction, planning, self-discipline, adaptability, initiative; oral and written communication, public speaking and presenting, listening; leadership, teamwork, collaboration, cooperation, facility in using virtual workspaces; information and communication technology (ICT) literacy, media and internet literacy, data interpretation and analysis, computer programming (Iamtrakul & Klaylee, 2019; Sethakul & Utakrit, 2019; Suphapanworakul et al., 2020); civic, ethical, and social-justice literacy; scientific literacy and reasoning, the scientific method; and global awareness, multicultural literacy, humanitarianism (Rerkklang, 2018).

The RUTS Policy

The university defines the organization's core values, which are represented by the four letters RUTS and are used in place of the abbreviated name of the university. 'R' stands for "Responsibility", referring to taking responsibility for oneself and for the duty to produce professional practitioners. 'U' stands for "Unity", defined as unity and teamwork, strengthening capacity in order to upgrade and increase the production of manpower and create innovation for sustainable social development. 'T' stands for "Technology", calling for keeping up with the development of modern technology as a tool for management, educational administration, and networking.
'S' stands for "Shining", a radiance of wisdom: creating intelligence through practice and creativity, based on love and faith, in order to create innovation as the wisdom of the Thai people. Overall, the core value of RUTS is the "Innovative University". Hence, all subjects offered aim at innovative learning outcomes and impacts.

The Description of the Independent Study Course

The Pre-IS and IS courses are compulsory courses provided for fourth-year undergraduate EIC students at the Faculty of Liberal Arts, RUTS. The prerequisite course, the Pre-IS, covers the components of a term paper: choosing a topic, searching for information, reviewing related literature, ensuring correct citation and referencing, and writing references or a bibliography. The IS course covers instrument construction, data collection, data analysis, conclusion and discussion, and data presentation in spoken and written forms. It can be said that both courses place independent research study and knowledge creation at their core, focusing on students' critical thinking and research, with data presentation in spoken and written forms based on the results of their own study in the form of surveys, projects, web pages, and other formats.

The Policy and Strategy of Songkhla Province

According to the policy, knowledge of community enterprise management should be generated among the people. For example, knowledge of accounting, production technology, product design, branding, and marketing management are highlighted in the policy. Local academic institutes should play a role in closely practicing and monitoring the management of enterprises. It is clearly seen in the higher education course description that a lecturer needs to learn about and raise awareness of the aforementioned issues.

RESEARCH METHODOLOGY

In order to succeed in conducting the research, the research methodology can be described as follows:

Research Population

The research population comprised the research topics in the IS course, EIC, Faculty of Liberal Arts, RUTS, from 2017-2019, with a total of 109 topics.

Research Instruments

The Research Topic Analysis Checklist (RTAC) was used as the research instrument, which included two parts, namely the language and linguistics research theme and the Thailand 20-Year National Strategy Plan.

Research Procedures and Data Collection

There were five steps in conducting this research study, described as follows: (1) studying and analyzing the objectives of the Thailand 20-Year National Strategy Plan, the university mission, provincial policy and strategy, and the Pre-IS and IS courses; (2) designing and pre-evaluating the RTAC, semi-structured interview and questionnaire, then submitting the research tools to three experts, followed by editing based on the experts' recommendations and suggestions; (3) identifying and categorizing the 109 research topics in the RTAC. In cases of issues such as unclear topics, needed clarification from research owners, or unidentifiable topics, the researcher asked for help from the experts. Next, all identified and categorized topics in the RTAC were checked and confirmed by the experts.
Finally, the researcher and the project owners responded to the questionnaire and semi-structured interview to obtain a whole picture; (4) collecting and analyzing the data from all of the above research tools, with the researchers selecting statistics matched to the data patterns, mainly percentages; and (5) drafting and reporting all collected and analyzed data in the format of a research report.

RESULTS AND DISCUSSION

The findings showed that most of the research projects were in the strategy of human capital development and strengthening (85%), followed by social cohesion and just society (5.45%) and national competitiveness enhancement (1.09%), whereas 8.21% of the projects were ungrouped, as shown in Table 1. Based on the data shown in Table 1, the number of research projects involved in the strategy of human capital development and strengthening was extremely high, at 93 topics (85%). The second group concerned social cohesion and just society, with 6 topics (5.45%). National competitiveness enhancement accounted for only one topic (1.09%), whereas 9 research topics (8.21%) were ungrouped. The following are samples of research topics categorized based on the Thailand 20-Year National Strategy Plan.

Strategy of human capital development and strengthening

The following research topics can be identified and categorized under the strategy of human capital development and strengthening as stated in the Thailand 20-Year National Strategy Plan, as their primary focus is on improving and developing human potential in language learning. In examples 1, 2, and 3, it is clearly seen that the objectives of those research topics are to improve the samples' language-study skills by using digital technology.

Strategy of social cohesion and just society

The following research topics can be grouped under the strategy of social cohesion and just society as mentioned in the Thailand 20-Year National Strategy Plan because these research topics aimed to solve problems in communities. Therefore, the researchers needed to look back to their local communities and identify the problems, and then apply research procedures to deal with them. In examples 4, 5, and 6, it can be seen that the researchers had to link their research to their localities. However, these research topics had to be adapted to the language-study abilities of the samples.

Strategy of national competitiveness enhancement

This research topic can be grouped under the strategy of national competitiveness enhancement as mentioned in the Thailand 20-Year National Strategy Plan because it describes the related issues.

Example 7: Improving English Speaking Skills of the Cosmetic Shopkeepers at Kimyong Market by Using L.C.V Webpage (2/2019)

In example 7, the researchers needed to connect with their localities and communities by designing the L.C.V webpage to help the cosmetics traders sell their products to foreigners. However, this research topic also aims to improve the samples' language-study skills.

Ungrouped projects

The following research topics can be classified as ungrouped projects because they cannot be placed under any strategy stated in the Thailand 20-Year National Strategy Plan. In examples 8, 9, and 10, it can be inferred that the researchers investigated only basic data in their research, which rarely leads to innovative learning outcomes.
Noticeably, innovation-technology and area-based IS projects gradually increased, along with 21st century skills, in 2019. Additionally, when the subject managers and advisors understand the provincial strategy, the university mission and the Thailand 20-Year National Strategy Plan, they can steer the various research projects toward those strategies.

CONCLUSION AND SUGGESTION

This study can provide a practical guideline for encouraging both teachers and students to explore more diverse and interesting topic areas by bridging the gaps between the university mission, provincial policy and strategy, and the Thailand 20-Year National Strategy Plan. Understanding the objectives of the Pre-IS and IS courses, the university mission and policy, provincial policy and strategy, and the Thailand 20-Year National Strategy Plan has strong potential to encourage and support lifelong language learning. However, this depends on how the teacher, as the subject manager, maximizes his or her potential to guide an individual's development from the younger generation into an ongoing lifelong learning mode. This promotes sustainable development in teaching and learning in higher education. The researchers recommend that future researchers conduct a similar study to provide a practical guideline for encouraging teachers and students to explore more diverse and interesting topic areas, in order to fill the voids between the university's stated mission, provincial policy and strategy, and the National Strategy Plan of their home country.
What is Missing from the Local Stellar Halo?

The Milky Way's stellar halo, which extends to $>100$ kpc, encodes the evolutionary history of our Galaxy. However, most studies of the halo to date have been limited to within a few kpc of the Sun. Here, we characterize differences between this local halo and the stellar halo in its entirety. We construct a composite stellar halo model by combining observationally motivated N-body simulations of the Milky Way's nine most massive disrupted dwarf galaxies, which account for almost all of the mass in the halo. We find that (1) the representation by mass of different dwarf galaxies in the local halo compared to the whole halo can be significantly overestimated (e.g., the Helmi Streams) or underestimated (e.g., Cetus) and (2) properties of the overall halo (e.g., net rotation) inferred via orbit integration of local halo stars are significantly biased, because, e.g., highly retrograde debris from Gaia-Sausage-Enceladus is missing from the local halo. Therefore, extrapolations from the local to the global halo should be treated with caution. From analysis of a sample of 11 MW-like simulated halos, we identify a population of recently accreted ($\lesssim5$ Gyrs) and disrupted galaxies on high angular momentum orbits that are entirely missing from local samples and awaiting discovery in the outer halo. Our results motivate the need for surveys of halo stars extending to the Galaxy's virial radius.

INTRODUCTION

In ΛCDM cosmology, galaxies grow hierarchically, with smaller systems continuously merging into more massive galaxies (e.g., White & Frenk 1991). Our clearest view into this hierarchical assimilation comes from the stellar halo of the Milky Way, which is almost entirely comprised of debris from accretion events (e.g., Di Matteo et al. 2019; Mackereth & Bovy 2020; Naidu et al. 2020). Families of halo stars that arrived as part of the same galaxy retain similar phase-space properties (e.g., energies, angular momenta, actions; Brown et al. 2005; Gómez et al. 2013; Simpson et al. 2019) as well as shared chemical abundance patterns (e.g., Lee et al. 2015; Cunningham et al. 2022). With detailed chemodynamical data, it is challenging (e.g., Jean-Baptiste et al. 2017) but possible to determine which populations of stars were originally associated with the same dwarf galaxy merger event. From satellite galaxies currently being disrupted (e.g., Sagittarius; Ibata et al. 1994) to stellar populations fully integrated into the halo (e.g., Thamnos; Koppelman et al. 2019), dwarf galaxies in all stages of accretion can be found in and around the stellar halo, encoding our Galaxy's assembly history.

The stellar halo is difficult to study directly due to its small relative mass (≈ 1% of the Galaxy's stellar mass; Deason et al. 2019), large spatial extent, and the necessity of collecting full 6D phase-space and abundance data to reconstruct its history with high fidelity. Thanks to numerous spectroscopic surveys (e.g., RAVE, Steinmetz et al. 2006; SEGUE, Yanny et al. 2009; LAMOST, Cui et al. 2012; GALAH, De Silva et al. 2015; APOGEE, Majewski et al. 2017; H3, Conroy et al. 2019; and the Gaia mission, Gaia Collaboration et al. 2018), we have detailed chemodynamical parameters for thousands of stars in the stellar halo. However, owing to observational feasibility, the vast majority of halo stars studied in depth are located in the solar neighborhood, within a few kiloparsecs of the Sun (the "local halo").
From the local halo, it is possible to infer the properties of the more distant halo. An important technique is integrating local halo stars' orbits and analyzing their properties based on their orbital apocenters, i.e., the maximum distance they reach from the Galactic center. This method is, for example, applied to determine the net rotation of the outer halo, which may provide evidence as to its method of formation (e.g., Carollo et al. 2007; Schönrich et al. 2011; Beers et al. 2012; Helmi et al. 2017).

However, we know that the local halo is unrepresentative of the stellar halo in its entirety. For instance, stars associated with the prominent dwarf galaxies Cetus (e.g., Newberg et al. 2009) and Sagittarius are not found within the solar neighborhood; thus, any extrapolations from local samples cannot capture information about these two accreted dwarf galaxies. Apocenter analyses do not accurately capture the more radially extreme stars in an accreted dwarf. Accreted stars are most likely to be found close to their apocenters (e.g., Deason et al. 2018), meaning stars on large-apocenter orbits are least likely to be found near the Sun. Additionally, there is a degree of selection bias in determining which stars within the solar neighborhood actually belong to the stellar halo and which are associated with the Galactic disk (e.g., stars that are retrograde with respect to the disk are more likely to be classified as halo stars).

In this paper, we aim to precisely determine what differences might exist between extrapolations from the local halo and the stellar halo as a whole. We begin by constructing a composite model of the Milky Way's stellar halo, described in Section 2 and Section 3.1. Then, in Section 3.2, we study the relative representation of stars in the local halo compared to the stellar halo in its entirety for all accreted dwarfs in the composite model. We also perform apocenter analysis, after integrating the orbits of local halo stars, to compare apocenter properties with those of whole-halo stars, focusing on net prograde/retrograde motion. In Section 3.3, we compare the composite halo with the eleven halo simulations from Bullock & Johnston (2005); Robertson et al. (2005); Font et al. (2006) (hereafter BJ05). Finally, in Section 4 we summarize our conclusions and present a few final remarks.

METHOD

2.1. Simulation Sources

In this section, we describe how our composite Milky Way stellar halo is constructed, as well as the set of simulations from BJ05 we use to place the composite model in context. We note that several lower-mass ($< 10^6\,M_\odot$) dwarf galaxies (e.g., Shipp et al. 2018; Ji et al. 2020; Bonaca et al. 2021; Tenachi et al. 2022; Dodd et al. 2022; Chandra et al. 2022) are predicted (e.g., Robertson et al. 2005; Deason et al. 2016; Fattahi et al. 2019) and observed (e.g., Naidu et al. 2020; Helmi 2020; An & Beers 2021) to contribute a small minority of the stellar halo. While we do not include such systems in our composite Milky Way model, they are captured in the BJ05 simulated halos we analyze, and the general trends we report apply to them.

We use star particles from five tailor-made N-body simulations, representing GSE (Naidu et al. 2021), Sagittarius (Vasiliev et al. 2021), the Helmi Streams (Koppelman et al. 2019), Wukong/LMS-1 (hereafter Wukong; Malhan et al. 2021), and Cetus (Yuan et al.
2021). We note that for Sagittarius, we cut out the intact core and keep the remaining $\approx 3\times10^{8}\,M_\odot$ found within the disrupted tails. However, Sagittarius is shown in its entirety as a part of Figures 1 and 2.

Four of the dwarfs we consider (Kraken, Sequoia, Thamnos, and I'itoi) have not yet had dedicated simulations run. We select representative models for these four objects from the simulated dwarf galaxies in the halos from BJ05 on the basis of similarities in phase and real space. We begin by selecting the sample of dwarfs from BJ05 with similar total mass, average z-component angular momentum ($L_z$), and total energy to each of the four dwarfs. We then visually compare the $L_z$-$E_{\rm tot}$ distributions to observational data. Where multiple simulations had comparable phase-space distributions, similarities in average Galactic radius were also considered.

Later in the paper (Section 3.3) we also compare the eleven BJ05 simulated halos in their entirety to our composite Milky Way stellar halo. These halos follow cosmologically motivated accretion histories for Milky Way-mass galaxies. Each halo has on the order of 100 accreted dwarf galaxies.

The key properties of the individual simulations comprising the composite Milky Way halo are summarized in Table 1. To account for the differences in resolution between simulations, we weight each particle by distributing the total stellar mass of each accreted dwarf galaxy evenly across all of the simulation's N particles (see "Mass per Particle" in Table 1).

2.2. Dynamical Properties and Orbit Integration

We integrate orbits for our composite stellar halo particles in the default gala Milky Way gravitational potential (Price-Whelan et al. 2018). This potential consists of a spherical nucleus and bulge, a disk, and a spherical dark matter halo (Bovy 2015). For the additional eleven halo simulations, halo parameters reported in BJ05 are implemented via gala's composite potential functionality. Due to the differences between our adopted fiducial Milky Way potential and the potentials for the BJ05 simulations, the four simulated dwarfs selected to represent Thamnos, Sequoia, I'itoi, and Kraken are rescaled to retain their approximate positions in phase space: all particle velocities are scaled by the ratio of the circular velocities at their average radius in the Milky Way potential to that of their BJ05 potential ($v_{\rm circ,MW}/v_{\rm circ,BJ05}$).
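The mass-per-particle weighting and the BJ05 velocity rescaling are both simple operations; the following minimal Python sketch illustrates them under stated assumptions (the array names and all numerical values are hypothetical, not the paper's data or code):

```python
import numpy as np

# Hypothetical inputs: total stellar mass (Msun), particle count, and
# per-particle distances from the Sun (kpc) for one accreted dwarf.
m_total, n_particles = 5.0e8, 100_000
rng = np.random.default_rng(0)
dist_from_sun = rng.uniform(0.0, 60.0, n_particles)  # placeholder positions

# Weighting: spread the dwarf's total stellar mass evenly over its N particles.
weights = np.full(n_particles, m_total / n_particles)

# Local-halo mass fraction for a 3 kpc spherical solar neighborhood.
local = dist_from_sun < 3.0
print("local mass fraction:", weights[local].sum() / weights.sum())

# BJ05 rescaling: scale velocities by the ratio of circular velocities at the
# dwarf's average radius in the two potentials (values here are invented).
v_circ_mw, v_circ_bj05 = 205.0, 190.0  # km/s, illustrative only
velocities = rng.normal(0.0, 120.0, (n_particles, 3))
velocities *= v_circ_mw / v_circ_bj05
```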
We use the default Galactocentric frame from Astropy v4.2.1. This frame is right-handed, and the origin is placed at the Galactic center. Prograde motion is represented by a negative z-component of angular momentum ($L_z < 0$). Orbit integrations are performed on stars within the solar neighborhood via gala. The orbits of all stars in the solar neighborhood are integrated forward by 5 Gyr. This replicates how the outer halo is often explored in the literature via local halo samples (e.g., Carollo et al. 2007).

3.1. Overview of the Composite Milky Way Stellar Halo

The analysis in this paper centers around the composite simulated Milky Way stellar halo described in Section 2.1. We display this composite halo as a projection onto the XY and XZ planes in Figures 1 and 2, respectively. Each of the simulated dwarfs is shown in color in a mini-panel surrounding the composite view. Simulations selected from BJ05 are located along the right.

We observe three types of distributions for these simulated dwarfs. In order of increasing relaxation: (1) stream-like, including Sagittarius and Cetus; (2) spheroidal, including GSE, the Helmi Streams, Sequoia, Wukong, and I'itoi; and (3) compact spheroidal or flattened, including Kraken and Thamnos. Based on this figure, it appears that the solar neighborhood is most likely to represent stars from spheroidal-type dwarfs, due to their more homogeneous distribution. Likewise, stream-like dwarfs and the most compact of the spheroidal dwarfs are most likely to be absent from the solar neighborhood, since their physical extent does not intersect the position of the Sun.

We quantitatively explore the relative representation of accreted dwarfs in the local halo in Section 3.2, and study their angular momenta distributions in Section 3.3. We begin our analysis by considering to what degree each accreted dwarf galaxy is represented within the local halo sample. In an unbiased sample, we expect the relative representation of each accreted dwarf by mass to approximately equal its whole-halo mass fraction.

3.2. Examining Bias in the Local Halo

In Figure 3 we compare the fractional mass representation of each accreted dwarf in the local halo to that in the whole halo, for local halo radii of 3 kpc and 10 kpc; the extended 10 kpc neighborhood then includes the Galactic center. Note that in practice, it can be extremely challenging to probe the halo around the Galactic center due to extinction. This situation is changing rapidly thanks to Gaia; for instance, Rix et al. (2022) recently used the low-resolution XP spectra from DR3 to reveal a metal-poor population which is very compact around the Galactic center.

Now, we examine the properties of local halo stars at their apocenters. In the literature, apocenter analysis is used to extrapolate observations of the local halo to the distant halo: local halo stars are taken to be distant stars on an interior part of their orbit. However, there are some challenges associated with this type of analysis. Even in a homogeneous stellar halo, it is more difficult to capture stars with large apocenters in a local halo sample; these stars spend much of their orbits at large radii, and thus are unlikely to be found near the Sun.

In the upper panel of Figure 4, we show the relative amount of stellar mass at each radius approximated from the apocenters of local halo stars. We also show GSE, the stellar halo without Sagittarius, and the stellar halo in its entirety. We show non-Sagittarius halo stars since Sagittarius is the most massive component beyond about 30 kpc, and it is useful to see to what degree the results depend on this one dwarf. Similarly, GSE dominates the local halo, and it is interesting to isolate its effect.
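Before turning to the results, here is a minimal, hedged sketch of the orbit-integration step of Section 2.2 with gala and astropy; the initial conditions are invented for illustration, and this is not the paper's code or integration setup:

```python
import astropy.units as u
import gala.potential as gp
import gala.dynamics as gd

# Default Milky Way model shipped with gala (nucleus + bulge + disk + halo).
pot = gp.MilkyWayPotential()

# A made-up "solar neighborhood" star: Galactocentric position (kpc) and velocity (km/s).
w0 = gd.PhaseSpacePosition(pos=[8.1, 0.0, 0.3] * u.kpc,
                           vel=[30.0, -150.0, 80.0] * u.km / u.s)

# Integrate forward for 5 Gyr, as done for the local halo sample in this work.
orbit = gp.Hamiltonian(pot).integrate_orbit(w0, dt=1.0 * u.Myr,
                                            t1=0 * u.Gyr, t2=5 * u.Gyr)

# Apocenter and z-angular momentum, the quantities used in the apocenter analysis.
print("apocenter:", orbit.apocenter())
print("L_z:", w0.angular_momentum()[2])
```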
The most striking feature in the upper panel of Figure 4 is that local halo stars are selectively probing 10-15 kpc. This peak corresponds to the final apocenter of the GSE galaxy before it was entirely disrupted in the Naidu et al. (2021) simulations. This prediction has been recently confirmed by the H3 Survey (Han et al. 2022). Another key feature is that stars beyond about 20 kpc (the "outer halo") are very sparsely represented in the local halo relative to their true distribution.

The lower panel of Figure 4 shows the relative amount of stellar mass as a function of $L_z$ in the outer halo beyond 20 kpc. A large prograde component from Sagittarius is present in the whole halo which is missing from the local halo. Local halo stars are much more radial than the entire halo; i.e., very small values of $L_z$ are overrepresented. This is because, analogous to the Sagittarius dwarf galaxy, stars with higher angular momenta from other dwarfs are less likely to pass near the Sun.

We examine the prograde/retrograde motion of the halo in more depth by considering $L_z$ as a function of 3D Galactocentric distance, $r_{\rm gal}$ (Figure 5). The whole halo is prograde on average out to 100 kpc, and the magnitude of $L_z$ increases with radius. In the more distant halo, this is due to the contribution from Sagittarius. When we exclude Sagittarius, the net motion in the distant halo is significantly retrograde owing to GSE.

Interestingly, the local halo samples selected from the composite model predict a retrograde outer halo, the magnitude of which is consistent with Carollo et al. (2007, 2010). However, the actual non-Sagittarius halo in the model at these distances is even more retrograde. This trend is because the outermost wraps of GSE at $r_{\rm gal} \gtrsim 50$ kpc predicted by the Naidu et al. (2021) simulations, and motivated by the "Arjuna" substructure associated with GSE (Naidu et al. 2020), are extremely retrograde.

Figure 4. Histograms comparing the properties of the present-day whole halo (in black), the whole halo without Sagittarius (brown), and GSE (blue) to those of the local halo's apocenters (for a 3 kpc solar neighborhood, in purple). Top: the relative stellar mass at each Galactocentric radius. The gray dashed line at 20 kpc denotes the boundary beyond which we classify stars as being in the "outer halo". At radii greater than about 20 kpc, we see a smaller fraction of stars from local halo apocenters when compared to the entire population of halo stars. The local halo's apocenter distribution peaks at ≈15-20 kpc, corresponding to GSE's inner orbital apocenter (e.g., Naidu et al. 2021; Han et al. 2022). Bottom: $L_z$ for local halo stars with apocenters in the outer halo ($r_{\rm gal} > 20$ kpc), compared to that of the actual halo beyond 20 kpc, using the same color scheme as the upper panel. The local halo is significantly more radial, i.e., concentrated at $L_z \approx 0$, than the actual halo.

3.3. Comparison to Other Milky Way-like Stellar Halo Simulations

Our composite halo is comprised of simulations representing the nine most massive accreted galaxies, which comprise the vast majority of the stellar halo by mass. However, we note that there are several lower mass objects that we do not include in this analysis, which, owing to their small halo fractions, present only minor perturbations to our findings here. Further, there are likely several disrupted dwarfs that are yet to be discovered (e.g., Fattahi et al.
2020). To explore the full possible parameter space, we compare the nine dwarfs within our composite halo to the ≈1,000 found within the eleven Milky Way-like stellar halos from BJ05.

In Figure 6, we place our composite Milky Way stellar halo in context amongst the eleven BJ05 Milky Way-like stellar halos. Each panel corresponds to a single stellar halo, within which every accreted dwarf galaxy is plotted as a separate circle, with more massive dwarfs being larger. Points are colored by the fraction of the dwarf's mass that is represented in the local halo of radius 3 kpc; dwarfs with no stars within the local halo are shown as empty black circles.

The accreted dwarf galaxies in our composite halo, when compared to the halos from BJ05, fall within a similar band of increasing average $L_{\rm tot}$ with decreasing time unbound (the time at which each galaxy ceased to be a gravitationally bound object). Across this band, we see a gradient of representation wherein objects in the bottom right (i.e., high angular momenta, recently accreted objects) are the least likely to be present in the local halo. This pattern is consistent with Sagittarius and Cetus in our composite halo.

These trends excitingly imply that there is a large population of recently accreted disrupted dwarfs (empty circles in Figure 6) with high angular momenta orbiting almost exclusively at large distances which are yet to be discovered. These galaxies are predominantly low-mass systems, relatively unmixed, and present as coherent streams at $\gtrsim 50$ kpc that promise to provide exquisite constraints on the mass distribution (e.g., Bonaca et al. 2014; Sanderson et al. 2017; Vasiliev et al. 2021) and dynamic disequilibrium in the Galaxy (e.g., due to the Large Magellanic Cloud; Garavito-Camargo et al. 2019; Conroy et al. 2021; Petersen & Peñarrubia 2021; Erkal et al. 2021; Lilleengen et al. 2022).

To a much lesser extent, dwarfs with very low angular momenta which were accreted very early in the history of the Galaxy (≈12-13 Gyr ago) are also poorly represented in the local halo. For the few accreted dwarfs in this regime which are entirely unrepresented, their debris is concentrated around the Galactic center, and therefore beyond the solar neighborhood. However, interestingly, a local halo census manages to capture the majority of these most ancient galaxies ingested by the Milky Way.
CONCLUSIONS

The overwhelming majority of halo studies rely on solar neighborhood samples. We create a composite Milky Way stellar halo from simulations of the nine most massive disrupted dwarf galaxies to contrast the local halo with the whole halo. Further, we use the eleven stellar halo models built for Milky Way-mass galaxies from BJ05 to place our composite model in context. Our findings are as follows:

1. The local halo does not accurately represent the composition of the stellar halo; some dwarfs are excluded entirely from the sample, while others may be greatly over- or under-represented. [Fig. 3, Sec. 3.2]

2. Extrapolating the properties of the outer halo via orbital integration of local halo stars does not yield an accurate reflection of the outer halo. For example, the strong retrograde rotation of the outer halo (excluding Sagittarius) is underestimated. This is because the most distant halo stars, as well as those with the highest angular momentum, do not pass through the solar neighborhood. [Figs. 4 and 5, Sec. 3.2]

3. Comparing with the BJ05 simulations, we find that the chief class of disrupted dwarf galaxies entirely missing from the local halo is comprised of recently accreted systems with high angular momentum (e.g., Cetus). These systems are underrepresented in our current census of the halo. [Fig. 6, Sec. 3.3]

In light of the biases discussed in this work, we urge caution when interpreting local halo samples. The composite model presented in this work may already be used as a realistic approximation of the stellar halo to, e.g., model survey selection functions. We envision future work will produce self-consistent simulations including boutique models for all dwarfs.

Our findings motivate whole-halo samples that both reach to the edge of the Galaxy as well as into the Galactic center; efforts in this spirit include the 2MASS M-giant sample from Majewski et al. (2003), the Pan-STARRS RR Lyrae samples from Sesar et al. (2017) and Cohen et al. (2017), and the H3 Survey (Conroy et al. 2019). Upcoming surveys like SDSS-V (Kollmeier et al. 2017), DESI (Allende Prieto et al. 2020), 4MOST (Helmi et al. 2019), and WEAVE (Dalton et al. 2012) promise to reveal the most ancient as well as the most recent entrants into our Galaxy which are currently missing from our census of disrupted dwarf galaxies.

Figure 1. Composite Milky Way stellar halo, displayed in the XY plane. Each accreted dwarf galaxy is shown in color as its own panel and in the large, composite panel. The four panels directly to the right of the large composite plot are those selected from BJ05. In the large panel, point size and opacity are roughly scaled to the power 3/4 and 1/3 of mass per particle, respectively, with slight scaling variations for visual clarity.

Figure 2. Composite Milky Way stellar halo, displayed in the XZ plane. See caption of Figure 1.
Figure 3. Comparison of the fractional composition of the whole halo to that of the local halo for each object. The left panel shows this comparison for a local halo with radius 3 kpc, while the right shows the same for an extended local halo with radius 10 kpc. The central gray line is where all points would fall if representations were equal between the local halo and the whole halo. The upper and lower gray dashed lines show a local halo representation 2× and 0.5× the representation of the whole halo, respectively. Note that the two entirely unrepresented objects with a local halo fraction of zero, Cetus and Sagittarius, are shown as upper limits, and that the above plots are in log scale.

Figure 5. Comparison of the average $L_z$ of all halo stars (in black), halo stars excluding Sagittarius (brown), GSE stars (blue), and local halo apocenters (for a solar neighborhood of 3 kpc, in purple). One standard deviation is shaded above and below each line. The gray dotted line marks the beginning of the outer halo (20 kpc). The angular momentum distribution of the outer halo extrapolated from the local halo is significantly more radial than the actual halo.

Figure 6. Twelve panels showing properties of disrupted dwarfs from each of the eleven BJ05 Milky Way-like stellar halos, which are produced exclusively from dwarf debris, and our composite stellar halo. For each accreted dwarf galaxy, the "time unbound", or lookback time to when the galaxy ceased to be a gravitationally bound object, is plotted against the logarithm of average angular momentum. Each point, representing a distinct disrupted dwarf galaxy, is colored by the fraction of the object's mass in the 3 kpc local halo, with black, empty circles having no local halo stars. Circle sizes are proportional to the total object stellar mass to the power 3/4. In the lower right panel, each of the nine halo objects is labeled, with colors consistent with those in Figures 1, 2 and 3. Notice that objects with high angular momentum and more recent times unbound (particularly those with total angular momenta $\gtrsim 4000$ kpc km s$^{-1}$) generally have low or no solar neighborhood representation. Additionally, objects at very low angular momenta ($\lesssim 300$ kpc km s$^{-1}$) tend to have lower representation than those with moderate angular momenta.

Table 1. Summary of Accreted Dwarf Galaxies in the Milky Way Stellar Halo Composite Model. Note: Objects are listed in order of decreasing total stellar mass. Stellar masses are sourced from Naidu et al. (2022a) for all objects except for Kraken, for which we use Kruijssen et al. (2020). The total stellar mass of Sagittarius is shown in the parenthetical; we cut approximately $3\times10^{8}\,M_\odot$ about the center of the Sagittarius dwarf galaxy, as it is often considered separately from analysis of the stellar halo. $N_*$ is the number of particles from each simulation. Due to differences in simulation resolution for these nine objects, we weight our analysis by the mass per particle (the total stellar mass of each dwarf galaxy divided by the number of particles in its N-body simulation). We include the "time unbound", representing the disruption epoch for each dwarf galaxy when it ceased to be a gravitationally bound object. For the stated range on the Helmi Streams' time unbound, we adopt the median of the range reported in Koppelman et al. (2019). For Kraken, Sequoia, Thamnos, and I'itoi, simulations are selected from BJ05 based on similarities in $L_z$, total energy, and 3D Galactocentric distance.
Optimal Selling of an Asset under Incomplete Information We consider an agent who wants to liquidate an asset with unknown drift. The agent believes that the drift takes one of two given values and has initially an estimate for the probability of either of them. As time goes by, the agent observes the asset price and can therefore update his beliefs about the probabilities for the drift distribution. We formulate an optimal stopping problem that describes the liquidation problem, and we demonstrate that the optimal strategy is to liquidate the first time the asset price falls below a certain time-dependent boundary. Moreover, this boundary is shown to be monotonically increasing, continuous and to satisfy a nonlinear integral equation.

Introduction

This paper treats the problem of optimal timing for an irreversible sale of an indivisible asset under incomplete information about its drift. The asset price is assumed to follow a geometric Brownian motion X with unknown drift, and an agent who decides to sell at time t receives at this time the amount $X_t$. The objective of the agent is to choose a liquidation time for which the expected value of the discounted asset price is maximised. Such problems are important for all types of investors with insufficient knowledge of the future trend of an asset.

In the case with complete information about the model parameters of X, the corresponding optimal liquidation problem is trivial. Indeed, if the drift is larger than the interest rate, then on average the asset price grows faster than money in a risk-free bank account, and the agent should keep the asset as long as possible. Similarly, a drift smaller than the interest rate implies that the agent should liquidate the asset immediately, and instead deposit the money in the bank. However, we remark that the assumption of complete information about the parameters of X is quite strong. While the volatility of an asset, at least in principle, can be estimated instantaneously by observing the price fluctuations over an arbitrarily short time period, the drift is notoriously difficult to estimate from historical data. In fact, to achieve a decent accuracy in the estimate for the drift, one typically needs observations of the process from hundreds of years.

Instead, we allow for incomplete information by modelling the drift as a random variable which is not directly observable for the agent. Initially, the agent's beliefs about the drift are summarised by a probability distribution. As time goes by, however, he observes the asset price process, and based on these observations his beliefs may change. Naturally, if the asset price rises quickly, then the agent will consider it more likely that the drift takes the larger of the two values. Consequently, he would in this case postpone the liquidation. Similarly, if the asset falls drastically, then it is likely that the true drift is small, and the agent would be more inclined to liquidate his position early. We show below that this intuition is true; that is, there exists a boundary between the continuation region and the stopping region such that the optimal liquidation time coincides with the first time the asset price falls below this boundary. We also derive monotonicity and continuity properties of the boundary, and we show that it satisfies a nonlinear integral equation similar to the one which characterises the optimal stopping boundary for the American put option.
Related problems of liquidating an indivisible asset have been studied in [1, 2]. These papers study a risk-averse agent who wants to sell an indivisible asset with the possibility of hedging some of the risk by investing in a correlated stock market. The paper [3], see also [4], studies a problem of optimal selling of an asset, where optimality is measured by closeness between the current asset price and its ultimate maximum over the whole time period. In all the papers referred to above, the agent is assumed to have complete information about the underlying price processes. The methods we use to treat the incomplete information in our setting are standard and based on filtering theory (see, for example, [5]). An early application of these techniques is the sequential testing of two alternative hypotheses about the drift of a Brownian motion; for further details and related references, see Chapter VI.21 in [6]. Similar techniques to tackle investment problems in markets with incomplete information are also applied in [7, 8], where the problem of maximising expected utility of terminal wealth by trading in different assets is studied. The papers [9, 10] study the optimal timing for an investment under incomplete information. Mathematically, the investment problem in [9] is equivalent to the pricing of an American call option written on an asset with unknown drift. Using filtering techniques, the problem is reduced to an optimal stopping problem with complete information, but with two underlying spatial dimensions. A clever observation in [10] reduces the two-dimensional problem into a one-dimensional optimal stopping problem, but in general for a time-dependent payoff function (for one specific choice of parameters, however, the time dependence disappears and the optimal stopping problem can be solved explicitly). In the present paper, the optimal liquidation problem has a linear payoff, which implies that the problem can be reduced to a one-dimensional optimal stopping problem for a time-homogeneous diffusion with an affine payoff, regardless of which parameters are chosen. Consequently, this reduced problem is straightforward to analyse using standard methods from optimal stopping theory.

The present paper is organised as follows. In Section 2 we formulate the liquidation problem with incomplete information, and we apply filtering techniques to write it as a two-dimensional problem with complete information. Moreover, we apply a Girsanov transformation that reduces the problem to a one-dimensional optimal stopping problem for a time-homogeneous diffusion with an affine payoff function. We also provide the solution of the optimal liquidation problem in terms of the boundary of the auxiliary optimal stopping problem; see Theorem 2.5. The auxiliary optimal stopping problem is treated in Section 3, where we demonstrate the existence of a monotonically increasing and continuous optimal stopping boundary. We also show that the boundary together with the value function solves a parabolic free boundary problem. In Section 4 we derive an integral equation for the optimal stopping boundary. Finally, in Section 5 we study a related situation in which the agent seeks an optimal time to close a short position in the asset.
The Optimal Liquidation Problem and Its Solution

To model the situation with incomplete information, we assume that the asset price process X follows a geometric Brownian motion with unknown drift μ and constant volatility σ > 0. More precisely,

$dX_t = \mu X_t\,dt + \sigma X_t\,dW_t$, (2.1)

where W is a standard Brownian motion independent of μ on a probability space (Ω, F, P). Here, for simplicity, we assume that the drift μ can only take two values $\mu_h$ and $\mu_l$ satisfying $\mu_l < r < \mu_h$, where the interest rate r ≥ 0 is a constant, and the initial asset price $X_0$ is a positive constant. We consider an agent who owns the asset and wants to liquidate his position before a given future fixed time T > 0. At the initial time 0, the true value of the drift μ is not known, but we assume that the agent has an initial guess for the probabilities of the events $\{\mu = \mu_l\}$ and $\{\mu = \mu_h\}$. More explicitly, we assume that the agent's initial estimate of the probability of the event $\{\mu = \mu_h\}$ is a constant $\Pi_0 \in (0, 1)$. Accordingly, the estimate of the probability of $\{\mu = \mu_l\}$ is $1 - \Pi_0$. Furthermore, we assume that the agent can observe the value process X, but neither the drift μ nor the Brownian motion W. This is a natural assumption since, in a real-world situation, no underlying Brownian motion can be observed, and to estimate the drift with a high precision is infeasible.

Example 2.1. Consider a Brownian motion $Z_t = at + bB_t$ with drift a and volatility b (here B is a standard Brownian motion). An estimate for the drift a based on observations over the time period [0, t] would be $\hat a = Z_t/t$, and a 95% confidence interval is then given by $(\hat a - 1.96\,b/\sqrt{t},\ \hat a + 1.96\,b/\sqrt{t})$. Even if the volatility is small, say b = 0.1, in order for the confidence interval to be reasonably tight, say $(\hat a - 0.02,\ \hat a + 0.02)$, one needs approximately 100 years of observations! Moreover, the observation time that is needed grows inverse quadratically in the length of the confidence interval.

The objective of this paper is to determine when to sell the stock in order to maximise the expected wealth. More precisely, let $\{F^X_t\}_{t\in[0,T]}$ be the completion of the filtration generated by the process X. The agent then seeks an $F^X$-stopping time τ with 0 ≤ τ ≤ T for which the supremum

$V = \sup_{0\le\tau\le T} E\big[e^{-r\tau}X_\tau\big]$ (2.2)

is attained, where the supremum is taken over $F^X$-stopping times τ.

Remark 2.2. Note that in the omitted cases $\Pi_0 = 0$ and $\Pi_0 = 1$, the problem is simply a problem with complete information, and the solution is trivial. Indeed, if $\Pi_0 = 1$, then $\mu = \mu_h$ and $e^{-rt}X_t$ is a submartingale, so optional sampling yields that $V = X_0 e^{(\mu_h - r)T}$. Similarly, if $\Pi_0 = 0$, then $e^{-rt}X_t$ is a supermartingale and $V = X_0$. Also note that it is necessary to have T < ∞ in order to avoid a degenerate problem when $\Pi_0 > 0$. In fact, plugging in the stopping time τ = n and letting n tend to infinity shows that in the perpetual case we would have an infinite value V.
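The arithmetic behind Example 2.1 is easy to verify: setting $1.96\,b/\sqrt{t} = w$ for a desired half-width w gives $t = (1.96\,b/w)^2$. A quick sketch with the values from the example (this is only a numerical illustration, not part of the paper):

```python
import math

def years_needed(b: float, half_width: float, z: float = 1.96) -> float:
    # Observation time t solving z * b / sqrt(t) = half_width.
    return (z * b / half_width) ** 2

# Example 2.1: volatility b = 0.1, desired half-width 0.02.
print(years_needed(0.1, 0.02))   # ~96 years, i.e. "approximately 100 years"
# Halving the interval length quadruples the required observation time:
print(years_needed(0.1, 0.01))   # ~384 years
```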
Remark 2.3. Inserting τ = 0 into (2.2) yields the lower bound V ≥ $X_0$. Another lower bound can be found by comparing with the corresponding "European value" $X_0\big(\Pi_0 e^{(\mu_h - r)T} + (1 - \Pi_0)e^{(\mu_l - r)T}\big)$, determined by inserting τ = T in (2.2). Moreover, an upper bound for V can be found by observing that increasing $\mu_l$ to r simply gives a higher payoff. In that case it is clear that $e^{-rt}X_t$ is a submartingale, so the optional sampling theorem gives that $V \le X_0\big(\Pi_0 e^{(\mu_h - r)T} + 1 - \Pi_0\big)$.

Naturally, if $\Pi_0$ is small, then the agent is rather confident that the true drift is $\mu_l$, and he would liquidate immediately and rather deposit the money in the bank. On the other hand, if $\Pi_0$ is close to one, then he considers it likely that the drift is $\mu_h$, and he would prefer to postpone the selling. By observing the process X, however, the agent's estimates for the probabilities of the events $\{\mu = \mu_h\}$ and $\{\mu = \mu_l\}$ may change. For t ≥ 0, let

$\Pi_t = P(\mu = \mu_h \mid F^X_t)$

be the probability at time t that $\mu = \mu_h$ conditional on the observations of X up to time t. From Theorems 7.12 and 9.1 in Liptser and Shiryayev [5], the value process X and the belief process Π satisfy

$dX_t = (\mu_l + \sigma\omega\Pi_t)X_t\,dt + \sigma X_t\,d\widehat W_t, \qquad d\Pi_t = \omega\Pi_t(1 - \Pi_t)\,d\widehat W_t$,

where $\omega = (\mu_h - \mu_l)/\sigma$ and $(\widehat W, F^X)$ is a P-Brownian motion defined by

$d\widehat W_t = dW_t + \frac{\mu - \mu_l - \sigma\omega\Pi_t}{\sigma}\,dt$.

Note that the drift of X depends on Π, so the optimal stopping problem (2.2) has two underlying spatial dimensions. However, since X and Π are both expressed in terms of the same Brownian motion $\widehat W$, the number of spatial dimensions can be reduced. Indeed, in the following we follow [10] and use a Girsanov transformation to reduce the problem to a one-dimensional stopping problem. The new measure $P^*$ is defined through the density process η in (2.8); by Girsanov's Theorem, the correspondingly drift-adjusted process, which we again denote by $\widehat W$, is a $P^*$-Brownian motion. Next, define the likelihood ratio Φ by $\Phi_t = \Pi_t/(1 - \Pi_t)$. A straightforward application of Ito's formula gives

$d\Phi_t = \omega^2\Pi_t\Phi_t\,dt + \omega\Phi_t\,d\widehat W_t$,

so both X and Φ are geometric Brownian motions under $P^*$. Moreover, the filtration generated by $\widehat W$ coincides with the one generated by X, and the process η is an $F^X$-martingale under $P^*$.

Denote by $E^*$ the expectation operator with respect to the new measure $P^*$, and let τ ≤ T be an $F^X$-stopping time. Then the chain of equalities (2.16) holds, where the third equality follows by conditioning upon $F^X_\tau$ together with the martingale property of η.

Remark 2.4. Note that the measure change defined in (2.8) differs slightly from the one in [9, 10], where the new measure is instead defined through a different Radon-Nikodym derivative. To reduce the number of spatial dimensions, Klein [10] then employs a related equality, in which the constant ε defined by (2.19) appears. If ε = 0, then the obtained optimal stopping problem is time-homogeneous, and an explicit solution can be found. The measure change in (2.8) is tailor-made for the situation of a linear payoff structure considered in the current paper. Thanks to the linearity of the payoff, the optimal stopping problem on the right-hand side of (2.16) is expressed in terms of a time-homogeneous diffusion Φ with an affine payoff function independent of time. Note that this is the case not only for ε = 0 but for all possible parameter values.

In view of (2.16), we introduce the auxiliary optimal stopping problem

$\Gamma(t, z) = \sup_{0\le\tau\le T-t} E^*\big[e^{(\mu_l - r)\tau}(1 + Z_\tau)\big]$, (2.20)

where Z is a geometric Brownian motion with $dZ_u = Z_u\big((\mu_h - \mu_l)\,du + \omega\,d\widehat W_u\big)$, $Z_0 = z$, and the supremum is taken over stopping times with respect to the filtration generated by $\widehat W$. Note that

$V = \frac{X_0}{1 + \Phi_0}\,\Gamma(0, \Phi_0)$. (2.22)

Moreover, an optimal stopping time for the problem (2.20) translates into an optimal stopping time for the original problem (2.2).

In the next section we study the optimal stopping problem (2.20). In particular, we prove the existence of a continuous and monotonically increasing function b : [0, T] → [0, ∞) such that the first time Z falls below the boundary b is an optimal stopping time, that is, a stopping time for which the supremum in (2.20) is attained. The following result is then a direct consequence of relation (2.18).
Theorem 2.5. Let b be the function described above, the existence of which is proved in Proposition 3.2. Define the stopping time $\tau^*$ as in (2.23). Then the supremum in (2.2) is attained for $\tau^*$.

Remark 2.6. The optimal stopping boundary and the optimal stopping time $\tau^*$ are illustrated in Figure 1. Note that it also follows from the analysis of the auxiliary problem below (in particular (3.25) and relation (2.22)) that V is the solution of a free boundary problem. Indeed, straightforward calculations show that $V = U(0, \Phi_0)$, where the function $U(t, \varphi)$ satisfies the free boundary problem (2.25).

Remark 2.7. Note that the value V exhibits an easy monotone dependence on the model parameters $\mu_l$, $\mu_h$, r, $\Pi_0$, and T. The dependence on volatility is slightly more involved to analyse. However, it is a consequence of Corollary 2.7 in [11] that Γ is monotonically increasing in the diffusion coefficient (i.e., in $\omega = (\mu_h - \mu_l)/\sigma$), so V is decreasing in σ. The intuition behind this is that, in the case of a small volatility, learning of the true value of the drift is fast, which is beneficial for the agent.

Figure 1. The optimal stopping boundary $X_0\Phi_0^{-\beta}e^{\varepsilon t}b^{\beta}(t)$, a simulated path of the asset price X, and the optimal stopping time $\tau^*$. We used the parameter values σ = 0.3, $\mu_h$ = 0.5, $\mu_l$ = −0.3, r = 0.1, T = 0.5, $X_0$ = 10, $\Pi_0$ = 0.5.

The Auxiliary Optimal Stopping Problem

In this section we study the optimal stopping problem (2.20). This problem is similar to the one arising in the valuation of American put options; compare [12] and Chapter 2.7 in [13]. We prove the existence of a monotone and continuous optimal stopping boundary, and we show that the boundary and the value function Γ solve a related free boundary problem.

Recall that Z satisfies

$dZ_u = Z_u(\sigma\omega\,du + \omega\,d\widehat W_u) = Z_u\big((\mu_h - \mu_l)\,du + \omega\,d\widehat W_u\big)$, u ≥ 0. (3.1)

We will also use the representation $Z_u = zH_u$, where

$H_u = \exp\big((\mu_h - \mu_l - \omega^2/2)u + \omega\widehat W_u\big)$.

With this notation, we have the following regularity result.

Proposition 3.1. The value function Γ(t, z) is Lipschitz continuous in z and continuous in t.

Proof. Assume that $z_2 > z_1 > 0$, and let τ be an optimal stopping time for $\Gamma(t, z_2)$, in the sense that the supremum in (2.20) is attained for τ (such an optimal stopping time exists; see, e.g., Theorem D.12 in [13]). Then we have (3.5). Ito's formula gives that the process $Y_t := e^{(\mu_l - r)t}H_t$ satisfies

$dY_t = (\mu_h - r)Y_t\,dt + \omega Y_t\,d\widehat W_t$. (3.6)

Since the drift $\mu_h - r$ is strictly positive, $e^{(\mu_l - r)t}H_t$ is a submartingale, so the Optional Sampling Theorem gives the bound (3.7), which shows that Γ is Lipschitz continuous in z.

Now consider the decomposition (3.8)-(3.9), where we used the fact that $e^{(\mu_l - r)t}H_t$ is a submartingale. Note that F(t) → 0 as t → 0, which implies that the second term on the right-hand side of (3.8) tends to zero as $t_2 - t_1 \to 0$. A similar argument applies to the first term in (3.8), thus showing that Γ is continuous as a function of t. Since Γ is also uniformly continuous with respect to z, this finishes the proof.

Choosing the stopping time τ = 0 in (2.20), we find that Γ(t, z) ≥ G(z) := 1 + z. Define the continuation region C by

$C = \{(t, z) : \Gamma(t, z) > G(z)\}$

and the stopping region D by

$D = \{(t, z) : \Gamma(t, z) = G(z)\}$. (3.12)

According to general theory for optimal stopping problems (see, for example, [6]), the stopping time $\tau_D := \inf\{0 \le u \le T - t : (t + u, Z_u) \in D\}$ is an optimal stopping time in (2.20). Therefore, to determine an optimal stopping time, it suffices to determine the optimal stopping region D. Define

$F(z) := LG(z) - (r - \mu_l)G(z)$,

where L is the infinitesimal operator of Z. A simple calculation shows that

$F(z) = (\mu_h - r)z - (r - \mu_l)$. (3.15)

It therefore follows from Ito's formula that $e^{(\mu_l - r)s}G(Z_s)$ is a submartingale for $s \le \inf\{u : Z_u < (r - \mu_l)/(\mu_h - r)\}$. By the Optional Sampling Theorem, all points (t, z) with $z > (r - \mu_l)/(\mu_h - r)$ belong to the continuation region C.
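The sign condition in (3.15) is easy to probe numerically: for a short deterministic wait h, $E^*[e^{(\mu_l - r)h}(1 + Z_h)] - (1 + z) \approx F(z)\,h$, so waiting beats immediate stopping precisely when F(z) > 0. A small, self-contained Monte Carlo sketch (using the illustrative parameter values of Figure 1; this is not code from the paper):

```python
import numpy as np

mu_l, mu_h, r, sigma = -0.3, 0.5, 0.1, 0.3
omega = (mu_h - mu_l) / sigma
z_star = (r - mu_l) / (mu_h - r)   # F(z) > 0 exactly for z above this level

def gain_wait(z, h=0.05, n=200_000, rng=np.random.default_rng(3)):
    # E*[e^{(mu_l - r) h} (1 + Z_h)] for Z a GBM with drift mu_h - mu_l, vol omega.
    w = rng.normal(0.0, np.sqrt(h), n)
    Z = z * np.exp((mu_h - mu_l - 0.5 * omega**2) * h + omega * w)
    return np.exp((mu_l - r) * h) * (1.0 + Z).mean()

for z in [0.5 * z_star, z_star, 2.0 * z_star]:
    print(f"z = {z:.3f}: stop now = {1 + z:.4f}, wait h = {gain_wait(z):.4f}")
```

For z above the threshold the "wait" value exceeds 1 + z, and below it falls short, matching the submartingale argument above.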
A better bound for the stopping region D is easily derived by comparing Γ with the corresponding "European value". More precisely, we have that

$\Gamma(t, z) \ge e^{(\mu_l - r)(T-t)} + z e^{(\mu_h - r)(T-t)}$. (3.16)

Therefore, all points (z, t) such that $z > (1 - e^{(\mu_l - r)(T-t)})/(e^{(\mu_h - r)(T-t)} - 1)$ satisfy Γ(z, t) > 1 + z; that is, they belong to the continuation region. Note that the function $b_E(t) := (1 - e^{(\mu_l - r)(T-t)})/(e^{(\mu_h - r)(T-t)} - 1)$ is increasing and satisfies $b_E(-\infty) = 0$ and $b_E(T) = (r - \mu_l)/(\mu_h - r)$.

Proposition 3.2. There exists a monotonically increasing and right-continuous function b : [0, T] → [0, ∞), bounded above by $(r - \mu_l)/(\mu_h - r)$, such that

$C = \{(t, z) : z > b(t)\}$. (3.17)

Moreover, the supremum in (2.20) is attained for the stopping time

$\tau_D = \inf\{0 \le u \le T - t : Z_u \le b(t + u)\}$. (3.18)

Proof. For some fixed t ∈ [0, T] and z′ > z > 0, suppose that (t, z) is in C. Then there exists a stopping time τ such that (3.21) holds. Since the process $Y_t := e^{(\mu_l - r)t}H_t$ is a submartingale (compare (3.6)), the Optional Sampling Theorem gives (3.22). Therefore, (t, z′) also belongs to the continuation region C, proving the existence of a function b : [0, T] → [0, ∞) such that (3.23) holds. The fact that b only takes values smaller than $(r - \mu_l)/(\mu_h - r)$ follows from the discussion before Proposition 3.2, and the monotonicity of b follows from the monotonicity of t ↦ Γ(t, z). Finally, the right-continuity of b follows from the fact that the continuation region C is an open set, and the optimality of $\tau_D$ is already established.

Remark 3.3. In view of the discussion preceding Proposition 3.2, the optimal stopping boundary b(t) satisfies $b(t) \le b_E(t)$. A similar bound is then valid also for the optimal stopping boundary in Theorem 2.5.

Proposition 3.4. The value function Γ(t, z) satisfies the boundary value problem (3.25).

Proof. Since Γ(t, z) > 1 + z for z > b(t) and Γ(t, b(t)) = 1 + b(t) for t ∈ [0, T], it follows that the lim inf condition (3.26) holds. Thus, it remains to show the lim sup condition (3.27). For any ρ > 0, denote by $\tau_\rho := \tau^*_{t, b(t)+\rho}$ the optimal stopping time for the starting point (t, b(t) + ρ), as defined in (2.23). We have the bound (3.28), whose right-hand side involves $\rho E^*[e^{(\mu_l - r)\tau_\rho}H_{\tau_\rho}]$. We know that the optimal stopping boundary s ↦ b(s) is increasing on [t, T] and that s ↦ (ω/2 − σ)s is a lower function of the Brownian motion $\widehat W$ at zero. It follows that $\tau_\rho \to 0$ $P^*$-a.s. as ρ → 0, which tells us that

$E^*\big[e^{(\mu_l - r)\tau_\rho}H_{\tau_\rho}\big] \to 1$ (3.29)

as ρ → 0 by the dominated convergence theorem. Hence, (3.27) holds, so z ↦ Γ(t, z) is $C^1$ at z = b(t), and $\Gamma_z = 1 = G'$ there. The proof that Γ satisfies the partial differential equation in (3.25) relies on the continuity of Γ and follows along the same lines as, for example, in the case of the American put option; compare page 72 in [13]. We omit the details.

Remark 3.5. Furthermore, it can be shown that the pair (Γ, b) is the unique solution to the free boundary problem (3.25) within some appropriate class of functions. We leave this out and instead refer to Chapter 2.7 in [13], where this is shown for the American put option.

Proposition 3.6. The boundary b(t) is continuous on [0, T] and $b(T-) = (r - \mu_l)/(\mu_h - r)$.

Proof. It follows from Proposition 3.2 that b is right-continuous on [0, T]. To prove the left-continuity, define $b(T) = (r - \mu_l)/(\mu_h - r)$, and assume that the boundary b(t) has a jump at $t^* \in (0, T]$, that is, $b(t^*) > b(t^*-)$. By (3.15) and a continuity argument, there exist a δ < 0 and a one-sided open rectangle $R := (t', t^*] \times (c, d) \subseteq C$, with $b(t^*-) \le c < d < b(t^*)$, such that F ≤ δ on R (3.30). Together with (3.30), this yields (3.32). Using [15], it follows that the corresponding estimate holds for any (t, z) in the rectangle. Since both the value and the gain functions are continuous, this leads to $\Gamma(t^*, z) > G(z)$ for any z ∈ (c, d), which contradicts the fact that $(t^*, z)$ is in the stopping region. Therefore, b(t) is continuous on [0, T] and $b(T-) = (r - \mu_l)/(\mu_h - r)$.

An Integral Equation for the Optimal Stopping Boundary

In this section we derive an integral equation for the optimal stopping boundary. The derivation follows along similar lines as for the American put option; see [12].

Theorem 4.1. The optimal stopping boundary b(t) satisfies the integral equation

$1 + b(t) = e^{(\mu_l - r)(T-t)} + b(t)e^{(\mu_h - r)(T-t)} - \int_0^{T-t}\Big[(\mu_l - r)e^{(\mu_l - r)u}N(d_1(u)) + (\mu_h - r)b(t)e^{(\mu_h - r)u}N(d_2(u))\Big]du$, (4.1)

where

$d_{1,2}(u) = \frac{1}{\omega\sqrt{u}}\Big(\ln\frac{b(t+u)}{b(t)} - \omega\sigma u \pm \frac{\omega^2 u}{2}\Big)$

(with the plus sign for $d_1$ and the minus sign for $d_2$), and $N(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x}e^{-y^2/2}\,dy$ is the cumulative distribution function of the standard normal distribution.
Proof. Fix t ∈ [0, T] and $Z_0 = z \in (0, \infty)$. Applying Ito's formula to $e^{(\mu_l - r)s}\Gamma(t + s, Z_s)$ and taking the expected value gives (4.2), where G(y) = 1 + y and $F = LG - (r - \mu_l)G$ as before. The use of Ito's formula can be motivated by similar arguments as for the American put option; cf. [14]. Straightforward calculations give

$e^{(\mu_l - r)(T-t)}E^*\big[G(Z_{T-t})\big] = e^{(\mu_l - r)(T-t)} + z e^{(\mu_h - r)(T-t)}$. (4.3)

The integrand on the right-hand side of (4.2) is computed in (4.4); letting $d_1$ and $d_2$ be as in the statement of the theorem and evaluating (4.2) at z = b(t), where Γ(t, b(t)) = 1 + b(t), then yields (4.1).

Remark 4.2. Using local time-space calculus, it was proved in [14] that the optimal stopping boundary of the American put option is the unique solution to the corresponding integral equation. Using similar techniques, uniqueness for (4.1) can be established. We omit the details.

Closing a Short Position

In this section we consider an agent with a short position in the asset who seeks an optimal time to close the position. To study this situation we formulate the optimal stopping problem

$v = \inf_{0\le\tau\le T} E\big[e^{-r\tau}X_\tau\big]$, (5.1)

where the infimum is taken over $F^X$-stopping times τ. All the assumptions about the model are as described in Section 2. By exactly the same arguments provided above, we find that

$v = \frac{X_0}{1 + \Phi_0}\,\gamma(0, \Phi_0)$, (5.2)

where γ is defined through the auxiliary optimal stopping problem

$\gamma(t, z) = \inf_{0\le\tau\le T-t} E^*\big[e^{(\mu_l - r)\tau}(1 + Z_\tau)\big]$, (5.3)

and the infimum is taken over stopping times with respect to the filtration generated by $\widehat W$. Moreover, an optimal stopping time for the problem (5.3) translates into an optimal stopping time for the original problem (5.1).

The following results parallel those for the optimal liquidation problem for a long position, and the proofs are omitted. There exists a monotone boundary b(t) for this problem such that the infimum in (5.3) is attained for the stopping time

$\tau_D := \inf\{0 \le u \le T - t : Z_u \ge b(t + u)\}$. (5.7)

Corollary 5.2. The infimum in (5.1) is attained for the corresponding stopping time τ̂. The optimal stopping boundary b(t) satisfies the integral equation

$1 + b(t) = e^{(\mu_l - r)(T-t)} + b(t)e^{(\mu_h - r)(T-t)} - \int_0^{T-t}\Big[(\mu_l - r)e^{(\mu_l - r)u}\big(1 - N(d_1(u))\big) + (\mu_h - r)b(t)e^{(\mu_h - r)u}\big(1 - N(d_2(u))\big)\Big]du$,

with $d_1$ and $d_2$ as in Theorem 4.1.

Remark 5.3. Unlike the optimal liquidation problem for a long position, the problem of this section also makes sense to study with an infinite horizon.
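An integral equation of the type in Theorem 4.1 can be solved numerically by backward recursion, starting from the terminal value $b(T) = (r - \mu_l)/(\mu_h - r)$ and working toward t = 0, solving a scalar fixed point at each time step. The sketch below assumes the reconstructed form of (4.1) shown above, uses the Figure 1 parameter values, and employs a semi-linearized iteration common in American-option boundary computations; it is a hedged illustration, not the paper's algorithm:

```python
import numpy as np
from scipy.stats import norm

mu_l, mu_h, r, sigma, T = -0.3, 0.5, 0.1, 0.3, 0.5  # Figure 1 parameter values
omega = (mu_h - mu_l) / sigma
m = 200
t = np.linspace(0.0, T, m + 1)
dt = T / m
b = np.empty(m + 1)
b[m] = (r - mu_l) / (mu_h - r)  # terminal boundary value b(T)

for i in range(m - 1, -1, -1):
    tau = T - t[i]
    A, B = np.exp((mu_l - r) * tau), np.exp((mu_h - r) * tau)
    bt = b[i + 1]  # warm start from the previously computed boundary point
    for _ in range(10):  # semi-linearized fixed-point iteration
        u = t[i + 1:] - t[i]
        d1 = (np.log(b[i + 1:] / bt) - omega * sigma * u
              + 0.5 * omega**2 * u) / (omega * np.sqrt(u))
        d2 = d1 - omega * np.sqrt(u)
        # Rectangle-rule quadrature of the two integral terms in (4.1).
        IP = np.sum((mu_l - r) * np.exp((mu_l - r) * u) * norm.cdf(d1)) * dt
        IQ = np.sum((mu_h - r) * np.exp((mu_h - r) * u) * norm.cdf(d2)) * dt
        # Solve 1 + b = A + b*B - IP - b*IQ linearly in b, holding d1, d2 fixed.
        bt = (A - 1.0 - IP) / (1.0 - B + IQ)
    b[i] = bt

print("b(0) =", round(b[0], 4), "  b(T) =", round(b[m], 4))
```

Consistent with Propositions 3.2 and 3.6, the computed boundary should come out increasing in t and approach $(r - \mu_l)/(\mu_h - r)$ at T.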
“Promotion of Ukraine’s export to China: priorities and institutional framework” In the context of neo-protectionism and in terms of WTO membership, regulatory mechanisms for promoting the national producer and the country’s external expansion require an institutional basis. This paper primarily aims to explore the resource and institutional component in promoting Ukraine’s exports to the Chinese market and to identify the level of effectiveness of Ukraine’s export promotion system based on a combinatorial approach, which includes the calculation of quantitative indicators of foreign trade in the form of international production and marketing cooperation and the estimation of qualitative parameters of export promotion effectiveness. The empirical findings indicate the following: high dynamism of increasing mutual trade volume; enlargement of trade flow asymmetry caused by the negative trade balance of the Ukrainian economy; a limited list of commodity groups of Ukrainian exports in mutual trade with China with stable relative advantages; dominance of low-value-added commodities among export priority groups; and the absence of a beneficial effect of such a factor as “long-term partnerships” in the mutual trade flow. The paper reveals that the national export promotion system in Ukraine can be characterized by low efficiency and strong potential for growth. The authors emphasize the importance of intensifying the projects and mechanisms of financial and investment support for exporters while increasing the level of their innovative orientation. Prospects for further research in this area are as follows: the assessment of macroeconomic effects from the introduction of export promotion tools for the national economy of countries of origin of goods and importing countries; and the detection of anticompetitive risks in the implementation of selective support programs for exporters.

INTRODUCTION

In the context of sluggish global GDP dynamics (3.0% in 2018 and a decline to 2.6% in 2021 according to estimates by World Bank experts (World Bank, 2019)) and disappointing forecasts of high risks of new recessions, economic competition among countries for the possibility to promote their own national product both to traditional and new markets is aggravated. The importance of this issue is constantly growing due to the fact that, for a large number of countries of the world community, export is considered one of the most important drivers of national economic development. The traditional neoliberal approach to justifying export priorities and ensuring their competitiveness is based on achieving price advantages and improving the quality and innovation of the exported products. In the authors’ view, in the context of the XXI century, such an interpretation of the content of national export strategy priorities is not enough, because now intangible, i.e., institutional, factors of economic growth should play a decisive role. Therefore it is crucial to justify these new mechanisms for developing export potential through the creation of effective institutional support systems for exports.
The novelty of the research focus reflects the authors’ view on the processes of modernization of the institutional matrix of export promotion under the existing conditions of international competition and the strengthening of neo-protectionist approaches in the protection of national markets, adapted to the conditions of a particular country and in the context of its priority trade partnerships. These research objectives have not only theoretical but also practical content.

Ukraine is currently in active processes of geospatial reorientation of its foreign trade flows, caused by a sharp collapse of mutual trade with the Russian Federation and the strengthening of the European vector of cooperation. The weight of the export component in the economic growth of the country is confirmed by the following statistics: the level of external openness in terms of the export share in the country’s GDP reached 45.2% in 2018; thus, the task of increasing the presence of domestic exporters in foreign markets is being updated for the Ukrainian economy. Despite the intensification of strategic partnership with European countries, one of the priority directions in Ukraine’s foreign economic policy is the development of strong ties with the countries of the Asian region, which according to UNCTAD statistics in 2018 accounted for 36.60% of world GDP (of which 15.20% is for China), and 41.16% and 38.01% of world exports and imports, respectively.

The recommended shift in the focus of institutional support for Ukrainian exports towards Asian markets is important due to:

• existing non-tariff restrictions on export supplies from Ukraine to the EU, regulated by the DCFTA, which are not in line with the export potential of Ukraine and hamper the possibilities for its intensive use;

• the need for diversification of export supplies in order to safeguard against the risks of falling foreign exchange earnings in case of deterioration of economic, political or diplomatic conditions of cooperation with a trading partner;

• the more stable and high level of economic dynamics in terms of GDP, domestic consumption and therefore demand in Asian countries, first of all the People’s Republic of China (PRC), the Republic of Korea, Singapore and Thailand, as well as a number of other countries, which allows them to demonstrate increased capacity for import purchases.

The importance of the issues raised in the article is not limited to Ukrainian business and government regulation. Asian market entry is extremely difficult for many exporters because of the striking socio-cultural, political, legal and economic characteristics of the environment. In many segments of the world markets for goods and services, foreign companies are facing a strong presence of Chinese enterprises. In many ways, this strength is caused by an effective system of state support for their exports. Thus, increasing product expansion into the PRC market is now a challenge not only for business, but also for the institutional capacity of many countries of the world. Empirical investigation using the example of Ukrainian-Chinese trade may be useful in improving national export development programs.
In 2018, the share of Ukrainian exports to Asian countries amounted to 29.06%, of which China accounted for 4.65%. Despite the relatively low figures for China, the prospects of trade and economic partnership between Ukraine and China seem to be unprecedented; therefore, an important research objective for the authors was to evaluate the resource and institutional component in promoting Ukraine’s exports to the PRC market, to identify the level of effectiveness of Ukraine’s export promotion system, to disclose the current state of export support tool implementation in Ukraine, and to determine ways to strengthen it.

LITERATURE REVIEW

The authors of the paper share the widespread but not dominant expert approach concerning the paradoxical impact of globalization on national economic development. It is a complex interaction of multidirectional processes: on the one hand, internationalization and integration (Cruz, 2014), and on the other hand, regionalization, localization and fragmentation of the world space (Naisbitt, 1994).

Considering the problems of institutional support for export promotion in Ukraine, which is, unfortunately, an outsider to globalization processes, it is quite logical to raise the issue of the expediency, scale and consequences of increasing the export orientation of the national economy. The analysis of recent empirical studies indicates the inconsistency of the results obtained: despite the positive impact of export expansion on economies through the mechanism of comparative advantage (Leonidou et al.), the estimated effects vary considerably across studies.

These questions are currently at the epicenter of the search for both expert theorists and practitioners. In this context, the authors hope that the approach, which combines institutional and neoliberal views on the problem of selecting and evaluating regulatory instruments in foreign trade, will find interest among professionals.

METHOD

In order to evaluate the effectiveness of the use of institutional mechanisms for promoting exports to the PRC market, it was proposed to use an integrated approach that covers:

• methods and their quantitative indicators, which allow one to assess the success of the results of regulation of the country’s foreign trade activity, in particular the relative trade preference index, the export efficiency coefficient, and the export/import structure by ABC- and XYZ-analysis, calculated on the basis of official public data of the national statistical services of Ukraine;

• the Integrated Export Promotion Regulatory Efficiency Index, calculated on the basis of expert assessments of the whole export support system or its separate components provided by exporting companies and experts, as well as data from the national statistical services of Ukraine.

The argumentation in favor of the combinatorial approach proposed by the authors is as follows: first, the involvement of quantitative indicators of the evaluation of bilateral trade performance allows one to identify the resource and technological efficiency of a country’s export potential and the relations of its trade partnership, paying tribute to the neoliberal understanding of international trade; second, the appeal to qualitative assessments of the export support system’s performance reflects the authors’ desire to propose a more comprehensive approach to identifying the newest factors for ensuring the competitive status of countries in the world markets for goods and services, taking into account the strengthening role of institutions, including those of state and non-state origin.
RESULTS

The logic of the disclosure of the scientific problem raised in the present study leads to revealing the main components of export promotion that have now been developed in Ukraine, presenting the results of calculations according to the methods and indicators proposed by the authors that directly or indirectly certify the effectiveness of the national export promotion system, and defining sectoral priorities for deepening the trade and economic relations between Ukraine and the People’s Republic of China.

Consequently, the system of export promotion is a complex of measures by the state, represented by its regulatory bodies or organizations, as well as by non-state institutions, intended to simplify the process of selling national products by stimulating exporting companies within the country and providing them with practical assistance outside the country of origin. Such measures often include consultations on local legislation and the practice of conducting business in the country of a potential foreign business partner, providing export credits and guarantees on favorable terms, information support, etc. As a result, such state and non-state support of exports is aimed primarily at strengthening the competitiveness of national enterprises in international markets and creating favorable conditions for the promotion of national business interests in foreign markets.

To address these challenges, priority sectors in Ukraine have been identified, including the food industry, in particular the production of food ingredients, ready-made food and organic products. The food ingredients in the document include canned food, fresh slicing, frozen and cooked vegetables, juice concentrates, pastes and any products that are ready to eat or intended for further processing. Ready-made food products recognized as a priority for export include confectionery, poultry, beverages, sunflower oil, honey, juices, tomato paste, canned vegetables, and dairy products.

The export strategy of Ukraine also included the creation of an Export Credit Agency in 2018 and the implementation of a “one-stop shop” project for border crossing of goods (works, services) by 2020. Within the framework of the export strategy, the top 20 markets were identified for Ukrainian exporters which, given the right choice of forms and tools for working with them, are able to show fairly fast results; among these, besides EU countries, are Egypt, India, Belarus, Georgia, Moldova, Iran, Saudi Arabia, China, Japan, the USA, Canada, Switzerland and Bangladesh.

Another document we have outlined, the “Unified Integrated Strategy for Agriculture and Rural Development for 2015-2020”, identifies 10 key priorities, including access to international markets, trade policy and export promotion. The task of shifting focus from the raw materials markets to the export of processed products is set. Indicators of the implementation of the Strategy in the field of trade policy are: increasing the volume of Ukrainian agricultural exports by 20% until 2020; the final stage of talks on five free trade areas with new countries by 2020; and the creation of a system of export financing and lending until 2020.
The Strategy proposes a number of other export support instruments: providing export market access channels for small and medium-sized producers under a simplified procedure; creation of the brand “Product of Ukraine”; work on the recognition of the equivalence of control systems and compliance; the strengthening of the role of economic departments of embassies and the introduction of the institute of sales representatives (on the basis of joint public and private funding) in the countries most promising for trade; and initiatives to prepare manufacturers for participation in international exhibitions and to determine the list of recommended exhibitions.

As already mentioned, the Export Strategy of Ukraine defined the Chinese market as a priority in terms of the national economic interests of Ukraine. At the same time, it should not be assumed that the Ukrainian-Chinese trade and economic partnership dates only from the period of the Strategy’s adoption. Of course, both the initial conditions and the scale of the state export support policy in these countries are different. But there is every reason to expect the existing advantages of Ukraine in trade with the PRC, attained even amid an almost total absence of export promotion in previous years, to strengthen substantially already in the medium term.

The assessment of the level of relative advantages of Ukraine and the People’s Republic of China in mutual trade, which is related to the indirect resultant efficiency of export support, was carried out by formula (1), where $RA_{ij}$ is the demonstrative relative advantage of country i in commodity j; $EX_i$ and $IM_i$ are the exports and imports of country i; and $EX_{ij}$ and $IM_{ij}$ are the exports and imports of commodity j of country i.

The indices of relative advantages in trade between Ukraine and the PRC in 2011-2017, calculated on the basis of the Table 1 data, and the indicators of the sectoral structure of Ukraine’s foreign trade are presented in Table 2. From Table 2 it can be concluded that in recent years the list of export commodity groups of Ukraine which had stable relative advantages in mutual trade with China (indices of relative advantages positive and greater than 1) is extremely small. These commodity groups include: fats and oils of animal or vegetable origin; mineral products; and products of plant origin. The obvious problem is that all these groups belong to so-called low-tech exports, which are characterized by a low level of added value when implementing the relevant foreign trade agreements. At the same time, according to the calculations, for the PRC the highest level of relative advantages in exporting products to Ukraine is shown by the following commodity groups: shoes, hats, umbrellas; various industrial goods; textile materials and textiles; and products of the chemical and related industries.

Analyzing the broader range of external expansion of Ukrainian exporters, extending it not only to the Chinese market but to the entire Asian continent, the highest-priority commodity groups of Ukrainian exports for the Asian market are: fats and oils of animal or vegetable origin, grain crops, ferrous metals, remains and wastes of the food industry, products of the flour-grinding industry, tobacco and industrial tobacco substitutes, seeds and fruits of oilseeds, sugar and sugar confectionery, milk and dairy products, eggs, honey, and cocoa and products from it (Table 3).
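The display of formula (1) is not reproduced above. A common way to operationalize a relative advantage index from exactly these four quantities is a Vollrath-style export/import ratio normalized by the country total, with values above 1 read as an advantage; the sketch below uses that form purely as an assumption for illustration, with hypothetical numbers rather than the paper’s data:

```python
import pandas as pd

# Hypothetical bilateral trade values, million USD (not the paper's data).
df = pd.DataFrame({
    "commodity": ["fats and oils", "mineral products", "textiles"],
    "EX_ij": [900.0, 600.0, 20.0],   # Ukraine's exports of commodity j to China
    "IM_ij": [15.0, 80.0, 400.0],    # Ukraine's imports of commodity j from China
})
EX_i, IM_i = 2_200.0, 2_800.0        # Ukraine's total exports/imports with China

# Assumed ratio form of formula (1): RA > 1 signals a relative advantage in j.
df["RA"] = (df["EX_ij"] / df["IM_ij"]) / (EX_i / IM_i)
df["advantage"] = df["RA"] > 1.0
print(df)
```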
When analyzing the dynamics of relative advantages in trade between Ukrainian enterprises and counterparties from the People's Republic of China, it is usually difficult to isolate the role of institutional export promotion through direct quantitative indicators. In view of this, we propose to assess the regulatory effectiveness of promoting Ukraine's exports to the People's Republic of China using a scoring method based on data from Ukraine's statistical services, the authors' calculations, expert assessments and an analysis of the regulatory framework governing international trade and economic cooperation (Table 4). Over the past 10 years, Ukraine has made some progress in institutional support for export promotion, as evidenced by a 1.3-fold growth of the integral index we propose. This was largely due to deeper diversification of foreign trade between the two countries through an expanded list of traded commodity groups, the introduction of state support programs for domestic exports, the creation of new and the improvement of existing regulatory frameworks, and the slow but steady increase in the innovativeness of Ukrainian exports.

ABC and XYZ analysis was used to understand the dynamics of the sectoral structure of foreign trade between Ukraine and the People's Republic of China. The ABC analysis took into account each commodity group's share of Ukraine's exports to the PRC, accumulated these shares, and distributed the groups into three classes: high weight in total exports (cumulative share of 80%), average weight (the next 15%) and low weight (the remaining 5%).

The XYZ analysis complements this by determining how stable the PRC's demand for Ukrainian exports is for each commodity group: which groups are supplied on the basis of long-term partnerships, and which are exported irregularly, indicating the absence of established partnerships with Chinese counterparties. Within this approach, identifying the products in assortment group X is important, because they provide the basic, stable volume of Chinese demand for Ukrainian goods and should therefore receive the maximum focus of regulatory support and assistance from domestic state and non-state institutions. Commodity groups Y and Z are less significant, but identifying them makes it possible to prioritize regulators' efforts. The summary results of the ABC and XYZ analysis of foreign trade between Ukraine and the PRC by commodity groups, under the codes of the Ukrainian Classification of Goods of Foreign Economic Activity (UCG FEA), are presented in Table 5. Unfortunately, the "perfect combination" AX was not achieved for Ukrainian exports year after year.
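As an illustration of the classification logic described above, the following minimal Python sketch assigns ABC classes by cumulative export share (80/15/5) and XYZ classes by the coefficient of variation of yearly shares. The 10% and 25% cut-offs for X, Y and Z are common conventions and an assumption here, since the paper does not state its thresholds; all data are invented for illustration.

```python
import numpy as np

# Invented yearly export shares (%) per commodity group, 2011-2017.
exports = {
    "fats and oils": [28, 30, 33, 35, 32, 34, 31],
    "mineral products": [40, 20, 35, 10, 45, 25, 36],
    "plant products": [15, 25, 12, 30, 8, 22, 18],
    "machinery": [2, 3, 2, 4, 3, 2, 3],
}

def abc_class(totals):
    """Rank groups by total share and split at cumulative 80% / 95%."""
    order = sorted(totals, key=totals.get, reverse=True)
    grand = sum(totals.values())
    cum, classes = 0.0, {}
    for g in order:
        cum += totals[g] / grand * 100
        classes[g] = "A" if cum <= 80 else ("B" if cum <= 95 else "C")
    return classes

def xyz_class(series, x_cut=10.0, y_cut=25.0):
    """Classify demand stability by coefficient of variation (%)."""
    s = np.asarray(series, dtype=float)
    cv = s.std() / s.mean() * 100
    return "X" if cv <= x_cut else ("Y" if cv <= y_cut else "Z")

totals = {g: sum(v) for g, v in exports.items()}
abc = abc_class(totals)
for g, series in exports.items():
    print(f"{g}: {abc[g]}{xyz_class(series)}")
```

A group printed as "AX" would combine high weight in exports with stable year-to-year demand, which is the "perfect combination" the paper refers to.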
In general, during 2011-2017 the commodity group with the highest export stability in the structure of Ukraine's exports was "Machines, equipment and mechanisms; electrical equipment". The least stable were exports of mineral products, although their share of total export earnings was quite significant (36.16% in 2017). A clearly negative fact is that the total share of commodities with low export stability is almost 6 times higher than the share of goods whose exports are persistent and which usually reflect long-standing, strong partnerships.

The situation with imports of goods from China to Ukraine is slightly different (Table 6). During 2011-2017, the sectoral structure of imports from the People's Republic of China to Ukraine was characterized by a rather high level of conservatism and stability. One exception was the commodity group "Other industrial goods". The least stable was the import of "Machines, equipment and mechanisms; electrical equipment" (an AZ combination in 2017).

A comparative analysis of the efficiency of Ukraine's exports to the countries of Asia, the EU and other regions of the world has shown the high competitiveness of agricultural products in the Asian market (Table 7). It should be noted that the communication and marketing component of the export promotion system is gaining importance nowadays. In view of the peculiarities of the PRC's marketing environment, the following key features should be considered when promoting Ukrainian products on the Chinese market:
• Chinese producers are dominated by narrow production specialization, with elements of copying on the basis of mass production;
• brand awareness is more favorable compared with the indicators of economic freedom and financial capacity;
• Chinese consumers show a high level of interest in new products and innovations in commodity policy;
• when forming communication programs for the PRC market, including advertising messages, it is desirable to use imagery that combines accuracy, restraint and mentality with Asian creativity.

Another peculiarity of the Chinese market is that Chinese citizens display a high degree of reliance on brands and public opinion. Selling an obscure, unknown product in China is difficult. Producers who intend to sell their products in China, especially retail consumer goods, should therefore register their trademark with the China Trade Mark Office (CTMO), which prevents infringements and builds a long-term reputation for the brand.

Institutional support of export activity is a prerequisite for success amid the contradictory trends of trade liberalization and growing latent protective measures. The approaches and methods proposed by the authors for assessing the effectiveness of the export support system can serve as a methodological basis for substantiating and modifying national programs for developing countries' export potential.

CONCLUSION

In the context of intensifying international competition and recession in regional and global markets for goods and services, the institutional capacity of states to support their own exports is an integral part of modern national economic development strategies.
The quantitative analysis covering 2007-2017 revealed the following: high dynamism in the growth of mutual trade between Ukraine and the PRC; widening asymmetry of trade flows caused by the negative trade balance of the Ukrainian economy; a limited list of Ukrainian export commodity groups with stable relative advantages in mutual trade with China; the dominance of low-value-added commodities among priority export groups; and the absence of a beneficial effect of long-term partnerships on mutual trade flows. The assessment of the qualitative parameters of Ukraine's export promotion system indicates its low efficiency and strong potential for growth.

The recommended priorities for applying measures and instruments of state influence on export activity are: activating projects and mechanisms of financial and investment support for exporters; strengthening information and communication support; and diplomatic assistance for Ukrainian exporters in foreign markets.

Prospects for further research in this area include: assessing the macroeconomic effects of introducing export promotion and incentive tools on the national economies of countries of origin and importing countries; detecting anticompetitive risks in the implementation of selective support programs for exporters; and substantiating the most effective components of export promotion for countries with low financial capacity for export support programs.

Table 2. Indices of relative advantages in the mutual trade between Ukraine and China by product groups in 2011-2017. Source: Calculated by the authors.
Table 3. Priority commodity groups of Ukraine's exports for sale in the Asian market by the criterion of relative advantages. Source: Calculated by the authors.
Table 4. Integral index of regulatory effectiveness of promoting Ukraine's export to the PRC in 2008 and 2017. Source: Calculated by the authors.
Table 5. Matrix of the structure of Ukraine's exports to the PRC in 2011-2017 based on ABC, XYZ analysis. Source: Calculated by the authors.
Table 6. Matrix of the structure of Ukraine's imports from the PRC in 2011-2017 based on ABC, XYZ analysis.
Table 7. Comparative analysis of export efficiency of certain commodity groups of Ukraine in the EU, Asia and other regions of the world in 2012-2017. Source: Calculated by the authors.
Enrichment: Journal of Multidisciplinary Research and Development

In the rapidly advancing technological era, the integration of technology in education becomes crucial. However, research conducted by Askal (2015) reveals a gap between schools' culture and leadership styles and the demands of the digital era. The study found that 93% of school principals acknowledge the importance of digital leadership, indicating awareness of the need to adapt to digital leadership in this era. Digital leadership, digital culture, and employees' digital capabilities influence the sustainability of organizational performance, particularly in the education sector. This research therefore aims to examine the impact of digital leadership, digital culture, and employees' digital capabilities on the sustainability of organizational performance, especially in educational organizations. The study employs survey and structural analysis methods, with a case study conducted at the MPK KAJ middle schools in South Jakarta and South Tangerang. The results show significant positive influences between digital leadership and digital culture, digital leadership and employees' digital capabilities, digital culture and organizational performance sustainability, employees' digital capabilities and organizational performance sustainability, and digital leadership and organizational performance sustainability, with digital culture and employees' digital capabilities mediating the influence of digital leadership on the sustainability of the schools' organizational performance.

INTRODUCTION

In a rapidly changing era, advanced technologies are swiftly transforming the teaching and learning landscape. In an ideal teaching and learning setting, technology should be integrated so that students can use new technologies to support their learning, just like other learning tools. However, as per Askal (2015), a gap exists between current school leadership and the digital culture and leadership styles the era requires. Present school leaders experience a knowledge and application gap as they navigate leading digital advancements and implementing these practices in the school learning environment. The study found that 93% of school principals reported an awareness of digital leadership, indicating recognition of the move toward digital leadership in the digital era. Yet they face limited opportunities to implement digital leadership due to inadequate training and technological infrastructure for using technology to support learning and school improvement. This signifies a lack of understanding of rapidly advancing technology.

Further research by Crosby (2020) emphasizes digital competence as a crucial skill for students to learn and work effectively in an increasingly digital world. To excel in education, students must possess the skills to use various technologies adeptly and effectively across different spaces, places, and situations. Digital competence not only aids students in personal engagement and communication but also in their future workplace success.
Supporting this, Gonzalez's study (2016) reveals that employers are paying increasing attention to the digital skills of their current and potential employees, as nearly every organization relies on these skills when transitioning between levels of maturity models. The connection between Gonzalez's and Crosby's research is elucidated in Pagani's study (2006), which highlights the importance of including digital competence in teaching and learning activities. One way to identify digital capabilities in these activities is through assessment rubrics developed by academic coordinators.

Taking this context into consideration, the research questions were formulated and a research model was established, as illustrated in Figure 1. This research thus aims to explore the influence of digital leadership, digital culture, and employee digital competence on organizational performance sustainability, particularly in educational organizations. It is hoped that this study will assist educational institutions in implementing aspects that enhance school performance, particularly in the current digital era.

METHOD

In this study, primary data sources consisted of stakeholders from the MPK secondary schools (SMP) located in South Jakarta and South Tangerang, hereafter referred to as respondents. The population comprised stakeholders of the MPK KAJ secondary schools in South Jakarta and South Tangerang, Indonesia, and a total of 111 individuals were included in the sample. The questionnaire respondents were distributed among various stakeholder groups: 74% teachers, 10% administrative staff, 9% vice principals, and 7% school principals. They displayed diverse lengths of work experience: less than 5 years (25%), 5-10 years (12%), 10-15 years (12%), and more than 16 years (51%). The respondents also covered a wide range of age groups: under 25 years (9%), 26-35 (25%), 36-45 (16%), 46-55 (27%), and above 55 (23%). The gender distribution was 44% male and 56% female. Regarding the highest level of education, the respondents had varied backgrounds: 1% with a high school diploma or equivalent, 2% with a Diploma (D3), 86% with a Bachelor's degree (S1), 11% with a Master's degree (S2), and 1% with other qualifications.

This study employs a survey method to examine sociological relationships between variables. It uses quantitative research techniques to analyze the psychological connections within the sampled population, employing structural methods including Path Analysis and Structural Equation Modeling (SEM). This method aims to generalize from individual cases to a broader understanding.

The study focuses on the dependent variable of organizational performance sustainability and the independent variables of digital leadership, digital culture, and employee digital capabilities. Likert scales measure these variables' indicators. Data collection techniques encompass:

1. Interviews with school principals to gather necessary information.
2. Documentation analysis, examining supporting data or documents.
3.
Questionnaires distributed to school principals, vice principals, teachers, and educators to gauge responses regarding digital leadership, organizational culture, employee digital capabilities, and organizational performance sustainability within the context of the MPK KAJ secondary schools in South Jakarta and South Tangerang.

The variables were measured by assigning indicators to each variable. For organizational performance sustainability, 8 indicators were adopted from Magd, H. and Karyamsetty, H. (2020). For digital leadership, 9 indicators were drawn from Avolio, Bruce J. and Dodge, George E. (2000) and Wesly, J., et al. (2021). For digital culture, 9 indicators were taken from GorjianKhanzad, Z. and Ali A. Gooyabadi (2022) and Ferdian, A. and Annisaa Rahmawati (2019). Lastly, for employee digital capabilities, 10 indicators were adopted from Korhonen, J. J., and Asif Q. Gill (2018) and Balyk, N., et al. (2020).

Instrument validity and reliability were tested. Pearson's product-moment correlation was used for validity testing; if the correlation value exceeds the critical value (at the 0.05 significance level), the item is deemed valid for further analysis. Table 1 indicates that each indicator is valid to proceed to the next stage of analysis. For modeling, Structural Equation Modeling (SEM) with Partial Least Squares (PLS) is employed. PLS-SEM was chosen due to the small sample size and non-normally distributed data (Hair et al., 2014), as well as its accuracy in evaluating latent constructs (Tajpour, 2021). SEM's first-order model is used, establishing direct relationships between latent variables and measurement variables (Rashid, 2020). Using the indicators and variables described above and aligning them with the research questions formulated earlier, a PLS-SEM model can be constructed as depicted in Figure 2.
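To make the instrument-testing step concrete, here is a minimal Python sketch of the two checks described above: item validity via Pearson item-total correlation and reliability via Cronbach's alpha. The 0.70 alpha cut-off follows the text; the correlation-based validity rule is simplified, and all data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Invented Likert-scale responses: 111 respondents x 8 indicators (values 1-5).
X = rng.integers(1, 6, size=(111, 8)).astype(float)

def item_total_correlations(items):
    """Pearson correlation of each item with the total score."""
    total = items.sum(axis=1)
    return np.array([np.corrcoef(items[:, j], total)[0, 1]
                     for j in range(items.shape[1])])

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

r = item_total_correlations(X)
print("item-total r:", np.round(r, 2))
print("Cronbach's alpha:", round(cronbach_alpha(X), 3),
      "(reliable if > 0.70, as in the text)")
```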
Table 3 presents the results of the outer loading analysis. As the table shows, several values fall below 0.7, meaning that certain indicators fail to adequately represent their corresponding construct variables. To improve the accuracy of the model, as per Hair (2014), these indicators should be eliminated; consequently, the indicators with outer loading values below 0.7 were removed. After removing these indicators, the resulting outer loading values are displayed in Table 4. The table shows that the outer loading values of each remaining indicator now exceed 0.7, indicating that the indicators used effectively represent their respective construct variables. These indicators were therefore used for further analysis. Following the outer loading analysis and the selection of relevant indicators, the next step is the Average Variance Extracted (AVE) analysis. According to Hair (2014), AVE tests the convergent validity of a construct variable by examining the average squared outer loadings of the indicators connected to it. A recommended AVE value is above 0.5, signifying that the construct explains more than 50% of the variance in its constituent indicators. The AVE results for each construct variable in the model can be observed in Table 5. The AVE values for each variable exceed 0.5, so the model conforms to the criteria set by Hair (2014). Incorporating the revised indicators, Figure 3 illustrates the final model employed for the inner model analysis and hypothesis testing in this research.

Figure 3. Final PLS-SEM Model

Inner model (structural model) testing is used to assess the formulated hypotheses. This testing consists of two parts: a test of the significance of direct influences and an examination of indirect influences, or mediation. Table 6 presents the significance tests for the influences of each variable. The indirect (mediation) effects were calculated and their outcomes are presented in Table 7.
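The indicator-filtering and AVE steps just described can be reproduced in a few lines of Python. The loadings below are the construct-A values from the paper's Table 3 excerpt; the 0.7 and 0.5 thresholds are the Hair et al. (2014) cut-offs cited in the text.

```python
import numpy as np

# Outer loadings of construct A's indicators (from the Table 3 excerpt).
loadings = {"A1": 0.725, "A2": 0.667, "A3": 0.563, "A4": 0.741,
            "A5": 0.793, "A6": 0.760, "A7": 0.696, "A8": 0.688}

# Step 1: drop indicators whose outer loading is below 0.7 (Hair, 2014).
kept = {k: v for k, v in loadings.items() if v >= 0.7}
print("retained indicators:", sorted(kept))  # A1, A4, A5, A6

# Step 2: AVE is the mean squared loading of the retained indicators;
# values above 0.5 indicate acceptable convergent validity.
ave = np.mean([v ** 2 for v in kept.values()])
print("AVE:", round(ave, 3))  # about 0.570, above the 0.5 threshold
```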
RESULTS AND DISCUSSION

In Table 6, Variable B influences Variable C with a positive coefficient and a P-value below the significance level (0.05). Hence, Variable B, digital leadership, significantly affects Variable C, digital culture, and H1 is accepted. Digital leadership, rooted in social influence processes, embodies the initiation of sustainable change by leveraging technology for organizational success. It signifies a gradual transformation led by leaders to instigate continuous change within an organization. This transformative process demands the establishment of specific habits that become ingrained as the organization's new culture. As digital culture is an integral part of digital transformation, the most efficient route to cultivating it within an organization lies in the adoption of digital leadership. Through technological utilization and digital strategies, digital leadership shapes a robust and progressive digital culture within the entrepreneurial community. Practicing digital leadership empowers leaders to inspire, guide, and educate community members to make the most of digital technology, fostering technology adoption, online collaboration, and innovation. The digital culture fostered by digital leadership profoundly influences communication, participation, and technology utilization within the community, ultimately contributing to overall growth and success.

Moving on to H2, whether digital leadership significantly affects employees' digital capabilities, we examine the influence of Variable B on Variable D. Variable B affects Variable D with a positive coefficient and a P-value below 0.05, indicating that digital leadership significantly influences employees' digital capabilities. Therefore, H2 is accepted. Digital leadership can be understood as setting direction, influencing others, initiating continuous change through access to digital information, and building relationships to anticipate significant shifts for an organization's future success. This definition highlights the capacity of digital leadership to drive change by leveraging digital information and influencing employees to enhance their digital skills. Leaders can establish rules and activities to improve employees' digital capabilities. Thus, digital leadership significantly and positively affects employees' digital skills. Practicing digital leadership, leaders support employees in developing digital competencies by providing guidance, training, and resources. Leaders focused on digital leadership empower employees to use digital technology effectively, enhance productivity, adapt to technological changes, and contribute to the digital development of entrepreneurial communities.
For H3, the variables of interest are digital culture (C) and organizational performance sustainability (A). In the table, the influence of Variable C on A has a positive coefficient with a P-value of 0.037, below 0.05. This indicates that digital culture significantly impacts organizational performance sustainability, so H3 is accepted. Digital culture signifies a participatory approach that leverages technology in human interaction, making it crucial for organizations to integrate technology for growth. By fostering a digital culture, organizations can enhance their performance sustainability, especially in the digital age. This is reflected in improved outcomes, such as educational success in the case of this study. Digital culture embodies values, norms, and practices that embrace technology across organizational operations, enabling better collaboration, innovation, and adaptability. In community entrepreneurship, a robust digital culture fosters efficient technology adoption, effective collaboration, and long-term performance sustainability.

To test H4, we consider the impact of employees' digital capabilities (D) on organizational performance sustainability (A). In the table, the influence of Variable D on A has a positive coefficient with a P-value of 0.005, below 0.05. Hence, employees' digital capabilities significantly affect organizational performance sustainability, leading to the acceptance of H4. Digital capabilities among employees can be understood as their ability to integrate and use data and information technology in their activities and responsibilities to enhance value for beneficiaries. This aligns with the concept of organizational sustainability, which involves meeting present needs without compromising the needs of future generations. In educational organizations, social performance serves as a crucial indicator of organizational sustainability. Schools can improve their social performance by enhancing the digital skills of their employees. With stronger digital capabilities, employees can better understand their students' needs and adapt their teaching styles accordingly. Consequently, students' abilities improve, contributing to overall organizational sustainability. Hence, the digital capabilities of employees significantly and positively impact organizational sustainability.

Subsequently, for H5, we focus on the impact of digital leadership (B) on organizational performance sustainability (A). In the table, the influence of Variable B on A has a positive coefficient with a P-value below 0.05. This confirms that digital leadership significantly affects organizational performance sustainability, and H5 is accepted. Digital leadership can be defined as a social influence process, mediated by information technology, that drives organizational performance improvement. By embracing digital leadership, leaders can alter the direction of an organization, which in turn can change its performance; when the change of direction is appropriate and effective, it can significantly enhance the organization's performance.

According to Table 7, Variables C and D act as mediators linking Variables B and A.
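Before the individual mediation hypotheses are examined, it may help to sketch how such an indirect effect is typically quantified: as the product of the two path coefficients, with a bootstrap confidence interval. The Python sketch below is a generic illustration on invented data, not the paper's PLS-SEM procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 111  # sample size matching the study

# Invented standardized scores: B -> C -> A, plus a direct B -> A effect.
B = rng.normal(size=n)
C = 0.6 * B + rng.normal(scale=0.8, size=n)
A = 0.4 * C + 0.3 * B + rng.normal(scale=0.8, size=n)

def indirect_effect(B, C, A):
    """Product of the B->C slope and the C->A slope (controlling for B)."""
    a_path = np.polyfit(B, C, 1)[0]
    # b path: coefficient of C in the regression A ~ C + B + intercept.
    X = np.column_stack([C, B, np.ones_like(B)])
    b_path = np.linalg.lstsq(X, A, rcond=None)[0][0]
    return a_path * b_path

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)  # resample respondents with replacement
    boot.append(indirect_effect(B[idx], C[idx], A[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(B, C, A):.3f}, "
      f"95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")
```

An interval that excludes zero corresponds to the significant mediation effects reported for H6 and H7 below.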
In this context, H6 is tested by examining the role of digital culture (C) as a mediator between digital leadership (B) and organizational performance sustainability (A). In the table, Variable C acts as a mediator with a positive coefficient and a P-value of 0.041, below the significance level of 0.05. This demonstrates that digital culture significantly mediates the influence of digital leadership on organizational performance sustainability, validating H6. Digital culture can influence organizational performance sustainability and is itself influenced by digital leadership. It becomes a leader's responsibility to manage and shape this digital culture within the organization. Digital culture, in turn, affects organizational performance sustainability because establishing new, positive habits within the organization can enhance its performance; the organization's performance is influenced by the habits practiced by its employees. Digital culture therefore serves as a mediator between digital leadership and organizational performance sustainability: leaders can establish new policies to transform the existing digital culture, and these habits subsequently affect the sustainability of organizational performance.

Next, for H7, the focus shifts to employees' digital capabilities (D) as a mediator between digital leadership (B) and organizational performance sustainability (A). In Table 7, Variable D acts as a mediator with a positive coefficient and a P-value of 0.01, below the significance level of 0.05. Hence, employees' digital capabilities significantly mediate the influence of digital leadership on organizational performance sustainability, leading to the acceptance of H7. As explained above, digital leadership significantly influences employees' digital capabilities, and these capabilities in turn have a positive and significant impact on the sustainability of organizational performance. The digital capabilities of employees are therefore a crucial aspect of an organization's success and sustainability. These capabilities can be developed when leaders adopt a digital leadership approach: by establishing policies that enhance these capabilities, such as training and workshops, organizations can achieve significant improvements in performance. Hence, employees' digital capabilities mediate the influence of digital leadership on organizational performance sustainability.

CONCLUSION

Positive and significant relationships were identified between digital leadership and digital culture, indicating that leaders adopting digital leadership styles can cultivate new habits that evolve into the organization's digital culture. Similarly, a positive and significant association was found between digital leadership and employee digital capabilities, suggesting that digital leadership has the potential to enhance employees' digital proficiency.
Furthermore, the study revealed positive and significant connections between digital culture and organizational performance sustainability. This affirms that integrating a digital culture within an organization can lead to improvements and, subsequently, enhance organizational performance sustainability. Similarly, the analysis demonstrated a positive and significant relationship between employee digital competence and organizational performance sustainability: improved digital competence among employees positively affects student capabilities and consequently contributes to better organizational performance sustainability.

Moreover, the research uncovered a positive and significant correlation between digital leadership and organizational performance sustainability. This highlights that the implementation of digital leadership can significantly influence an organization's trajectory and lead to improved sustainability in its performance. The study also indicated that digital culture mediates the relationship between digital leadership and organizational performance sustainability, suggesting that leaders' policies and interventions can transform the existing digital culture, which, in turn, impacts organizational performance sustainability.

Additionally, the research found that employee digital competence mediates the link between digital leadership and organizational performance sustainability. This implies that the digital capabilities of employees play a mediating role in how digital leadership influences the organization's performance sustainability.

In conclusion, this study sheds light on the interplay between digital leadership, digital culture, employee digital capabilities, and organizational performance sustainability within the context of the MPK KAJ secondary schools. Implementing a digital culture and embracing digital leadership have the potential to enhance employee skills and drive the sustained performance of educational organizations. Looking ahead, future research could explore the influence of budget allocation for digitalization and expand the scope to larger educational organizations to draw more comprehensive insights about sustainable performance practices in the Indonesian context.

Figure 2. Initial PLS-SEM Model

To assess the suitability of the model, an outer model test is required. Outer model testing evaluates the relationship between construct variables and their corresponding indicators by measuring the outer loadings, which are the regression outcomes of each indicator variable on its respective construct variable. Hair et al. (2014) stipulate that acceptable outer loading values should exceed 0.7, while values below 0.4 are considered inadequate. Table 3 presents the results of the outer loading analysis.

Table 3. Outer Loading Values of Indicator Variables (excerpt, construct A):
A1 0.725; A2 0.667; A3 0.563; A4 0.741; A5 0.793; A6 0.760; A7 0.696; A8 0.688

Table 1. Results of Validity Testing

Reliability was tested using the Cronbach's Alpha formula, with values > 0.70 indicating reliability. Table 2 displays that every indicator is reliable and can proceed to the subsequent stage of analysis.

Table 2. Results of Reliability Testing
Table 5. AVE of Construct Variables in the Model
Structured report data can be used to develop deep learning algorithms: a proof of concept in ankle radiographs

Background: Data used for training deep learning networks usually requires large numbers of accurate labels. These labels are usually extracted from reports using natural language processing or by time-consuming manual review. The aim of this study was therefore to develop and evaluate a workflow for using data from structured reports as labels in a deep learning application.

Materials and methods: We included all plain anteroposterior radiographs of the ankle for which structured reports were available. A workflow was designed and implemented in which a script automatically retrieved, converted, and anonymized the radiographs of cases where fractures were either present or absent from the institution's picture archiving and communication system (PACS). These images were then used to retrain a pretrained deep convolutional neural network. Finally, performance was evaluated on a set of previously unseen radiographs.

Results: Once implemented and configured, completion of the whole workflow took under 1 h. A total of 157 structured reports were retrieved from the reporting platform. For all structured reports, corresponding radiographs were successfully retrieved from the PACS and fed into the training process. On an unseen validation subset, the model showed satisfactory performance, with an area under the curve of 0.850 (95% CI 0.634-1.000) for detection of fractures.

Conclusion: We demonstrate that data obtained from structured reports written in clinical routine can be used to successfully train deep learning algorithms. This highlights the potential role of structured reporting for the future of radiology, especially in the context of deep learning.

Electronic supplementary material: The online version of this article (10.1186/s13244-019-0777-8) contains supplementary material, which is available to authorized users.

Key points: Data from structured reports can greatly facilitate the development of deep learning algorithms. Fully automated workflows for training deep learning networks can easily be implemented. A proof of concept for the detection of ankle fractures is presented and achieves satisfactory performance.

Background: Recently, the application of computer vision techniques, and especially deep learning, to evaluate plain radiographs or computed tomography exams has been extensively discussed in radiology [1][2][3]. Consequently, in the last few years, numerous groups have published papers describing promising applications of deep learning algorithms in radiology. Various studies have reported deep neural networks developed and trained to perform automated diagnosis or triage of plain radiographs. While some of these relied on manual review and labeling of the images to establish a valid ground truth (e.g., detection of humerus fractures [4], hip fractures [5], and wrist fractures [6]), others relied on automatically extracting image labels from the written radiological reports associated with the imaging study [7][8][9]. As radiological reports are usually written in a prose-like, non-standardized form, techniques such as natural language processing (NLP) are needed to analyze the reports and extract meaningful labels to be used in further training of the neural network.
Compared to manual review and labeling, the latter approach is much more efficient and scalable, thus enabling larger datasets to be compiled for the subsequent training of the neural networks. However, as was shown, e.g., in the case of the CheXNet paper [10], this also has the potential to introduce inaccuracies and uncertainties that are inherent to variations in NLP [11]. With continued advances in computer vision and deep learning technologies and algorithms, it seems that one of the only remaining challenges is the availability of accurately labeled datasets. It would therefore be desirable if data from clinical routine could be used to provide reliable labels without the need for potentially error-prone NLP or time-consuming manual labeling by human expert readers. One way to make data from clinical routine more readily usable could be structured reporting (SR), which has long been proposed by various radiological societies [12][13][14]. Structured reporting aims at standardizing report content and language, thus making the report more machine readable. Some studies have demonstrated the usage of data extracted from structured reports for the calculation of various statistics [15,16]. This approach could also be useful in the context of training deep learning algorithms. Therefore, the aim of this study was to propose an example workflow in which data from structured reports is used to extract accurately labeled training data from an institution's picture archiving and communication system (PACS). As a proof of concept, we show this by using these data to retrain a pretrained convolutional neural network (Inception V3) for the detection of fractures in ankle radiographs.

Materials and methods: Starting in late 2017, structured reporting was introduced at our tertiary care institution. Various IHE MRRT-compliant report templates were created and installed in a dedicated open-source reporting platform [17,18]. The reporting platform had previously been developed at our institution using only standard web technologies and could be accessed from the clinical workstations by the reporting radiologists. To facilitate its usage in clinical routine, it was fully integrated in the radiologists' workflow and connected to the institution's radiology information system (RIS) and PACS. All radiologists received in-person training on how to use the reporting platform and the templates and could contact the developer at any time if problems occurred. At the time of reporting, the radiologists were able either to use the standard RIS reporting engine, including speech recognition, or to start reporting in the structured reporting platform. Usage of the reporting platform was neither enforced nor incentivized. To ensure the correct patient and study context, the RIS constructs a URL call that passes the relevant patient and study information to the reporting platform. Upon completion of the radiological report in the platform, the structured report data were stored in the platform's database as discrete information, thus allowing for easily machine-readable reports.

Use case and patient selection: During the initial phase of setting up the structured reporting platform, various report templates had been created. While most templates focused on computed tomography or magnetic resonance imaging, some templates pertaining to conventional radiography were also developed. As the basis for this proof of concept, we chose to focus on a rather simple use case using only plain radiographs.
For the purpose of this study, we chose to use data from cases where plain radiographs of the ankle were obtained in the context of trauma (fracture/no fracture) and for which structured reports had been written using the above-mentioned platform (Fig. 1). All reports were written between August 2017 and September 2018. As radiologists were free to decide whether to use the structured reporting template or to write a conventional narrative report, the studies included were not consecutive.

Structured reporting and image retrieval: The "cx.ankle.trauma" template contained four dropdown menus in which the reporting radiologist could record whether fracture, joint effusion, soft tissue swelling, or other relevant findings were present or absent (Fig. 2). Apart from that, the template allowed free-text entry for the corresponding findings. The source code of the template can be found in Additional file 1. Upon completion of a report, the report content was stored in the reporting platform's dedicated database, where each report field corresponds to a specific column in the pertinent table. Consequently, accessing the column "select_fracture" of the "cx.ankle.trauma" table returned either "yes" if a fracture was present or "no" if it was absent. We therefore created a combination of MySQL queries to retrieve the relevant information from the corresponding database tables. To facilitate manipulation of these data, we designed a workflow in RapidMiner 9.0 (RapidMiner, Cambridge, MA, USA) that allowed for more intuitive visualization of the data manipulation (Fig. 3). In the first step, all relevant patient and study data were queried, and the reports created with the "cx.ankle.trauma" template were retrieved. Through joining and filtering operations, it was possible first to build a complete table in which all reports were associated with the relevant patient and study information (local patient ID and DICOM Study Instance UID). Subsequently, this table was split into separate lists for reports with and without reported fractures. These lists were then exported as comma-separated value (CSV) files so that, in a second step, a small Python (Python Software Foundation, http://www.python.org) script could be used to query and retrieve the corresponding images from the institution's PACS and export them as JPEG files into two separate folders (one for images with fractures and one for images without fractures).

Convolutional neural network retraining workflow: The main focus of this study was not the training of a convolutional neural network (CNN) but rather the workflow of using label data from IHE MRRT-compliant report templates. We therefore chose to limit this part of the study to a simple retraining of a preexisting CNN on a binary classification task. A TensorFlow model of the Inception V3 architecture [19], pretrained on ImageNet, was used, and its last fully connected layer was retrained. We used the following standard hyperparameters: cross-entropy loss function, learning rate 0.01, batch size 32, and 2000 training steps. As the deep learning part was not the main focus, we did not attempt to optimize these settings but chose reasonable hyperparameters known to result in adequate learning performance while also allowing for training on a standard graphics processing unit (GPU).
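To make the retraining step concrete, here is a minimal sketch of the setup described above using modern tf.keras APIs rather than the authors' original TensorFlow retraining script. The folder layout ("data/train/fracture", "data/train/no_fracture") mirrors the export step, but the paths, the epoch count, and the use of Keras preprocessing layers are assumptions; the paper specifies cross-entropy loss, a 0.01 learning rate, batch size 32, and 2000 training steps.

```python
import tensorflow as tf

# Folder layout assumed to match the retrieval step described above:
# data/train/fracture/*.jpg and data/train/no_fracture/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(299, 299), batch_size=32)

# Light augmentation, roughly mirroring the +/-10% scaling/cropping,
# brightness shifts, and horizontal flips reported in the study.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomZoom(0.1),
    tf.keras.layers.RandomBrightness(0.1),
])

# Pretrained Inception V3 backbone; only the new head is trained.
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg")
base.trainable = False

inputs = tf.keras.Input(shape=(299, 299, 3))
x = augment(inputs)
x = tf.keras.applications.inception_v3.preprocess_input(x)
x = base(x, training=False)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # fracture prob.
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)  # the paper counts 2000 steps, not epochs
```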
Nevertheless, various random data augmentation techniques, such as scaling (+10%), cropping (-10%), brightness (+10%), and horizontal flipping, were used to improve generalizability, as the dataset was rather limited. Before retraining the CNN, 8% of all images were selected randomly and set aside from the training set to be used for validation of the final model. To compensate for unbalanced group sizes in the training dataset, the images from the smaller group were upsampled to the number of the larger group. The computation was performed on a single server (Intel Core i7-8700K CPU, 64 GB DDR RAM, NVIDIA GeForce GTX 1080 Ti GPU). The model's predictions and corresponding probabilities on the final validation set were recorded in a CSV file and used for calculating the diagnostic performance of the model.

Statistical analysis: All statistical analysis was done using R 3.4.0 with RStudio 1.1.463 [20]. Receiver operating characteristic (ROC) analysis was performed using the pROC package [21]. To calculate sensitivity, specificity, and positive and negative predictive values, the operating point that yielded the highest Youden's index was selected from the ROC analysis.

Results: As usage of structured reporting for plain radiographs remained limited during the study period (August 2017 to September 2018), only 157 of 1186 ankle radiographs (13.2%) had been reported on using the structured reporting platform, by 16 different radiologists (mean reports per radiologist 10 ± 4). For all of these 157 patients, anteroposterior ankle radiographs were available in the PACS and could be retrieved successfully. Mean patient age was 43.0 years (SD = 21.0 years; 76 female and 81 male). Of the 157 images (129 with fractures, 28 without apparent fractures), 144 were included for final training and analysis; the remaining 13 (eight with fractures, five without apparent fractures) were set apart as the final validation set. To compensate for the unbalanced group sizes in the training group, the images showing no fracture were upsampled (i.e., copied repeatedly) during retraining of the network to balance out the images showing fractures. Once implemented and configured, completion of the whole workflow (from database query to final evaluation of model performance) took under 1 h (retraining of the CNN accounted for around 35 min). The learning curve of the training process is shown in Fig. 4.

Discussion: Structured reporting has been described as the fusion reactor for radiology [22]. Various previous studies have shown that structured reports provide numerous advantages in clinical routine [23][24][25][26][27][28][29][30]. In this paper, we provide further evidence that structured reporting could play a crucial role in advancing developments in the field of radiology. Especially with the recent advent of deep learning techniques, there is a strong need for machine-readable, accurate image labels [2,31,32]. While many of the past challenges regarding computational power and technology for deep learning have been solved over the past few years, the main hurdle preventing radiology from leveraging the potential of these technologies has been a lack of large datasets with high-quality labels. This is mostly due to the fact that radiological reports are still, in most cases, written as unstructured narrative text. Extraction of information from such free-text reports is time-consuming and depends on the completeness and the quality of the reports.
Individual variations in language and style can lead to inconsistencies and uncertainties that could potentially impair the quality of the dataset. Therefore, researchers need to rely on manually reviewing and labeling data, which can be time-consuming and is therefore difficult to implement on a large scale. Theoretically, these challenges could be overcome by using natural language processing (NLP) to extract the relevant information from the radiological reports. However, this can potentially introduce a relevant number of incorrect labels into the dataset, since the sensitivity and specificity of such systems are generally only around 90% [33]. Our proposed workflow addresses these challenges since it utilizes data from structured reports generated during routine clinical practice. Thus, no additional workup of the dataset is needed to provide reliable and standardized labels for the training of deep learning algorithms. Considering that only a rather small fraction (13.2%) of all reports was created using the structured reporting templates during the study period, it can be assumed that the performance of the trained model could have been substantially improved if more radiographs had had corresponding structured reports. Certainly, the most important challenge radiologists face when using structured reporting is the notable change in workflow. In our case, the structured reporting platform required the user to use the mouse and the keyboard to input the report, thus preventing them from working with the PACS viewer while composing the report. Better integration of structured reporting tools (e.g., with speech recognition and tighter PACS integration) could help to improve the adoption of structured reporting in clinical routine.

The present study has some limitations. First, we did not re-evaluate the reports for diagnostic accuracy. Second, and certainly more importantly, the dataset used for the purpose of this study was rather small and unbalanced. There are several options to address such imbalances. In our case, we opted to apply oversampling of the underrepresented class (no fracture), as we did not want to discard any useful data. However, this approach has a certain tendency to overfit, since some examples are used multiple times. To alleviate this effect, we applied data augmentation techniques to the training dataset (scaling, flipping, cropping, etc.). Nevertheless, for a clinically applicable algorithm, other solutions to the class imbalance problem should be considered, such as undersampling, cost-sensitive learning, or other more advanced techniques [34][35][36]. Performance of the algorithm therefore needs to be viewed as only preliminary and not clinically useful, especially since a selection bias toward simple cases, in which the radiologists were more comfortable using the structured reporting platform, cannot be ruled out. However, this was beyond the intended scope of this study. The proposed workflow nevertheless clearly demonstrates and underlines the value of structured reporting in the context of machine learning and artificial intelligence and is in line with the key research priorities defined in an intersociety roadmap for foundational research on artificial intelligence in medical imaging [37,38].
Especially with the possibility of linking specific parts of the report content to ontologies such as RadLex, the IHE MRRT profile provides an interoperable way to allow for easier pooling of datasets across various institutions while maintaining reliable label data [18,39].

Conclusion: A widespread implementation of structured reporting will, of course, have a significant impact on the radiologist's daily work and may not be applicable to all cases and all clinical scenarios. Nevertheless, our study further highlights the need to push toward more structured reporting in clinical routine, as it seems the most practical approach to obtaining high-quality report data for various future developments. Users should therefore urge vendors to provide practical solutions that allow for easy access to and usage of report information for further analysis and usage in deep learning projects.

Additional file: Additional file 1: cx.ankle.trauma template. (HTML 4 kb)

Abbreviations: AUC: Area under the curve; CNN: Convolutional neural network; CPU: Central processing unit; CSV: Comma-separated values (a file format); FDA: Food and Drug Administration; IHE MRRT: Integrating the Healthcare Enterprise Management of Radiology Report Templates; GPU: Graphics processing unit; JPEG: Joint Photographic Experts Group (a file format); MySQL: My Structured Query Language (a database management system); NLP: Natural language processing; PACS: Picture archiving and communication system; RIS: Radiology information system; ROC: Receiver operating characteristic; SD: Standard deviation
The Effects of Integrating Technology on Students' Conceptual and Procedural Understandings in Integral Calculus

This paper discusses the effects of two different learning approaches on students' understanding of integral calculus. Experimental and control groups were formed at random to participate in this research. Each group was divided into three subgroups: low ability, medium ability, and high ability. These subgroups were formed according to students' marks on an integral calculus pre-test given prior to the lessons. In general, students in the experimental group outperformed their peers in the control group in terms of both conceptual and procedural understanding of integral calculus. With mathematical software used in learning integral calculus, the medium-ability and high-ability students in the experimental group progressed more than the low-ability students. In the control group, by contrast, the greatest percentage improvements in both conceptual and procedural understanding came from the low-ability group. Since the main objective of integrating technology into the learning of integral calculus is to enhance every student's understanding, a better implementation strategy needs to be drafted in the future. One possible way is to expand the usage of the technology to other calculus topics.

Introduction

There is no doubt that mathematics is important in many fields, including engineering and engineering technology (Grove, 2012; Haripersad & Naidoo, 2008; Henderson & Broadbridge, 2007; Pearson & Miller, 2012; Mynbaev, Bo, Rashvili, & Liou-Mark, 2008). It is the ultimate gateway to engineering education and eventually to the engineering profession (Pearson & Miller, 2012). Calculus is one of the topics defined as a fundamental course in mathematics and engineering (Haripersad, 2011; Huang, 2011; Mahir, 2009). Completion of any engineering degree is highly correlated with the completion of high school calculus and a few college-level calculus courses (Pearson & Miller, 2012).

Despite the well-documented importance of this subject, however, there are equally well-documented problems related to its learning. One problem discussed in the literature is students' under-preparedness in this subject (Haripersad, 2011; Henderson & Broadbridge, 2007). In addition, the "surface learning" approach used in the teaching and learning of secondary mathematics has also been discussed (Selden, 2005). One of the earliest studies, by Orton (1983), highlighted several examples of how the difficulties of teaching integral calculus at secondary school were handled. He noted that some educators reacted by avoiding introducing this topic at school, while others reacted by introducing integral calculus merely as a set of rules. Another issue that has been discussed is students' difficulty in transferring the mathematical knowledge they have learned to related technical subjects in engineering or engineering technology courses (Mynbaev et al., 2008).
In addition, even if students found calculus an enjoyable subject in school, their enjoyment does not guarantee an easy path to success in tertiary calculus courses (Tall, 2010). This is understandable, as students are expected to engage in deep learning of concepts at the postsecondary level (Selden, 2005). Students who performed well in calculus at school still struggle when learning mathematical analysis at university. What, then, of the less able students? This problem needs to be tackled at an early stage of university learning; if left untreated, the less able students are likely to become more confused and may eventually not complete their degrees (Tall & Razali, 1993). Furthermore, calculus, including integral calculus, is not only the prerequisite for higher mathematics subjects but is also crucial for all calculus-related technical subjects (Salleh & Zakaria, 2012).

The issues highlighted by students in this study are no different from those emphasized in the literature. In this study, engineering technology students' views were gathered through an informal interview session. Two students were chosen at random to give their opinions about their experiences in learning mathematics, particularly calculus. Both agreed that they had previously learned all the calculus topics offered in the engineering technology mathematics syllabus for bachelor-level students at the university; however, they admitted that they failed to understand all of the topics for various reasons, mainly the teaching approach. According to them, their mathematics lecturer taught the subject by writing all the formulae on the whiteboard and asking students to memorize them. They also mentioned that their previous lecturer gave a very minimal explanation of the application of certain formulae; instead, they were asked to memorize the steps involved in solving mathematics problems.

Views from an engineering student were also gathered for comparison with the engineering technology students' views. When asked about her experience of learning calculus, she immediately said that it is the most difficult subject. She added that she managed to pass the subject by memorizing rules and steps. Furthermore, she claimed that she did not understand the subject because she could not visualize the ideas behind each concept. The experiences shared by the engineering and engineering technology students provide insight that university calculus has problems in both the teaching and the learning aspects.

In this study, both the teaching and the learning of integral calculus were tackled through a blended approach combining the existing lecture mode of teaching with mathematical software as a learning aid. Maple software was chosen for its powerful properties in enhancing students' mathematical understanding: it allows not only numerical computations but also symbolic, algebraic, and graphical manipulations. These features help students perceive calculus from different angles (Samková, 2012). The integral calculus topic was chosen based on an analysis of final semester examinations done at the university involved in this study, which found that integral calculus has a very high failure rate (Salleh & Zakaria, 2011). Therefore, this study was conducted with one major objective: to improve engineering technology students' understanding of integral calculus through the integration of Maple into the learning of this topic.
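The kind of multi-representation work the passage attributes to Maple can be illustrated in Python with SymPy, used here only as a stand-in since the study itself used Maple: the same definite integral is treated symbolically, numerically, and as a plottable function. The example function is invented.

```python
import sympy as sp

x = sp.symbols("x")
f = x * sp.exp(-x)

# Symbolic view: antiderivative and exact area under the curve on [0, 2].
F = sp.integrate(f, x)
exact = sp.integrate(f, (x, 0, 2))
print("antiderivative:", F)            # -(x + 1)*exp(-x)
print("exact area:", sp.simplify(exact))  # 1 - 3*exp(-2)

# Numerical view: the same area as a floating-point number.
print("numeric area:", float(exact))

# Graphical view: SymPy can plot f directly (requires matplotlib).
# sp.plot(f, (x, 0, 2), title="area under f(x) = x*exp(-x)")
```

Seeing the antiderivative, the number, and the graph side by side is precisely the "different angles" on calculus that the passage credits such software with providing.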
Material and Methods A total of 50 students enrolled in Technical Mathematics 2 were selected to learn integral calculus with the help of Maple software; this group formed the experimental group. The control group, which underwent the integral calculus lessons without the help of the mathematical software, was formed by selecting another 50 students. Both groups were assembled by choosing students from five different existing technical courses at random, and the assignment was done carefully to minimize the chance of an extraneous effect that would jeopardize internal validity. The experimental group consisted of students from two technical courses: Automated System and Maintenance Technology (ASMT) and Machine Building and Maintenance Technology (MBMT). The control group consisted of students from three other technical courses: Air Conditioning and Refrigeration Technology (ACRT), Automotive Maintenance Technology (AMT), and Metal Fabrication Technology (MFT). These students took entirely different technical subjects and met each other only during mathematics classes. This measure was taken in order to avoid any discussion or sharing of information about the techniques learned in mathematics classes, so the potential for an extraneous effect was minimized. Due to time constraints, the lecture sessions for the two groups were held simultaneously, which meant that two mathematics lecturers were involved in delivering the lectures. However, the involvement of two lecturers did not affect the treatment, since the major difference between the two approaches lay in the activities conducted during the tutorial sessions. During the tutorials, students were exposed to two different learning environments, and a single lecturer conducted all tutorial classes. Throughout these sessions, the experimental group was brought to the computer laboratory to complete the exercises using Maple software, whereas students in the control group underwent the sessions without any help from Maple software (see Figure 1). For the lectures, the slides for the experimental group were prepared by the lecturer teaching the topic, with help from the researcher, and included Maple output elements; the lecture slides for the control group were prepared by the lecturer involved without any Maple output elements. Figure 1. Study design Prior to the intervention period, a pre-test on integral calculus was given to both groups. The objectives of the pre-test were to determine the baseline level of integral calculus knowledge in both groups and to find out whether previous knowledge was homogeneous across students in both groups. Based on the pre-test results, students in both the experimental and control groups were divided into three subgroups, namely low ability, medium ability and high ability. Subsequently, both groups underwent an intervention period of five weeks, equivalent to 40 hours. At the end of the fifth week, students in both groups were given a common post-test in order to determine which group was better in terms of integral calculus conceptual and procedural understanding.
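The paper does not state the exact cut-offs used to form the ability subgroups. A minimal sketch of one plausible procedure, splitting each group into tertiles by pre-test mark, is shown below; the column names, synthetic marks and tertile rule are illustrative assumptions, not taken from the study:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical pre-test records: 50 experimental + 50 control students
# with synthetic integral calculus pre-test marks (0-100).
df = pd.DataFrame({
    "student_id": np.arange(1, 101),
    "group": ["experimental"] * 50 + ["control"] * 50,
    "pretest": rng.integers(10, 95, size=100),
})

# Within each group, split students into three ability bands by
# pre-test tertiles (an assumed rule; the paper gives no cut-offs).
df["ability"] = (
    df.groupby("group")["pretest"]
      .transform(lambda s: pd.qcut(s.rank(method="first"), q=3,
                                   labels=["low", "medium", "high"]).astype(str))
)

print(df.groupby(["group", "ability"]).size())
```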
The pre-test and the post-test consisted of an Integral Calculus Test. This test was developed by the researcher and had been piloted to ensure its validity and reliability. The pilot study was conducted in the July-December 2011 session and involved a group of 79 students who had learned integral calculus. The reliability of the test was measured using Rasch model analysis. The item reliability indices for both constructs in the Integral Calculus Test, i.e., conceptual and procedural understanding, were high, at 0.95 and 0.96, respectively. The person reliability indices for both constructs were also good, at 0.77 and 0.86 for conceptual and procedural understanding, respectively (a sketch of the separation reliability computation is given at the end of this section). From these analyses, it could be concluded that the test was reliable for use with other respondents of similar characteristics to those involved in the pilot study. Teaching Elements The approach used in the lecture emphasized the explanation of concepts. Conceptual understanding of integral calculus is more than merely memorizing and applying rules and steps; rather, it consists of the construction and connection of ideas within students’ minds, and it is developed by relating existing knowledge to any new related information learned (Hiebert & Lefevre, 1986). Therefore, in this study, the lecture slides were prepared to facilitate the development of these processes within each individual student, as illustrated in Figure 2. Concepts The introduction of integral calculus. Integral calculus was introduced by discussing the area under a curve, from which the definition of integral calculus was subsequently developed. The idea of integral calculus was also explained through the application of Maple software. Substitution method The concept of the substitution method was explained through the relationship between integral calculus and differential calculus. Counter-examples were also given to provoke a cognitive conflict in students’ minds; this was intended to create disequilibrium, which can enhance students’ understanding. The application of Maple tools to solve problems involving the substitution technique was also introduced to help students discover the concept behind this method by themselves. Figure 2. Examples of lecture slides developed for experimental group Learning Activities In this study, the activities were developed based on Dubinsky’s APOS theory. APOS (Action, Process, Object, and Schema) was chosen because this theory emphasizes the construction of knowledge internally by each individual student. The theory refers to the mental structures that an individual may build in response to any given mathematical problem, and it hypothesizes that a mathematical concept is developed by an individual when he or she manages to convert existing physical or mental objects into an understanding in the form of appropriate schemas (Dubinsky & McDonald, 2001; Weller, Arnon, & Dubinsky, 2011). The activities in the newly developed strategy were then implemented using a teaching cycle known as ACE, which comprises three main components: Activity using Maple software, Class discussion based on Maple outputs, and Exercises outside class hours. Some of the activities developed are shown in Figure 3.
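Returning to the reliability analysis reported above: the paper gives the Rasch indices without showing the computation. In the Rasch framework, separation reliability is commonly computed as the share of observed measure variance not attributable to measurement error. A minimal sketch under that assumption is given below; the measures and standard errors are made up, not the pilot data:

```python
import numpy as np

def separation_reliability(measures, std_errors):
    """Rasch-style separation reliability:
    (observed variance - mean error variance) / observed variance."""
    obs_var = np.var(measures, ddof=1)        # variance of person/item measures
    err_var = np.mean(np.square(std_errors))  # mean squared standard error
    return (obs_var - err_var) / obs_var

# Illustrative person measures (logits) and their standard errors.
rng = np.random.default_rng(1)
measures = rng.normal(0.0, 1.2, size=79)
std_errors = np.full(79, 0.55)

print(f"person reliability ~ {separation_reliability(measures, std_errors):.2f}")
```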
Some of the Concepts Sample Activities in Tutorial Comments The introduction of integrating products of functions Students were given an opportunity to conjecture the process of integrating a product. They were also asked to evaluate the integral using Maple software in order to check their answer. Students were given a complete Integral Calculus Manual to refer to while completing their integral calculus problems with Maple software. Teaching and Learning Elements in the Control Group The teaching approach used in the control group did not emphasize the explanation of concepts; instead, it stressed fluency of process. Students were encouraged to memorize the steps involved in solving any integral calculus problem. They were also exposed to the patterns and types of final examination questions they would encounter. Past years’ examination questions were given during the lecture, and students were asked to work on the questions individually. The solutions were discussed, and the lecturer summarized by highlighting the pattern of steps involved so that the students could memorize them. The emphasis on fluency of process in solving problems can be observed in the slides used during the lecture (see Figure 4). Concepts The introduction of integral calculus. Integral calculus was introduced by directly discussing the process and rules. Substitution method The substitution method was explained through recognizing the pattern of the given function. The solution steps were emphasized so that students would recognize not only the pattern of the functions but also the relevant steps involved in solving each pattern. The tutorial sessions for the control group were conducted in normal classrooms. During these sessions, students were given weekly tutorial questions. These tutorial questions were uploaded to ECITIE, the university’s online portal. Students printed out the tutorial sheets and attempted the questions prior to each week’s tutorial session. During the tutorial session, they discussed any problems they encountered with their friends and with the lecturer. In this case, the discussion was done without any help from mathematical software; the only media used during the tutorial sessions were pen and paper. Findings and Discussion The statistical package PASW Statistics 18 was used to analyze the data obtained from the integral calculus test.
Students in the experimental group scored a higher mean in both conceptual and procedural understanding compared to students in the control group (see Table 1). To determine whether these differences were significant, statistical tests were performed. The differences in students’ mean performance scores after the 40-hour intervention period in conceptual and procedural understanding were assessed using Hotelling’s T² multivariate test, chosen in order to control the Type I error. In addition, independent t-tests were conducted to identify the effect on each dependent variable. Table 2 shows the outcomes of Hotelling’s T² and the independent t-tests for both dependent variables. The Hotelling’s T² value of 0.238 was significant at p < 0.05. Therefore, in general, there was a difference in students’ mean performance scores in integral calculus between those who used Maple software and those who did not. In order to identify which variables were responsible for the significant main effect, independent t-tests were performed; the t-tests for both conceptual and procedural understanding were significant, implying that students’ performance in the two groups differed significantly on both measures. Effect size values for the intervention were also reported, because statistical significance alone is not adequate to establish the practical effectiveness of the whole treatment (Thompson, 2002). In the PASW Statistics 18 package, effect sizes are measured as eta squared values (η²). In this study, the effect size for the whole treatment was 0.192. This η² value is equivalent to a Cohen’s d of 0.975 (d > 0.80), which is considered a large effect (Cohen, 1988). Therefore, the strategy as a whole had a large effect on the differences between students’ performance in the two groups. Students’ conceptual understanding was found to be the factor that contributed most to the significant difference between those who used Maple software and those who did not, with an η² value of 0.168 (Cohen’s d = 0.899). Students’ procedural understanding influenced the significant difference between the groups at a moderate level, with an η² value of 0.067 (Cohen’s d = 0.536). To determine exactly which group benefitted most from the intervention, students’ improvement in both types of understanding was measured. Table 3 shows how much students in both groups improved in conceptual and procedural understanding; improvement values are reported as percentages. For the experimental group, the maximum improvement in conceptual understanding was gained by a student in the medium ability group, at 80.83%. The maximum percentage of improvement in the low ability group was higher than the maximum in the high ability group; nevertheless, the minimum improvement value in the low ability group was zero. In terms of procedural understanding, the maximum improvement was gained by students in the high ability group, followed by the medium ability group, with the low ability group improving least. Similarly, for the average percentage values, the medium ability group improved more than the other two groups in conceptual understanding, while the high ability group improved more than the other two groups in procedural understanding.
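The reported eta squared and Cohen's d pairs are consistent with the standard two-group conversion d = 2·sqrt(η²/(1 − η²)). A short check of the paper's figures, assuming that formula is the one used:

```python
import math

def eta_squared_to_d(eta_sq):
    """Two-group conversion from eta squared to Cohen's d."""
    return 2 * math.sqrt(eta_sq / (1 - eta_sq))

# Reported eta squared values: whole treatment, conceptual, procedural.
for label, eta_sq in [("treatment", 0.192), ("conceptual", 0.168),
                      ("procedural", 0.067)]:
    print(f"{label}: eta^2 = {eta_sq:.3f} -> d = {eta_squared_to_d(eta_sq):.3f}")
# -> d = 0.975, 0.899 and 0.536, matching the values reported in the paper.
```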
For the control group, the maximum improvement in both conceptual and procedural understanding was gained by students in the low ability group. However, based on the average values, students in the high ability group generally improved more than the other two groups in both types of understanding. This study also found that students in the medium ability group did not improve at all in conceptual understanding and improved the least in procedural understanding. Also, the minimum improvement values in both conceptual and procedural understanding for students in the low ability and medium ability groups were negative. Conclusion In general, engineering technology students benefited from the integration of Maple software in learning integral calculus at the university. Both types of understanding investigated in this study, i.e., integral calculus conceptual understanding and procedural understanding, were found to be successfully enhanced using the mathematical software. This study also found that students in the medium ability group benefited the most from this approach in understanding the concepts of integral calculus, while in terms of procedural understanding the high ability group benefited the most. The low ability group improved the least in both types of understanding when learning integral calculus with the help of technology, with a minimum improvement value of zero in conceptual understanding. This indicates that the approach did not benefit some of the less able students in improving their integral calculus conceptual understanding. Since the main objective of integrating the technology into integral calculus was to improve every student’s understanding, an appropriate measure needs to be worked out, and the implementation of this approach needs to be planned properly so that all students, including the less able ones, benefit from the strategy. One factor to consider for improving its potential is a longer intervention period. Since it was not possible for the researcher to extend the duration of teaching and learning the integral calculus topic, other means need to be found; one possibility is to extend the use of this strategy to other calculus topics, for instance functions, limits and differential calculus. With wider coverage and a longer intervention duration, it is hoped that the potential of this strategy will be maximized. Figure 4. Examples of lecture slides for control group Table 1. Pre- and post-test results for both groups Table 2. Multivariate and univariate test results Table 3. Percentage of improvement for both groups
4,509.4
2012-11-30T00:00:00.000
[ "Computer Science", "Education", "Mathematics" ]
Internationalization of Universities Myth & Realities: A Case Study of COMSATS Institute of Information Technology (CIIT) Pakistan This study assessed the trend of Pakistani universities towards internationalization as compared with the successful models of the USA, UK, Australia and Canada, with particular reference to the efforts made by COMSATS Institute of Information Technology, Pakistan. Primary data from Pakistani universities and the views and ideas of researchers were discussed and analysed with respect to the realities of utilizing the opportunities of the “knowledge-based economy”. All such efforts resulted from reductions in government funding for universities. The discussion and analysis were based on data collected from prominent Pakistani universities as a primary source, while the secondary source was research based, wherein researchers’ experiences were quoted and shared to assess the myths and realities of the internationalization of universities in certain regions of the world. The major findings focused on the success stories of international student recruitment in the universities of the USA, UK, Canada, Australia and Malaysia. The universities of these countries earned profits, reduced their dependence on government funds and achieved breakthroughs in economic self-reliance based on the recruitment of international students over the past two decades. In Southeast Asia, Malaysian universities have adopted the role models of these successful countries and are now in the top position for recruiting international students. In Pakistani universities this trend is also growing very rapidly. The comparison of CIIT with other Pakistani universities, especially in the area of international student recruitment, indicates that the International Islamic University (IIU) was at the top in Pakistan. It was concluded that CIIT Pakistan is moving on a fast track towards the internationalization of its academic and research programs: as a policy it has allocated a large number of scholarships for international students, attained an international QS ranking of 3 stars out of 5 in general terms and 5 out of 5 in academics, outreach and civic engagement, been placed among the top 250 QS Asian universities, and ranked 4th among Pakistani general universities in the Higher Education Commission’s national ranking for the year 2013. Other Pakistani universities have also excelled in research and academics, created an ample environment for cross-cultural adaptation and are moving towards internationalization like CIIT Pakistan. Introduction International universities in the USA, UK, Australia and Canada have played an important role in stabilizing their funding resources through knowledge-based economic systems and have emerged as major players in international student recruitment over the last two decades. This came about as a result of shortfalls in government funding, which diverted such institutions of higher learning towards corporatization. Monk [1] discussed the deregulation undertaken in Southeast Asian countries such as China, where university structures and other essential modalities were changed to enable corporatization. Talik [3] and Elkin [4] also emphasized deregulation and the remoulding of universities’ academic programmes towards internationalization and the search for new sources of knowledge-economy income.
Universities all over the world, and especially in the Asian region, have become more competitive and have undertaken remarkable changes in their infrastructure, academic methodologies and research initiatives, including collaborative research, in pursuit of international ranking and acceptance by the international community. In the Asian subcontinent, the quality of higher education services in Pakistani universities, in both public and private sector institutions, increased manifold during the last decade and inclined towards knowledge-based economic systems. As per the indicators mentioned in the Annual Report [10], some Pakistani universities, such as the International Islamic University and CIIT Pakistan, made efforts to internationalize their research and academic programs and provided enhanced facilities to recruit international students. As per the Annual Report [12], CIIT Pakistan has sped up its international marketing efforts and entered into mutual collaborative agreements for student/teacher exchange programs, collaborative research and joint project ventures, and to recruit more international students, in spite of various ethnic and terrorism fears in the region and in Pakistan. CIIT is ranked among the top 250 QS Asian international universities, with a QS 3-star rating in general and 5 stars in academics and outreach/civic engagement. Universities in Pakistan are now on the track of internationalization, inspired by the role models of world universities in the USA, UK and Australia. In Southeast Asia, Malaysian universities are playing an important role, attracting and recruiting large numbers of international students. Objectives The prime objectives of this research were to compare CIIT Pakistan with other universities; to present and discuss the international role models among world universities and the trend of other Pakistani universities towards internationalization; to examine the pace of international student recruitment by exploring knowledge-based economic role models that can fill the gaps left by shrinking government funding of universities; and to identify the need for detailed research in the relevant areas for Pakistani universities and CIIT Pakistan. Methodology Observation: A wide web-based survey was conducted to investigate the websites of Pakistani and international universities that internationalize their academic programmes and recruit international students for profit. Primary and secondary research was carried out to investigate related work done previously by researchers worldwide. Questionnaire Survey: A questionnaire was designed and developed to fulfil the objectives of this study and provided online to respondents (the Registrars of Pakistani universities). The questionnaire comprised the following sections: designation of respondent (owner/manager/staff); type of stakeholder (main or sub-management); capability of the university (status in terms of finances); availability of international students (yes/no); level of internationalization attained (respondent’s perception); and legal and intellectual immigration barriers to the internationalization of universities. Data Analysis: Respondents’ data and the secondary research were analyzed according to these sections and variables. Structural Change Adaptation of Universities The higher education sector has played a vital role in the internationalization of national universities through the adoption of the required structural changes and entry into the race of the global knowledge economy.
In this context, various socio-economic changes were witnessed in different regions of the world, particularly in Southeast Asia. Monk [1] holds the view that in Vietnam and mainland China some deregulation activities at the government level were observed to promote growth in industrial commodity exports, and various policies on health, transport, communication and education were changed in favour of internationalization. Evolution of the International Education Market Most universities appear inclined towards the internationalization of their academic and research programs in the competitive international knowledge-based economic market, selling their competencies and higher education services in different regions of the world. Currently, universities in the Asian region, drawn into the race to win students and make profits, are evolving into business corporations. In the South Asian region, most universities have as their major agenda the capture of the international student markets of the Middle East and Africa. Tin [2] emphasized the trend of student recruitment success stories and noted that countries like Singapore, Taiwan and Japan took the lead, besides India, Pakistan and Malaysia. Talik [3] observed that with the internationalization of universities, even in developed countries like the UK and USA, state funding declined, and most universities turned to marketization, converging into business hubs within knowledge-based economic benefit streams; they managed their economic resources through the internationalization of research programs and the selling of academic programs with international student intake. He argued that in higher education systems, state funding has a unique value for the national economic development of a country. Assessment of Universities’ Internationalization Elkin [4] takes the general view that a country’s university internationalization can be assessed with parameters such as the standardization of nationally practised academic programs against internationally focused ones. Cudmore [5] reported that by the year 2025 the number of international students worldwide would be about 7 million, which would be a positive sign of good earnings for universities and of achieving the internationalization of higher education globally. He based this projection on a study of various institutions in Canada, the USA and the UK. He further deliberated on the reasons why international students prefer to enrol in the universities of Canada, Australia, the USA and the UK: a lucrative market, the value of the institution and successful cultural diversity. Trend of Internationalization of Universities The trend of internationalization of universities, especially the recruitment of international students, did not decline after the 9/11 terrorist attacks on the USA. Jacobson [6] and Walker [7] discussed the trend of student recruitment and noted that student intake was not adversely affected; rather, a continual increase in student intake was recorded in US universities from China, India and South Korea. They further indicated that a decrease of about 25% was observed in student intake from Muslim states such as Pakistan, Saudi Arabia, Malaysia, Kuwait and the United Arab Emirates (UAE).
From these countries, interest in international student recruitment was diverted towards universities in Canada, Australia and New Zealand as alternatives to US universities. It also transpired that international student recruitment was pursued not only for the internationalization of institutions but also to generate revenue by charging applicants high fees, to cope with the lack of state funding. Edward J. [8] also emphasized that in most countries, higher education institutions and universities are adopting the American model of internationalizing academic and research programs and attracting international researchers, which has caused a brain drain in different regions. English proficiency requirements were made a standard for student enrolment. American institutions are playing a prominent role in the internationalization process and serving as role models, especially for developing countries, creating a strong impetus towards international cooperative research and educational exchange partnerships. Immigration Policies The immigration policies of countries also affect international student recruitment programs. Countries that genuinely support the internationalization of their higher education institutions, the development of research linkages and student recruitment may relax immigration rules and regulations and encourage the recruitment of international students, researchers and teachers from the international market. Third-world countries are now motivated to develop the skills and technological advancements needed to meet international standards. This can only be achieved through the development of an educated workforce in institutions that build linkages with the learned universities of the developed world and meet the minimum requirements of their academic excellence. At the same time, in countries like Pakistan, the trend towards the commercialization of international student recruitment is creating a negative impact on the developing world. Ranking & International Students The presence of international students seems to be an integral part of the international ranking of a university. Hazelkorn [9] cited the examples of Australia, Germany and Japan and stressed that international students play an important role as a weapon in the battle for talent. Australia had 2 universities in the top 100 of the Shanghai Jiao Tong Academic Ranking of World Universities and 8 in the Times QS World University Rankings 2007; Germany had 6 and 3, and Japan 6 and 4, respectively. Likewise, in Pakistan, as per the Higher Education Commission (HEC) ranking detailed in the Annual Report [10], the COMSATS Institute of Information Technology is at No. 4 among 136 Pakistani general universities in both the public and private sectors. International Students Comparison of Some Pakistani Universities The International Islamic University (IIU) holds the top position in international student recruitment in Pakistan, with 1,726 international students from 40 different countries. The National University of Science and Technology (NUST) has only 11 international students and Bahria University only 14, while the COMSATS Institute of Information Technology (CIIT) Pakistan enrolled 21 international students, including 5 in Masters programs, during the year 2012-13.
CIIT is building and strengthening international linkages and has planned to recruit at least 250 international students in the next three years, besides collaborative agreements with more than 100 international universities around the world, including in the USA, UK and Germany. Major Source of Recruitment The major source of international students for European universities was Asia, but now most universities in the Asian region have attained excellence in research and academics and joined the race to internationalize their programs for better earnings and a cross-cultural environment. Galway [11] identified three reasons to recruit international students: the opportunity to generate revenue; the mixing of foreign perspectives and cultures with local traditions to create a multicultural environment; and the enhancement of institutional international knowledge-economy trade links, such as contractual and sponsored joint ventures and projects. As quoted in the Annual Report [12], in Pakistan CIIT is focusing on recruiting international students from member states of the Organization of Islamic Cooperation (OIC), and during the past few years the trend towards internationalization has increased in Pakistani universities in spite of intense cultural issues and adaptability challenges. In the CIIT model, the humanitarian aspect of internationalization is very prominent, given that the Institute has specially offered 260 Masters-level scholarships to international students; however, the aspect of corporatization and business earnings through high fees cannot be ignored, as in other universities of the world. Results In the world, and especially in the Southeast Asian region, deregulation and structural changes in universities were undertaken; such changes also expanded to the Middle East, Africa and Pakistan, where universities turned towards internationalization to generate their own funding resources by recruiting international students with a high fee structure. The adoption of knowledge-based economic systems has followed the role models practised in the USA, UK, Australia and Canada over the last two decades. Among Pakistani universities, CIIT Pakistan in particular is moving on a fast track towards internationalization and is motivated to recruit more international students in the future, with strong leadership commitment to academic and research collaboration with international universities and organizations. Conclusions The global trend of university internationalization converging towards corporatization is evident, and most international universities, especially in Asia and Southeast Asia, are adopting policies to revamp their structures, methodologies and practices in line with the role models of US, Canadian and UK universities. Universities are also moving towards contractual research and joint ventures in academic research and projects, including through efforts made by individual academicians. In Pakistan, CIIT is moving aggressively towards internationalization, in view of manifold advances in scholar exchange programs and in faculty and staff training all over the world under HRD programs, with the help of government-funded projects. The global humanitarian higher-education stance of CIIT positions it to emerge as a major provider of higher education services, followed by a steady conversion towards corporatization in Pakistan.
Recommendations a) Further primary and secondary research needs to be undertaken to assess the exact position of Pakistani universities, with particular reference to CIIT Pakistan, and their participation and efforts towards internationalization, international ranking and corporatization. b) A futuristic study of the internationalization of CIIT Pakistan should be conducted by researchers on the basis of the existing successful role models of developed countries such as the USA, Germany, the UK, Canada and Australia, and, within Asia, in comparison with the role model of Malaysia.
3,512.6
2015-01-01T00:00:00.000
[ "Education", "Computer Science", "Economics" ]
Integrated Mobile Laboratory for Air Pollution Assessment: Literature Review and cc-TrAIRer Design: To promote research studies on air pollution and climate change, the mobile laboratory cc-TrAIRer (Climate Change-TRailer for AIR and Environmental Research) was designed and built. It consists of a trailer which affords particle, gas, meteorological and noise measurements. Thanks to its structure and its versatility, it can easily conduct field campaigns in remote areas. The literature review presented in this paper shows the main characteristics of existing mobile laboratories. The cc-TrAIRer was built by evaluating technical aspects, instrumentation and auxiliary systems that emerged from previous studies in the literature. Some of the studies conducted in areas of heterogeneous topography, such as the Po Valley and the Alps, using instruments chosen for the mobile laboratory, are reported here. The preliminary results highlight the future applications of the trailer and the importance of high-temporal-resolution data acquisition for the characterization of pollution phenomena. The potential applications of the cc-TrAIRer concern different fields, such as complex terrain, emergency situations, worksite and local source impacts, and the temporal and spatial distributions of atmospheric compounds. The integrated use of gas and particle analysers, a weather station and environment monitoring systems in a single, easily transportable vehicle will contribute to research studies on global aspects of climate change. Introduction Air quality and atmospheric pollution are still serious challenges worldwide [1][2][3]. Indeed, air pollution contributes to at least 7 million deaths worldwide every year [4][5][6]. According to the State of Global Air 2020 report of the Health Effects Institute, in 2019 it moved up from the fifth to the fourth leading cause of global deaths, accounting for 12% of the total. Atmospheric pollutants can be directly emitted by both anthropogenic and natural sources: although the former are the most significant contributors of the major compounds, the latter should not be overlooked [7,8]. These primary compounds can also chemically interact to promote the formation of secondary air pollutants [9]. Activities related to the combustion of fossil fuels (coal, oil, gas and gasoline), energy production, agricultural burning and other industrial processes for power generation, space heating and transportation significantly increase the particulate matter (PM) concentration in the atmosphere [10,11]. At the same time, fires, sea spray, soil erosion and the resuspension of dust naturally contribute to particulate matter increases [12]. Sources of CO include fossil fuels and motor vehicle exhausts, while the burning of coal and oil raises SO2 concentration values [13][14][15]. NOx and VOCs are two other critical air pollutants with health and environmental impacts, and they are also involved in the ground-level ozone production mechanism: CO, VOCs and CH4 are oxidized in the presence of solar radiation and NOx to form ozone, a secondary pollutant responsible for photochemical smog and health risks linked to respiratory diseases [16,17]. Exposure to high concentrations of gaseous and particulate pollutants has demonstrated negative effects on the environment and on human health [18][19][20][21]. In adverse conditions of both outdoor and indoor pollution, a large number of organs can be affected, such as the lungs [22], heart and brain [23] and eyes [24].
Air pollution also has relevant impacts on cancers [25], cardiovascular diseases [26], diabetes and obesity [27] and subjective well-being [28]. Air pollutants are also strongly harmful to the environment, with devastating consequences. For instance, soil and water matrices are damaged by the effects of acid rain, which also corrodes buildings, statues and infrastructure [29]. Anthropic emissions, such as those from traffic or industrial activities, contribute to the production of haze, which strongly reduces atmospheric visibility, especially when suitable accumulation conditions occur [30]. Moreover, the excessive deposition of nitrogen oxides and other nutrients drives local eutrophication of surface waters and consequently damages aquatic ecosystems [31]. In wider terms, wildlife is dangerously threatened by the effects of air pollution [32]. It is now known that there is a mutual relationship between air pollution and climate change [32]. Some constituents of PM, such as black carbon, can induce a warming effect on the overall climate through the absorption of solar and infrared radiation [33]. Furthermore, the deposition of these particles on the surfaces of glaciers promotes their melting [34]. Since particles are actively involved in cloud formation, their atmospheric concentration can affect the reflection of solar radiation by clouds as well as the temporal and physical characteristics and duration of precipitation [2]. Simultaneously, the main direct consequences of climate change, such as warming over higher-latitude areas and the decrease in the frequency of cyclones, influence air pollution phenomena [35]. Indeed, the alteration of meteorological events and atmospheric stability directly affects the concentrations of PM and ozone [35,36]. The reduction in the number of precipitation and wind events increases stagnation conditions and decreases the possibility of the dilution and deposition of pollutants [37]. Furthermore, the increase in temperature and exposure to solar radiation promotes higher ground-level ozone concentrations and longer-lasting phenomena [38][39][40]. The effects of climate change are clearly visible in mountain glaciers, which are, for this reason, key strategic indicators of global warming [41]. Due to their morphology, the Po Valley and the Alps in Italy are among the most critical European areas where the combined effect of climate change and air pollution is observable. Their geographical location is interesting from a scientific viewpoint, since it contains different environments (sea, mountains, valley) subject to continental, Mediterranean and Alpine climates, sometimes influenced by Saharan contributions too [42]. In light of this, especially in strategic areas such as the Po Valley, the importance of air quality monitoring under varying boundary conditions is clear: it makes it possible to understand the impacts of megacities, anthropogenic activities and natural sources on atmospheric dynamics and pollutant concentrations, to examine global climate change effects in depth and to safeguard the environment and human health. For this reason, data additional to those provided by institutional agencies are needed to expand the amount of information from spatial and temporal points of view [43].
In fact, there is an increasing need for approaches able to reach complex terrain with the aim of carrying out research and intervening promptly in emergencies, taking advantage of high-time-resolution instrumentation. Mobile laboratories are among the systems currently used to reach this goal, with applications to air, soil, water and biota [44]. In air quality studies, monitoring campaigns can be conducted in mobile or stationary modes. The main purpose of the former is usually to capture the spatial characteristics of the collected data, while in the latter a strategic sampling point is added to the fixed monitoring network. In both cases, these laboratories are complex systems which need an accurate design, especially with regard to the instruments’ operating conditions, inlet characteristics and power supply. Once collected, the data require a dedicated processing strategy to obtain satisfactory results on air pollution trends [45]. Moreover, a proper selection of instruments that can be easily transported is required. The aim of this work was to describe the design of a mobile laboratory, the cc-TrAIRer (Climate Change-TRailer for AIR and Environmental Research), consisting of a non-motorised instrumented trailer, whose purpose is to conduct monitoring campaigns analysing the effect of specific sources, weather, and atmospheric and boundary conditions on air quality parameters, in order to incorporate the examined phenomena into the investigation of global aspects of climate change. In this article, we present a literature review of the design of existing mobile laboratories (Section 2) and then discuss the equipment of the cc-TrAIRer (Section 3), focusing on the instruments and the management of auxiliary systems, such as the electrical, informatic and energy systems. Lastly, a brief overview of the results of campaigns performed at some sites in the Po Valley and the Alps is presented (Section 4). The literature analysis and the previous studies conducted led to the optimization of the trailer design. This article aims to supply a technical background for future applications of the cc-TrAIRer. Literature Review In recent years, mobile laboratories have become increasingly widespread for activities involving the monitoring of air quality standards and pollutants. They can be applied to urban, rural, background, industrial or emergency monitoring, and they can also be used for the assessment of a specific source. Owing to their ongoing development, studies on mobile air quality measurement campaigns can be found in the literature. In this section, the instrumentation, technical aspects and main purposes of these kinds of laboratories are analysed. One of the first applications was a semi-trailer designed by the University of California [46], used mainly for research purposes. Despite the possibility of evaluating gas and PM concentrations at different points, its bulky structure reduced the chances of easy use in tight and demanding places. Moreover, its operation depended strongly on the availability of an electrical network; in fact, the truck did not guarantee electrical autonomy through generators or internal energy storage technologies. Technological evolution has led to the design of smart and smaller systems, which make it possible to carry out studies with better electrical autonomy and higher versatility from spatial and temporal points of view.
Nowadays, the most popular vehicles for mobile monitoring activities with instruments able to assess air pollution parameters are vans [47][48][49] and trucks [50], but other kinds of transport can also be found in the literature, such as trams [51,52], a trailer [53], SUVs [54][55][56], a bicycle [57] and recreational vehicles [58][59][60]. Unlike in the past, one of the main goals of current studies is the monitoring of the most important air quality parameters to evaluate compliance with the limits imposed by national and international environmental legislation. As reported in the literature [61][62][63], the main parameters responsible for air pollution, and dangerous for the environment and for human health, that are tracked by mobile laboratories are PM, black carbon, NOx, O3, SOx, CO, CO2, NH3 and VOCs. Some systems have also been designed to identify only specific pollutants. For instance, Bush et al. [64] monitored CH4, CO2 and CO concentrations in order to understand the influence of urbanization, traffic and point sources on their spatial distribution. Implementations of these technologies for a specific parameter can also be found for transcontinental CH4 concentration characterization [58,59] and for more local conditions, such as identifying the location of fugitive pipeline leaks around an urban area [65,66]. Tao et al. [67] investigated the concentrations of greenhouse gases and air pollutants through CO2, CO, CH4, N2O, NH3 and H2O QCL and LI-COR sensors. Other mobile laboratories [68] are able to verify the presence of metals or halogens, such as chromium and bromine, in different environments. Finally, the impact of local sources has been studied thanks to mobile campaigns assessing the main greenhouse gases [69]. In addition to their principal purposes, some mobile laboratories can be modified to host different technologies, or they can be partially exploited for specific measuring campaigns [70,71]. In this way, versatility becomes an important advantage in terms of potential use. Weather conditions significantly affect the dynamics of the atmosphere, gas transformation processes and the transport of pollutants [72,73]. For this reason, meteorological parameters are usually monitored simultaneously with air quality ones, in order to characterize their relationship. The most commonly measured parameters are temperature, relative humidity and barometric pressure [74,75]. Wind intensity, wind direction and rain intensity are usually monitored too [76,77]. More rarely, global radiation [47] and the solar spectrum are investigated [78]. In addition to air quality and weather parameters, traffic flows are sometimes monitored through traffic cameras or sensors [79]. The main purpose of this section is the analysis of the designs of existing mobile laboratories available in the literature. A careful selection of papers with detailed descriptions of the technical aspects of these systems was carried out. To do this, vehicle structure, power supply, inlet system, air conditioning and instrumentation were evaluated. The chosen studies included only multi-parameter laboratories and road transport vehicles (van, truck, SUV, trailer). Trains, trams, airplanes and bicycles were excluded from the focus, since the field is broad and they need further detailed study. The investigation was performed considering one of the main characterization parameters of mobile laboratories: the conditions of use.
Indeed, vehicles can be designed to conduct measuring campaigns in mobile or stationary modalities. The first refers to measuring campaigns conducted while the vehicle is in motion, while the second concerns stationary activities, and the two modalities usually serve different aims. In Tables 1-3, the main technical aspects of laboratories used in stationary, mobile or both modalities are shown. In particular, the vehicle type, monitored parameters, measurement technique and instruments, time resolution, detection limit and meteo-climatic parameters are reported. Generally, mobile campaigns are conducted on roads [80,81], railways [51] or specific urban and rural routes [82] along an entire path with a high instrumental temporal resolution. The processed data allow the definition of a concentration map along the travelled paths in order to identify hot spots of specific pollutants, such as industrial areas [83], highways [49] or urban canyons [48]. Due to their configuration, these moving measurements require extremely high-time-resolution sampling to describe the evolution of concentration values in an adequate way. Furthermore, if the same path is travelled frequently, these kinds of acquisitions can support the analysis, over time, of the effects of external conditions, such as wind intensity and direction, on local sources [84]. However, these parameters are difficult to detect correctly while driving, since the vehicle speed exceeds the wind speed and the vehicle vibrates. Therefore, the measured data need specific processing in which boundary conditions, such as the vehicle speed, are considered. Mobile laboratories can also be deployed in a chase mode to specifically inspect vehicle traffic emissions [80] or to sample road dust emissions from tires [85]. As stated before, mobile laboratories can also be used in a stationary way. These vehicles have the significant advantage of allowing long-term campaigns in arduous places [53] or in areas normally not covered by fixed institutional measuring stations. Unlike mobile measurements acquired by moving devices, stationary vehicles make it possible to obtain in-depth knowledge of the temporal variability of pollutant concentrations at the measuring location. This acquisition method can contribute to understanding background trends and the impact of local sources at a specific site. Generally, the inlet systems of mobile laboratories consist of a single pipe for all the instruments located in the vehicle [86] or of separate systems that allow independent sampling [87]. Alternatively, two inlets are mounted: one for gases and one for aerosols [55]. Due to its configuration, a single-inlet system allows new analysers to be added easily, while, on the other hand, separate inlets make it possible to remove specific instruments entirely in order to use them in other measuring campaigns [81]. In accordance with the aim of the study, a configuration with separate inlets was chosen. A single-inlet set-up is used especially when the vehicle has to sample while travelling; otherwise, the measurement can be affected by turbulence or by rapidly changing external conditions. In such cases, the inlet system is based on isokinetic sampling to minimize the effect of vehicle speed on data acquisition [84], and the inlet walls are normally made of Teflon to prevent chemical reactions with atmospheric pollutants [47].
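To make the isokinetic constraint concrete: the inlet nozzle is sized so that air enters at the same velocity as the free stream (here, the vehicle speed). A minimal sketch of that sizing calculation follows; the flow rate and speed are illustrative assumptions, not values from any of the cited laboratories:

```python
import math

def isokinetic_nozzle_diameter(flow_lpm: float, freestream_ms: float) -> float:
    """Nozzle diameter (m) such that inlet velocity equals free-stream velocity.

    Isokinetic condition: U_inlet = Q / A = U_freestream,
    hence d = sqrt(4 Q / (pi * U)).
    """
    q_m3s = flow_lpm / 1000.0 / 60.0      # L/min -> m^3/s
    area = q_m3s / freestream_ms          # required nozzle cross-section (m^2)
    return math.sqrt(4.0 * area / math.pi)

# Example: a 16.7 L/min sampler (typical 1 m^3/h) on a van driving at 50 km/h.
d = isokinetic_nozzle_diameter(16.7, 50 / 3.6)
print(f"nozzle diameter ~ {d * 1000:.1f} mm")  # ~5.1 mm
```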
In contrast, vehicles used in stationary mode usually exploit the possibility of maintaining separate inlets for each instrument, since the sampling conditions do not require a single sampling point. The inlet head is generally located at heights of about 2.5 m to 3 m a.g.l. when the sampling has to assess background conditions or source effects, in order to avoid ground particle resuspension [88], while laboratories that chase other vehicles to monitor exhaust gases have a lower sampling head at the front [89]. Pirjola et al. [48] describe a mobile laboratory with one inlet system at a height of 2.4 m and another at 0.7 m that can be used in both stationary and chasing modalities. The exhaust gases of all instruments should be collected in a single drain pipe and conveyed in such a way that they do not contaminate the data acquisition of the inlet head system [90,91]. Special attention should be given to nanoparticle studies. The typical horizontal inlet configuration of chasing and in-motion laboratories should be carefully designed to minimize the loss of the finest particles, which can deposit on the sampling pipe walls [50]. The collection of nanoparticles in mobile campaigns can be strongly affected by the complexity of this kind of sample acquisition due to various technical aspects, such as the flow velocity, inlet direction and instrument stability. In contrast, the generally vertical acquisition head of stationary vehicles allows correct sampling without underestimation. Electrical management is one of the hardest challenges in mobile laboratory design. Besides the energy supply necessary for the full operation of the instruments, air conditioning, required to avoid the overheating of the technologies, plays an important role in electrical procurement [47]. Pumps, instruments and ancillary elements generate exhaust heat that should be correctly dissipated [90] to preserve the analysers’ operating temperature range. Electricity is usually provided to mobile laboratories in different ways, depending on the number of instruments simultaneously active and the campaign details. The principal supply system for vehicles used in stationary conditions is an electricity network, typically combined with an uninterruptible power supply (UPS), which protects the system from temporary losses of electrical power [53]. In contrast, moving devices need an electrical autonomy that can assure data acquisition while travelling. In some cases, lithium batteries can store the power needed to provide autonomy from a minimum of 4 h [83] up to about 8 h of sampling [77]. Otherwise, gasoline-fuelled generators can guarantee the system’s activity [86], but some precautions are required: exhaust gases should be discharged far from the inlet heads to avoid interference with data collection [90]. Devices that conduct both stationary and in-motion sampling campaigns are normally equipped with alternators, but they also have arrangements to be supplied by the electricity grid [47]. Considering their structure, data collection is another crucial aspect of mobile laboratories. Real-time measurements with high temporal resolution require a solid storage capacity design [90]. Also, the large number of monitored parameters, including pollutants, weather conditions and GPS, entails the need for an accurate data management system able to archive them in a database [89]. In fact, collected data are generally managed by an on-board datalogger in order to prepare them for subsequent processing and analysis [83].
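The 4-8 h battery autonomy figures cited above follow from a simple energy budget: usable pack energy divided by the average instrument load. A minimal sizing sketch, with purely illustrative load and capacity values (none taken from the cited laboratories), is shown below:

```python
def autonomy_hours(pack_wh: float, depth_of_discharge: float,
                   inverter_eff: float, avg_load_w: float) -> float:
    """Estimated sampling autonomy of a battery-powered laboratory."""
    usable_wh = pack_wh * depth_of_discharge * inverter_eff
    return usable_wh / avg_load_w

# Illustrative assumptions: 5 kWh lithium pack, 80% usable depth of
# discharge, 90% inverter efficiency, 900 W average load (analysers,
# pumps, datalogger and air conditioning duty cycle).
print(f"autonomy ~ {autonomy_hours(5000, 0.8, 0.9, 900):.1f} h")  # ~4.0 h
```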
Data often have to be downloaded in the field [92], especially if the instruments or datalogger do not have high memory capacities. To overcome this issue, mobile laboratories are in many cases equipped with an internet connection to guarantee remote data access. The information is shared inside the laboratory through USB [57], RS232 serial ports [55], LAN [89] or WLAN [52] connections, while worldwide access is assured by GPRS [53] or UMTS [52]. The Aim of the Project The mobile laboratory described in this article (the cc-TrAIRer) was designed by the Department of Environmental, Land and Infrastructure Engineering (DIATI, Turin, Italy) of the Politecnico di Torino as part of the climate_change@polito project (ministerial funding program “Dipartimenti di Eccellenza 2018-2022”). The main purpose of this laboratory is to promote research on air quality and climate change through particle, gas and atmospheric condition sampling. The need for this project emerged from the knowledge and awareness acquired over the years by the research group through studies of air quality in several circumstances (see Section 4). The laboratory consists of a non-motorised instrumented trailer able to conduct sampling campaigns in complex terrain, emergency situations, areas generally not assessed by the authorities and narrow roads. It can also be deployed in operations for the control of diffuse emissions in extended areas where outdoor activities are conducted (such as construction sites or quarries). Thanks to its structure, it is easily transportable by another vehicle, which can be employed in different activities during the trailer’s long-term campaigns. In general, fixed monitoring networks are used to obtain the spatial distribution of pollutant concentrations thanks to the high number of sampling points. In contrast, the cc-TrAIRer can supply an evaluation of natural and anthropic emissions in different environmental settings with an extremely high temporal resolution. In fact, to be representative enough, several days of a monitoring campaign are needed to evaluate pollutant trends under different weather and release conditions. To this end, gas- and PM-certified analysers were appropriately selected as measurement instruments. The collected pollutants include the PM fractions (in particular PM1, PM2.5, PM4, PM10 and PTS) due to their relevance for air quality. For their acquisition, two optical analysers (Fidas 200s, Palas GmbH, Karlsruhe, Germany and Comde Derenda APM-2, Comde Derenda, Stahnsdorf, Germany) and one gravimetric sampler (MicroPNS LVS16, MCZ, Bad Nauheim, Germany) are used. Despite the apparent redundancy, PM measurements are taken with different techniques (optical and gravimetric) because of the need to obtain acquisitions with very high temporal resolution while also having a gravimetric measure available to validate the data in accordance with the law. Moreover, the trailer setting makes it possible to install two different optical instruments with different configurations and perform checks at the same time, or to choose the most appropriate one for the monitoring campaign. Indeed, all the particle collectors have their own sampling heads and can be installed in or removed from the trailer as necessary. This arrangement permits their use as additional satellite monitoring points in the surrounding areas of the studied site to improve the analysis of the pollutants’ spatial distribution.
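The on-board datalogger pattern described here, timestamping every analyser reading into a single database for remote consultation, can be illustrated with a minimal sketch: polling one instrument over a serial port and appending rows to a local SQLite database. The port name, baud rate and record framing are illustrative assumptions; the paper does not specify the actual datalogger protocol:

```python
import sqlite3
import time

import serial  # pyserial

DB = sqlite3.connect("cc_trairer.sqlite")
DB.execute("""CREATE TABLE IF NOT EXISTS readings (
                  ts TEXT, instrument TEXT, parameter TEXT, value REAL)""")

# Hypothetical serial link to one analyser; port and framing are assumptions.
port = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=2)

while True:
    line = port.readline().decode("ascii", errors="ignore").strip()
    if not line:
        continue
    # Assumed record format: "PM10=23.4;PM2.5=11.1" (semicolon-separated pairs).
    ts = time.strftime("%Y-%m-%dT%H:%M:%S")
    for pair in line.split(";"):
        param, _, val = pair.partition("=")
        try:
            DB.execute("INSERT INTO readings VALUES (?, ?, ?, ?)",
                       (ts, "fidas200s", param, float(val)))
        except ValueError:
            continue  # skip malformed fields
    DB.commit()
```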
In addition, O3 and NOx are monitored, since they are the principal benchmarks of photochemical smog and anthropic sources, respectively, and act as precursor gases for the secondary formation of fine particles. The air flow is collected by a single probe to guarantee the needed flow conditions, and the instruments are housed in a rack designed to hold two further analysers in the future. The exhaust gases are conveyed in a single discharge pipe that can be extended to move them away from the sampling heads. Since weather conditions strongly affect atmospheric dynamics and the distribution of pollutants, the trailer is equipped with a meteorological station in order to identify their mutual correlation.

Further possible evaluations concern the acoustic field. Although not strictly related to air quality, noise is an important parameter in the assessment of environmental standards and it also contributes to a global characterization of anthropic sources [93]. For this reason, a sound-level meter system is integrated within the trailer. Finally, the addition of a solar spectrometer is planned. Indeed, solar spectrum analysis can provide important data in pollutant research through the interpretation of spectral band responses [77,94].

In summary, the cc-TrAIRer is currently able to supply PM1, PM2.5, PM4, PM10, PTS, O3, NO, NO2, NOx, weather and noise measurements. The trailer instruments and equipment are described in the following sections and their technical specifications are reported in Table 4. To avoid possible damage to the instrumentation placed in the trailer while travelling, silent blocks that reduce vibrations are installed around the analyser rack and the more vulnerable instruments.

The power budget is a tricky aspect of this kind of laboratory. As already discussed in the previous section, the electricity supply needed for instrument operation should be accurately designed. Moreover, to guarantee optimal thermal operating conditions, the cc-TrAIRer is equipped with an air conditioning system, which prevents overheating or excessive cooling of the equipment. The trailer can be powered by a traditional external power grid when feasible. Otherwise, a photovoltaic system ensures its independence from the electrical network, above all in isolated places or where a power line is not accessible. In this way, the trailer's operation is guaranteed without a diesel generator, which is not sustainable from an environmental point of view and whose emissions could significantly affect the data acquisition. To reduce on-site staff interventions and to facilitate data management, a datalogger with remote control was installed inside the cc-TrAIRer. It manages a single database for all the analysers. In this way, acquisitions can be consulted in real time and data processing is performed to analyse the sampling results more easily.

Trailer Design

The mobile laboratory was built in a Ranger Hero Camper (length: 4.82 m; width: 2.3 m; height: 2.32 m). The internal volume of about 6 m³ is large enough to host all the selected instruments, even if an accurate placement study was needed (Figure 1). Nevertheless, thanks to its small dimensions, it can easily be moved or parked in almost any place. The frame and the off-road tires allow it to reach dirt tracks or isolated areas, which are generally not examined by other, traditional mobile laboratories.
As the Hero Camper is originally intended for travelling, some structural changes were made to adapt it to the instruments' requirements for research purposes. The roof window was removed to improve the thermal insulation of the vehicle, decreasing potential heat losses. For the same reason, all the holes that allow electrical, hydraulic and cable connections through the walls were properly sealed. The kitchen facilities at the back were removed to free up space for the battery packs. A support surface was built as a workstation, where the personal computer and the datalogger are placed. Additionally, a recess was created on each side of the trailer to house two PM analysers externally. The symmetrical positioning of these two instruments with respect to the trailer's axis significantly helps to optimize the weight distribution. Indeed, a specific analysis of the load distribution was undertaken to ensure vehicle stability, paying particular attention to the heaviest elements (such as the rack, the batteries and the air conditioning system). Finally, the majority of the apparatus is located at the centre rear of the trailer in order to transfer the load to the tires and the two rear stabilizers and not exceed the load limit on the hitch.

The baggage rack was used as a support base for the photovoltaic system. The latter was placed horizontally to optimize electricity generation and to preserve the aerodynamic shape of the trailer while travelling. However, when solar power is needed to support the instrumentation during sampling campaigns, the panels can be expanded to increase the exposed surface. To stabilize the trailer, they can also be fastened to the ground to prevent the entire vehicle from being lifted by external forces such as strong winds.

Instrumentation

The cc-TrAIRer provides measurements of PM, gases and weather conditions. Table 4 gives an overview of the technical specifications of all instruments. The measurement techniques are described below.

Palas Fidas 200s

The Palas Fidas 200s (Palas GmbH, Karlsruhe, Germany) is an optical analyser with a continuous monitoring system for fine particles in the size range of 180 nm-18 µm. The particle size diameter is computed using Lorenz-Mie scattered-light analysis. The instrument provides PM2.5 and PM10 measurements in accordance with the law, plus other fractions that are useful for research purposes (PM1, PM4, PMtot, particle number concentration Cn and particle size distribution). The instrument has been tested and found to comply with EN 15267-1 (2009) and EN 15267-2 (2009). The inlet system is made up of a Sigma-2 sampling head and an Intelligent Aerosol Drying System (IADS). A dried flow is ensured according to the external temperature, humidity and pressure measured by the weather station, in order to avoid data acquisition errors due to condensation [95]. The number of monitored parameters and the high temporal resolution of this instrument justify its presence inside the cc-TrAIRer.

APM-2

The Air Pollution Monitor APM-2 (Comde-Derenda GmbH, Stahnsdorf, Germany) is an optical particle analyser for the detection of PM10 and PM2.5. The two sampled flows move alternately through the scattered-light photometer unit, where particles are illuminated by a laser diode. The particle size is obtained as a function of the scattered light detected by the photodetector [96].

MicroPNS LVS16

The MicroPNS Type LVS16 (Umwelttechnik MCZ GmbH, Bad Nauheim, Germany) is a PM sequential sampler.
All of the instrument specifications are in accordance with EN 12341:2014. Particle collection takes place automatically through 16 filters with a diameter of 47 mm. An on-field operation by a qualified staff member is required after 16 days to replace the membranes. The sampler collects a single fraction (PM10, PM2.5 or TSP) at a time, depending on the inlet head that is assembled [97]. The collected specimens are weighed and examined in order to perform gravimetric and chemical analyses.

Serinus 40 NOx Analyser

The Serinus 40 (Ecotech Pty Ltd., Knoxfield, Australia) is a gas analyser that uses the gas-phase chemiluminescence method to detect nitric oxide (NO), total oxides of nitrogen (NOx) and nitrogen dioxide (NO2) in the range of 0-20 ppm [98]. The instrument has been tested and found to comply with VDI 4202-1 (2010), VDI 4203-3 (2010), EN 14211 (2012), EN 15267-1 (2009) and EN 15267-2 (2009), according to its US EPA approval (RFNA-0809-186) and EN approval (TÜV 936/21221977/A). The ambient air is collected through the sampling probe described below. The auxiliary pump of the analyser is located outside the trailer to reduce the power needed for air conditioning, while the instrument is placed inside the rack to guarantee suitable operating conditions.

Serinus 10 O3 Analyser

The Serinus 10 (Ecotech Pty Ltd., Knoxfield, Australia) is a gas analyser for the detection of ozone (O3) in the range of 0-20 ppm that uses non-dispersive ultraviolet (UV) absorption technology [99]. The instrument has been tested and found to comply with VDI 4202-1 (2010), EN 14625 (2012), EN 15267-1 (2009) and EN 15267-2 (2009), according to its US EPA approval (EQOA-0809-187) and EN approval (TÜV 936/21221977/C). As for the Serinus 40, the ambient air is collected through the sampling probe described below. Although the instrument is equipped with an internal pump, the optimization of the trailer temperature required relocating the pump outside. Due to the potential influence of ozone on the other data acquisitions, the instrument was placed above the Serinus 40 in the analyser rack.

Sampling Probe

The gas sampling probe (Sartec-Saras Srl, Milan, Italy) collects the ambient air and sends it to the specific pollutant analysers. The air is taken in by the sampling head and moves to the heated probe system, which heats the flow and keeps it above the dew point of the sample to avoid condensation. To prevent the absorption of gases on the inner wall, the internal channel is insulated and made of PTFE, and a suction fan is installed in the lower part to guarantee the required residence time. The distribution system, consisting of a manifold, allows the connection of up to 12 analysers via Teflon pipes.

Davis Vantage Pro 2 Weather Station

The Davis Vantage Pro 2 (Davis Instruments, Hayward, CA, USA) measures the main weather conditions. It includes several sensors and devices gathered in a versatile integrated suite. The anemometer is installed separately and provides both wind direction and wind speed. The main body of the station consists of a rain collector, temperature and humidity sensors placed inside radiation shields, and solar radiation and UV sensors. The weather station also provides other relevant indices, such as the wind chill, heat index, THW index, THSW index, evapotranspiration and dew point [100].
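The station reports the dew point directly; purely to illustrate how such an index follows from the raw temperature and humidity readings, here is a minimal sketch using the Magnus approximation (the coefficients are one common published choice, an assumption on our part, not a detail of the Davis instrument):

```python
import math

def dew_point_c(temp_c: float, rh_percent: float) -> float:
    """Dew point in degrees C from air temperature and relative humidity,
    via the Magnus approximation (valid roughly for -45 to 60 C)."""
    a, b = 17.62, 243.12  # assumed Magnus coefficients over water
    gamma = math.log(rh_percent / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

print(f"{dew_point_c(25.0, 60.0):.1f} C")  # about 16.7 C at 25 C, 60% RH
```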
Sound-Level Meter (Noise)

The sound-level meter used for the acoustic acquisitions is a Brüel & Kjær (Nærum, Denmark) type 2250. The device is class 1 according to international standards. The sound pressure is collected by the microphone, which is shielded by a windscreen, and is sent to the microphone preamplifier. The entire detection assembly can be directly attached to the handheld analyser or, mounted on a small tripod, it can be extended up to 100 m away. In this way, the sound-level meter can also be easily accommodated in the cc-TrAIRer design when a remote location for the microphone is needed. The sound-level meter records the A-weighted sound pressure level to obtain the time history of acquisition campaigns [101].

Power Supply and Air Conditioning System

The power autonomy of the cc-TrAIRer is ensured by the possibility of connecting it to the traditional power network, which should supply the required power of 3 kW. When the electrical grid is not available, the photovoltaic off-grid system is sufficient for all operational activities from a power point of view. To meet electrical safety standards, the installed contactor can immediately cut off the power supply in case of system or single-instrument failures. It can also intervene if the fire-protection temperature limit is exceeded. The photovoltaic system is made up of seven high-efficiency solar panels (SunPower Maxeon 3, 400 W) with a parallel connection. They can also be used in a reduced configuration, which guarantees the proper functioning of some panels when the others are not expanded. The MPPT solar charge controller (SmartSolar MPPT 150/60-Tr, Victron Energy, Almere, The Netherlands) manages the panels' inverter connections and maintains operation at the maximum power point in order to improve the system efficiency. The multifunctional inverter/charger (MultiPlus-II 48/5000/70-50, Victron Energy, Almere, The Netherlands) charges the battery pack at 48 V DC and produces 230 V AC to supply the device loads (including instrumentation and air conditioning). The power storage apparatus consists of four 48 V/50 Ah lithium batteries (Pylontech US2000B Plus, Pudong, Shanghai, China).

The particle instruments are designed for outdoor conditions, but the gas analysers have an operating temperature range (15-30 °C) that demands an accurate air conditioning design. The cc-TrAIRer was designed to carry out sampling campaigns in remote places with critical thermal conditions (such as mountains or particularly dry climates). A preliminary evaluation of the heat load to be removed from the inside of the trailer was undertaken, converting the energy absorbed by the devices into heat. This led to an estimate of the minimum required cooling capacity in BTU (British thermal units). The internal volume of about 6 m³ and the heat produced by the instruments led to a requirement of about 4000-4500 BTU/h in this study. During the trailer design, some expedient measures were adopted to reduce the heat stress of the devices, which would add to the total cooling requirement. In fact, the analyser pumps were moved outside the trailer to a purpose-built compartment that protects them from the weather. The selected air conditioner (Fujitsu ASYG07KGTA, nominal power: 400-500 W) is quite small and was placed at the front of the trailer in order to distribute the flow homogeneously throughout the trailer without directly affecting the analysers.
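The sizing logic just described can be made concrete with a back-of-the-envelope sketch. The load figures below are illustrative assumptions, not specifications of the actual trailer; the battery pack matches the four 48 V/50 Ah units above, and the conversion uses 1 W ≈ 3.412 BTU/h.

```python
# Hypothetical power and cooling budget for a trailer-style laboratory.

W_TO_BTU_H = 3.412                 # 1 W dissipated ~ 3.412 BTU/h

indoor_loads_w = {                 # assumed draws of devices inside
    "gas analysers": 400,
    "optical PM analysers": 300,
    "heated sampling probe": 300,
    "datalogger, router, PC": 200,
}
ac_w = 500                         # e.g. the 400-500 W unit cited above

indoor_w = sum(indoor_loads_w.values())   # 1200 W of heat to remove
total_w = indoor_w + ac_w                 # 1700 W electrical load

battery_wh = 4 * 48 * 50           # four 48 V / 50 Ah packs = 9600 Wh
autonomy_h = 0.8 * battery_wh / total_w   # assuming 80% usable capacity

cooling_btu_h = indoor_w * W_TO_BTU_H
print(f"load {total_w} W, autonomy {autonomy_h:.1f} h, "
      f"cooling {cooling_btu_h:.0f} BTU/h")
```

Under these assumed loads, the cooling requirement comes out at roughly 4100 BTU/h, within the 4000-4500 BTU/h range estimated above, and the autonomy falls inside the 4-8 h window reported for battery-powered mobile laboratories.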
The external unit of the air conditioner is located outside, near the internal one.

Data Management

To optimize analysis performance and achieve a suitable overview of the parameters, a data management system was designed. The cc-TrAIRer is equipped with an operator station where a central data acquisition computer is placed. All the instruments are connected to the computer through a multiport device for serial RS-232 and LAN transmission. Acquired data are then shared online through an RUT240 LTE router (Teltonika, Kaunas, Lithuania), which exploits a UMTS system. It provides an internet connection with an external SIM to all the equipment through Ethernet cables or a wireless network. Therefore, the data simultaneously collected by all the instruments are generally sent in real time to a server to allow remote monitoring of the parameter trends. A potential internet signal failure is not a limitation of the mobile laboratory, since all the measurements are always stored locally before being sent online.

Data undergo initial pre-processing before reaching the server. Indeed, their structure is organised to guarantee smart visualisation, and data are averaged across a time frame that can be set according to the operator's choice. A downstream tool performs the diagnostic phase of the process, highlighting potential errors during the sampling operation. It also allows the insertion of the data into an integrated database and displays graphical evaluations for subsequent analysis. The datalogger also generates reports on potential instrument or system failures. An alert is sent to the operator on various occasions, such as power breakdowns, temperature range errors, exceedances of critical thresholds or other operational faults. The possible addition of other instruments to the trailer can be easily managed with the central computer of the mobile laboratory, since it can be quickly commanded through the remote connection.

Measurement Applications

The cc-TrAIRer has several potential research applications. It can be used to intervene promptly in emergency situations or to identify the impact of local sources on air quality. Moreover, the laboratory can also monitor the temporal and spatial distribution of pollutants in heterogeneous contexts in long-term campaigns. These different uses have already been documented by several sampling studies conducted by the research group in the last few years (e.g., [102]). These surveys led to an understanding of the necessity of a complete monitoring laboratory for the integrated study of air pollution. In fact, the conception of the cc-TrAIRer was strongly influenced by the aim of bridging the gaps in previous studies and implementing new investigations, such as the simultaneous detection of meteoclimatic conditions and gas concentrations. In this section, the results of some of the most relevant surveys for this purpose are reported. The sampling studies described here analysed PM concentrations using the instruments that were chosen for the mobile laboratory.

Immediate-response air quality monitoring was carried out during an emergency situation in which an industrial site caught fire. The intervention was requested by the local municipality and had to meet their requirements in addition to the conventional sampling points of the public agencies.
The study (5 April 2019-13 May 2019) was performed in two areas of Piedmont (Italy), at the foot of the Alps, with the aim of explaining the hazes observed by citizens, even though the public institutions reported concentration values below the regulatory limits. An assessment of the fire development and of the effects of boundary conditions on the daily variability of haze and pollutant concentrations was conducted with high-frequency measurements. An APM-2 analyser was used to determine PM2.5 and PM10 trends, initially at a distance of about 3.7 km from the burnt site (days 1-2: Cantalupa), as shown in Figure 2. The instrument was then progressively moved closer to the industrial area (2.2 km, days 3-8).

A predominant daily trend influenced by the fire event was identified in the initial sampling period. The most relevant days are shown in Figure 3. The concentration peak detected in the 10 AM-1 PM range was the effect of the mountain and valley breeze, typical of that period of the day due to the local morphology (see Figure 2) [103]. This morning wind contributed to transporting the hazes from the burnt site to the sampling stations, which were at higher elevations than the industrial site. This trend faded over time as the fire was progressively extinguished. As a general remark, although the PM10 daily mean concentration was within the EU air quality standards (2008/50/EC), the identified peaks entailed potential exposure to high PM values. The analysis was preliminary and obtained only the PM distribution, but measurements of trace gas concentrations in comparable fire situations will be carried out with the future application of the cc-TrAIRer.

Another purpose of the APM-2 campaigns was pursued at the mountain site of Salbertrand (Alps, Piedmont, Italy; coordinates: 45.070964, 6.887413). The area was monitored to detect the influence of an anthropic source on the local PM distribution. In fact, a wide area close to the monitoring point was designated for the activities of sand and aggregate industries, with intense material handling. The instrument was placed about 600 m from this area for 5 weeks (25 September 2020-2 November 2020). The extremely high time resolution of the analyser made it possible to accurately assess the concentration distribution during the day. Figure 4 shows the PM10 trend over the entire period. The working activities had a strong influence on the coarser PM fraction, resulting in a repeated morning peak caused by particle suspension.

To evaluate the PM2.5 and PM10 concentration distribution in different environmental conditions, four monitoring campaigns were performed from June 2019 to March 2020. The sampling points were located in suburban areas at gradually increasing distances from the urban station of Politecnico di Torino, in the direction of the city of Milan (northeast). The array runs along the A4 highway in a flat area of the Po Valley megacity, far from mountains or other high ground. APM-2 analysers were located in the towns of Volpiano, Mazzè and Montanaro, identified as places with comparable boundary conditions. The details of the individual sampling points, concerning the time periods and concentration values, are listed in Table 5. The results were also compared with the PM trends from the fixed station of Politecnico di Torino in order to observe the difference between urban and suburban areas.
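Purely as an illustration of how such an urban-suburban comparison can be computed, a minimal sketch follows; the file and column names are hypothetical, and the actual campaign data are not reproduced here.

```python
# Align two PM time series on a common grid and compute their difference.

import pandas as pd

urban = pd.read_csv("polito_fixed_station.csv",
                    parse_dates=["time"], index_col="time")
suburb = pd.read_csv("suburban_apm2.csv",
                     parse_dates=["time"], index_col="time")

# Resample both series to hourly means before differencing, so that
# instruments with different native time resolutions stay comparable.
delta = (suburb["pm10"].resample("1h").mean()
         - urban["pm10"].resample("1h").mean())
print(delta.describe())  # a mean near zero indicates similar levels
```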
Without considering the absolute values, the overall distribution of PM concentrations shows similar trends at urban and suburban sites, suggesting a comparable response to the weather conditions as well. Figure 5 shows the difference between the data acquired by the two APM-2s at the external sites and the fixed station. The almost null trend of the PM2.5 delta (Figure 5) confirms the results of previous studies [104] regarding the homogeneous spatial distribution of the fine fraction in comparison with the coarse one. PM2.5 values were, in fact, similar in both environmental contexts and in different seasonal conditions. On the contrary, the PM10 trend was more affected by the local situation. As expected, the fixed-station values were generally higher than the others, but the Volpiano campaign presented an opposite trend. In this case, the sampled data could not properly be considered background conditions because of some nearby anthropic activities that had not been considered in the instrument positioning. The material handling of an aggregate industry about 400 m away and the people frequently present at the site could have contributed to the high mean PM10 value. Finally, the results of this spatial assessment of the concentration distribution suggest a stronger dependence of the coarser PM fraction on local sources. For this reason, a suitable preliminary study of the sites should be performed to avoid issues in the identification of background trends.

The PM assessments were carried out thanks to the great versatility of the APM-2 analyser in terms of repositioning and independence from auxiliary elements. The specific thermohygrometric operating conditions of gas analysers and the requirements for sampling accuracy have, until now, made gas measurement campaigns with easily movable instruments difficult in remote areas. The cc-TrAIRer can guarantee the sampling characteristics needed for all instruments to perform combined gas and particle collections, including those concerning precursor pollutants.

Figure 5. Differences between the data acquired by the two APM-2s at the external sites and the fixed urban station located at Politecnico di Torino (Turin, Italy). An almost null trend for the PM2.5 delta was obtained. The PM10 trend was more affected by the local situation. PM10 suburban station values were generally lower than those of the fixed station, but the Volpiano campaign presented an opposite trend because of boundary conditions.

Conclusions

We designed and developed a non-motorised, instrumented trailer, the cc-TrAIRer, for high-quality research on air quality and climate change. The results shown in Section 4 prove the usefulness of this mobile laboratory for the simultaneous assessment of several air pollutants along with the mutual correlation of meteoclimatic and environmental conditions. Thanks to its relatively small dimensions, the trailer is easily transportable to narrow areas, which are of scientific interest but generally not assessed since they are beyond the measurement needs of public agencies. The photovoltaic system enhances its versatility in terms of power supply, allowing sampling campaigns where the relevant infrastructure does not exist. The Alps are a clear example of an area where applications are laborious but, at the same time, key research targets, since they are more strongly exposed than other parts of the Earth to the effects of climate change. Thanks to the remote management of data and systems, technical field interventions can be minimised.
The instruments were selected by evaluating their suitability for the trailer's purpose, considering the measured parameters, dimensions, operating conditions, ordinary maintenance and temporal resolution. Given the listed advantages, the cc-TrAIRer has potential uses in different research studies, such as those on the temporal and spatial distribution of air compounds, on source impacts or on emergency situations, with short- or long-term campaigns. After a preliminary design of the sampling points, an examination of the investigated area should be carried out to confirm the proper trailer location, taking into account the surrounding environmental dynamics and, at the same time, the accessibility and safety requirements of the trailer. Further sampling surveys with the cc-TrAIRer will analyse the reciprocal influences of air compound concentrations, weather and atmospheric and environmental conditions in different land-use contexts, providing considerable support to the investigation of global aspects of climate change.
Measuring the Strength of the Evidence

Many proponents of p-values assert that they measure the strength of the evidence with respect to a hypothesis. Many proponents of Bayes Factors assert that they measure the relative strength of the evidence with respect to competing hypotheses. From a philosophical perspective, both assertions are problematic because the strength of the evidence depends on auxiliary assumptions, whose worth is not quantifiable by p-values or Bayes Factors. In addition, from a measurement perspective, p-values and Bayes Factors fail to fulfill a basic measurement criterion for validity. For both classes of reasons, p-values and Bayes Factors do not validly measure the strength of the evidence.

Introduction

Many researchers, statisticians, and mathematicians have suggested that the probability of a finding (or one more extreme), given a hypothesis (the familiar p-value), can be used as a measure of the strength of the evidence provided by that finding. In fact, no less an authority than Ronald Fisher argued that position (e.g., 1925; 1973) [1]. Although Bayesians eschew p-values, they favor Bayes Factors, which also concern probabilities of findings given hypotheses. To compute a Bayes Factor, one divides the probability of the finding given one hypothesis by the probability of the finding given a competing hypothesis. Although there are many differences between aficionados of p-values and aficionados of Bayes Factors, both camps share a basic assumption, which is that the strength of the evidence can be captured by conditional probabilities of data given hypotheses. Our goal is to question this widely held assumption. We present two categories of arguments. The first category contains arguments based on philosophical considerations. The second category pertains to the specific issue of measurement, and whether conditional probabilities fulfill basic measurement requirements.

Philosophical Considerations

A long-known but underappreciated aspect of theory testing is that scientific theories contain non-observational terms. Consider Newton's famous equation: force = mass × acceleration. As Nobel Laureate Leon Lederman [2] indicated, these are non-observational terms. Even mass is a non-observational term that should not be confused with weight, an observational term. The difference becomes obvious upon considering that the same object would have the same mass on Earth or Jupiter, but would have different weights on the two planets. To make the connection between mass and weight, it is necessary to have auxiliary assumptions that relate mass to weight on the planets of interest.

In general, researchers who wish to test theories attempt either to falsify or to verify them. In either case, it is necessary to address the fact that theories contain non-observational terms. Somehow, non-observational terms in theories must be brought down to the level of observation to enable researchers to perform theory tests. This is accomplished by combining the theory with auxiliary assumptions, to derive empirical hypotheses with observational terms. Because, in contrast to theories, empirical hypotheses have observational terms, they are amenable to testing. Let us consider the traditional falsification perspective [3].
A naïve view might be that a single contrary finding disconfirms the theory, by the logic of modus tollens. But as Lakatos [4] stated particularly clearly, a problem with this naïve view is that it starts from the premise that the empirical hypothesis derives from the theory, and only from the theory. But we have seen that empirical hypotheses derive from combinations of theories and the auxiliary assumptions used to obtain observational terms in empirical hypotheses. As a logical matter, an empirical defeat disconfirms the conjunction of the theory and the auxiliary assumptions, which means that either the theory or the auxiliary assumptions (or both) are disconfirmed. There is no logically valid way to determine which alternative is the case, and as Duhem [5] and Lakatos [4] discussed in detail, it often is not straightforward to make the determination in practice.

Verification fares no better. Phlogiston theory, for example, enjoyed many empirical victories, but these could have occurred for reasons other than the truth of phlogiston theory, as Lavoisier eventually demonstrated. The main problem in this case was not auxiliary assumptions (though there were problems there too that Lavoisier fixed) but rather that empirical victories fail to provide a valid proof of the theory they were designed to serve, as they could occur for a reason other than the theory. Of course, modern researchers are aware of this, but nevertheless insist that empirical victories increase the probabilities of the theories they serve. Under the condition that auxiliary assumptions are ignored, this latter insistence is valid. However, if auxiliary assumptions are considered, Trafimow [6] has provided detailed analyses showing that empirical victories can increase or decrease theory probabilities.

The latter may seem counterintuitive, but an example might be the death-thought-suppression-and-rebound assumption that is an auxiliary assumption of terror management theory in social psychology. The problem is that most terror management theory predictions only work when there is a delay between making mortality salient and measuring a wide variety of dependent variables. The death-thought-suppression-and-rebound assumption is that people suppress mortality salience initially, but it rebounds to become much more important during a delay. Thus, because of the rebound, making mortality salient works well after a delay but does not work well without a delay. Thus, using this auxiliary assumption, the fact that terror management theory effects work well when there is a delay, but do not work well when there is no delay, seems to strongly support terror management theory. However, Trafimow and Hughes [7] showed this auxiliary assumption to be wrong; mortality salience is greater when there is no delay than when there is a delay. Therefore, terror management theory effects should work best when there is no delay, rather than when there is a delay, which is the exact opposite of what is found in the voluminous literature on terror management theory findings. The poor auxiliary assumption turned previous evidence that allegedly favored the theory into evidence that strongly militates against it. None of this is to say that researchers should not try for empirical victories for theories they wish to support, or for empirical defeats for theories they wish to disconfirm; only that the strength of the evidence such empirical victories or defeats provide depends heavily on the worth of the auxiliary assumptions used to derive empirical predictions from theories.
Neither p-values nor Bayes Factors measure the worth of these auxiliary assumptions, and therefore they cannot provide a good measure of the strength of the evidence. To see clearly that neither p-values nor Bayes Factors can measure the worth of auxiliary assumptions, consider an example where a researcher is interested in whether attitudes cause behavioral intentions. Attitudes and behavioral intentions are non-observational terms, so it is necessary to make auxiliary assumptions to bring attitudes down to the level of a manipulation (e.g., that the persuasive essay used in an experiment really does manipulate relevant attitudes) and to bring behavioral intentions down to the level of a measure (e.g., that the items used in the behavioral intention scale really measure relevant behavioral intentions). Note that the essay used and the items used are reasonably observable, as they can be read by anyone with passable vision who knows the language. Additional auxiliary assumptions might be that the sample is randomly drawn from the population of interest, that the randomization process is successful, that a large assortment of nuisance factors does not matter (e.g., the time of day does not matter, the color of the experimenter's clothing does not matter, and so on), and many others. Clearly, the worth of auxiliary assumptions is crucial for the strength of the evidence, yet p-values and Bayes Factors are incapable of measuring their worth.

But perhaps an argument can be made in a more sophisticated way. For example, Chow [8] has suggested that theory testing can be considered in a cascading manner. There is a theory to be brought down to the level of an empirical hypothesis. In turn, the empirical hypothesis needs to be brought down to the level of a statistical hypothesis. The statistical hypothesis, though far from definitive, is a necessary precursor to testing the theory. Thus, if one believes that p-values or Bayes Factors do a good job of measuring the strength of the evidence with respect to statistical hypotheses, they might be said to have value with respect to assessing the strength of the evidence more broadly. As will become clear in the following section, p-values and Bayes Factors fail to meet standard measurement criteria. Therefore, they are not good measures of the strength of the evidence, even with respect to statistical hypotheses (never mind empirical hypotheses or theories). But for the present, let us accept the wrong premise anyhow.

Returning to the example of attitudes causing behavioral intentions, suppose the researcher performs an experiment using a persuasive essay to manipulate attitudes and anticipates an effect on behavioral intentions, measured using items on a behavioral intention scale. The empirical hypothesis is that randomly assigning participants to read or not read the persuasive essay should influence scores on the behavioral intention scale. The statistical hypothesis is that the population mean for the behavioral intention scale in the persuasive essay condition is greater than the population mean for the behavioral intention scale in the control condition. Note how far the statistical hypothesis is not only from the empirical hypothesis, but especially from the base theory that attitudes cause behavioral intentions. Worse yet, the researcher who computes a p-value does not even test the researcher's statistical hypothesis, because the p-value is based on the null hypothesis that the populations for the two conditions are the same.
We emphasize that the null hypothesis is not the researcher's statistical hypothesis, but rather a different statistical hypothesis. The poor logic of making inferences about the researcher's statistical hypothesis based on a p-value that tests the null hypothesis has been covered by many others and need not be elaborated here.

Let us pause and summarize. There is a theory with non-observational terms, and auxiliary assumptions are used to bring it down to the observational level expressed via an empirical hypothesis. In turn, the empirical hypothesis is transformed into a statistical hypothesis for increased specificity. But the researcher who computes a p-value does not even test the statistical hypothesis. Instead, she tests the null hypothesis. Thus, she does not measure the strength of the evidence for her statistical hypothesis, nor her empirical hypothesis, nor her theory. A counterargument might be that the researcher could specify a range hypothesis that is closer to the researcher's actual empirical hypothesis, and a one-tailed p-value can be computed based on the range [9,10]. An obvious problem here is that there is no way to calculate the probability of the finding, given a range null hypothesis, unless one knows the prior probability distribution. The Bayesian way around this problem is to impose an arbitrary or subjective prior probability distribution and integrate across it, whereas the NHST way is to maximize [11]. Maximization has the advantage of guaranteeing that the resulting p-value is not smaller than it should be, but it has the disadvantage that the resulting p-value may be slightly larger than it should be, or immensely larger than it should be, or anywhere in between. If the goal were to control the Type I error rate, maximization might make sense because the researcher could be assured of not committing a Type I error more than 5% of the time; however, the present issue is not about Type I error but rather about using p-values to measure the strength of the evidence. Because the researcher who maximizes has no way of knowing how far off she is from the true value, it is immediately obvious that p-values for range null hypotheses fail to validly measure the strength of the evidence. Maximizing constitutes an admission that one does not have a precise measure of the strength of the evidence.

What about Bayes Factors? In some ways, Bayes Factors are superior to p-values. For example, suppose that one obtains p = .05. There is no logical way to make an inverse inference about the probability of the null hypothesis, given that p = .05, and so the p-value is not particularly useful. According to Kass and Raftery [11], the probability of the data given a hypothesis is useless information if one does not know the probability of the data with respect to a competing hypothesis. In contrast, a Bayes Factor gives the probability of the evidence with respect to two competing hypotheses, so that at least the researcher knows whether the evidence is more likely under one hypothesis than under the competing one. In addition, in the Bayesian scheme, it is possible to handle statistical hypotheses that are not specified precisely, without resorting to a null hypothesis. For example, a researcher could test competing statistical hypotheses that the effect of the essay manipulation will be positive (experimental condition mean > control condition mean) or negative (experimental condition mean < control condition mean).
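As a minimal illustration of the arithmetic, here is a sketch of a Bayes Factor for two point hypotheses on binomial data. This is a deliberate simplification, since the directional hypotheses just described would require integrating a prior over each range, and the numbers are invented.

```python
# Bayes Factor for two point hypotheses about a binomial success rate.

from scipy.stats import binom

k, n = 14, 20          # e.g. 14 "persuaded" participants out of 20
h1, h2 = 0.7, 0.5      # competing point values of the success rate

bf = binom.pmf(k, n, h1) / binom.pmf(k, n, h2)
print(f"BF = {bf:.2f}")  # > 1 favours H1, < 1 favours H2
```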
However, a disadvantage of the Bayesian approach is that one needs to know the prior probability distribution to compute Bayes Factors for continuous data. For a Bayesian, this is a subjective or arbitrary process, with different Bayesians suggesting different types of prior distributions (uniform, Cauchy, and so on). This disadvantage, arguably, is partially mitigated by the possibility of performing sensitivity analyses. Another disadvantage is that Bayes Factors are very sensitive to precisely how the competing statistical hypotheses are stated [12]. In addition to the foregoing example of a positive versus a negative statistical hypothesis, there could be positive versus zero, extremely positive versus mildly positive, extremely positive versus everything else, and so on. And within each of these general possibilities, there are varieties of ranges for both statistical hypotheses that can be specified. Seemingly small differences in how statistical hypotheses are specified may strongly influence the Bayes Factor that is obtained. More generally, then, Bayes Factors necessitate two largely arbitrary or subjective decisions: which prior probability distribution should be used, and how should the competing statistical hypotheses be specified? Our argument is not that these are fatal for using Bayes, or even for using Bayes Factors. Rather, it is that these arbitrary or subjective decisions are problematic for Bayes Factors being a valid measure of the strength of the evidence. The best one could say (and we will see later that even this does not work) is that Bayes Factors give the strength of the evidence with respect to: a) one way of stating a statistical hypothesis, b) one way of stating the competing statistical hypothesis, and c) a particular choice of prior probability distribution.

Basic Criteria for Valid Measurement

The focus of this section is on the reliability and consequent validity of p-values. The subsections presented below concern the attenuation of validity due to unreliability and the increase in statistical regression due to unreliability. There also will be a subsection showing that the reliability of p-values is low, thereby calling their validity, as a measure of the strength of the evidence pertaining to the statistical hypothesis, strongly into question.

Attenuation of Validity Due to Unreliability

It is a truism that measures should be valid and reliable. A measure is valid if it measures that which it is supposed to measure, but determining this is epistemologically complex, particularly as there is much debate about different ways to conceptualize validity, especially construct validity. Fortunately, such complexity is unnecessary at present. Everyone can agree that, whatever else matters for validity, minimum requirements are a) the measure correlates with something (commonly termed predictive or concurrent validity, depending on the time frame of the measures of the two variables) and b) the measure is reliable. There is a classic theorem that relates validity, in the minimal correlative sense (concurrent or predictive), with reliability [13].
It is provided below as Equation 1, where $r_{XY}$ is the observed correlation that can be expected between measures of two variables (observed validity), $\rho_{XY}$ is the correlation between true scores of the measures of the two variables (true correlative validity, imagining perfect reliability), $r_{XX}$ is the reliability of the measure of the variable designated as X, and $r_{YY}$ is the reliability of the measure of the variable designated as Y:

$r_{XY} = \rho_{XY} \sqrt{r_{XX} r_{YY}}$  (1)

Before continuing the main thread of the argument, it is important to consider two points with respect to the classical theory and Equation 1 [14,15]. First, a person's "true score" on a measure is the expectation across a hypothetical set of many test-taking occasions. In this hypothetical scheme, the person takes the test, is mind-wiped to return to the same state as before taking the test, takes the test again, and so on, ad infinitum. Thus, a correlation between true scores is not the correlation between latent variables across participants, but rather the correlation between expectations on two measures across participants. It is not necessary to assume anything about latent variables. Thus, Lord and Novick [14] described the classical theory as "weak" in the sense of making minimal assumptions, relative to more powerful modern measurement theories such as generalizability theory and item response theory. However, an advantage of the weak assumptions of the classical theory is that they are subsumed by more modern measurement theories [1], and so any conclusion drawn from the classical theory also would be drawn from one of the more modern and powerful theories, whereas it is not necessarily the case that conclusions drawn from a more powerful theory would be drawn either from the classical theory or from another powerful theory. Consequently, in those cases where the weak assumptions of the classical theory nevertheless suffice, an advantage is that there is no necessity to use stronger assumptions that are more likely to be wrong, misapplied, or not to fit with alternative measurement theories.

Second, in the context of the classical theory, the validity of a measure is correlative. An unavoidable consequence is that there is no way to obtain a pure validity coefficient of a measure, as the correlation inevitably will depend on the reliability of the measure of concern, the reliability of the other measure, and the relationship between the two measures. However, it is possible to imagine that the other variable has perfect reliability, so that the product of the reliabilities of the two measures equals the reliability of the measure of concern. And it also is possible to assume various true validities and use Equation 1 to map out the consequences of unreliability of the measure of concern on the observed validity. Again, we emphasize that validity in this sense is correlative, and concerned with measures rather than with latent variables. Thus, it is a minimal type of validity that should not be confused with construct validity.

Equation 1 shows how the observed validity (in the correlative sense) attenuates from the true validity as the reliability of the measure decreases (if the reliability of the other variable is set at 1). As an extreme example, suppose that the measure has reliability = 0. In that case, the correlation one can expect to observe also will equal 0. Clearly, then, reliability sets an upper limit on validity. Although a reliable measure may or may not be valid, it is certain that an unreliable measure is not valid.
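A short worked illustration of Equation 1, with the reliability of the other measure fixed at 1 as in the text (the true validity of .6 is an arbitrary example value):

```python
# Attenuation of observed validity as reliability decreases (Equation 1).

true_r = 0.6                        # assumed true correlative validity
for rxx in (1.0, 0.8, 0.5, 0.2, 0.0):
    observed = true_r * (rxx * 1.0) ** 0.5
    print(f"reliability {rxx:.1f} -> observed validity {observed:.2f}")
```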
In Figure 1, the product of the reliabilities of the measures varies along the horizontal axis, from 0 to 1. In addition, the true correlation is set at .2, .4, .6, .8, and 1.00. Thus, the observed validity, along the vertical axis, is a function of the product of the reliabilities of the measures (or just the reliability of the one measure, if the reliability of the other measure is set at unity) and the true correlation. Considering each curve in Figure 1 from right to left, the figure illustrates how unreliability attenuates observed validity. Because of this, substantive researchers usually set .8 or .7 as lower limits for "acceptable" reliability. We shall see later that p-value reliability is much less than .8 or .7.

Increased Statistical Regression Due to Unreliability

Many have pointed out that p-values have a sampling distribution, just like any other statistic [16,17]. A consequence of this fact is that p-values are subject to the phenomenon of statistical regression, sometimes termed regression to the mean. Because obtaining p-values less than .05 is tantamount to a requirement for publication, the phenomenon of statistical regression renders replication problematic. Low p-values in original published research should be expected not to replicate, because of regression to larger p-values in replication attempts [16,17]. Obvious as the foregoing argument is, it nevertheless has not had much effect on statistical practice in the sciences. One reason might be that nobody has ever taken the trouble to calculate the extent of the effect, thereby rendering the argument too abstract to induce substantive researchers to change their scientific practices. The regression calculations are performed here using Equation 2, the standard formula describing statistical regression, where $Z$ represents an individual score, $\bar{Z}$ the mean score of the population, and $r_{ZZ}$ the reliability of the dependent variable at the population level:

$\hat{Z}' = \bar{Z} + r_{ZZ}(Z - \bar{Z})$  (2)

To apply Equation 2 to p-values, it is necessary to consider the reliability of p-values. It is helpful to imagine a population of possible original studies, as well as a second population of replication studies, with p-values associated with each original study and with each replication study. In this ideal universe, where each replication study corresponds to an original study, it would be theoretically possible to obtain a correlation coefficient representing the strength of the relationship between p-values associated with the cohort of original studies and p-values associated with the cohort of replication studies. In short, we would have an estimate of the reliability coefficient of p-values (estimated $r_{ZZ}$). In addition, in this ideal universe, there is no bias towards either high or low p-values, so the mean p-value is .5. If we substitute .5 into Equation 2, Equation 3 follows:

$\hat{Z}' = .5 + r_{ZZ}(Z - .5)$  (3)

The main difficulty with applying Equation 3 to p-values is that it is unclear what the reliability of p-values happens to be. There are two obvious ways to address the difficulty. First, it is possible to let the reliability vary between 0 and 1 to determine the effect of statistical regression in general. Second, we can make use of actual data, to be described later. To commence with a general demonstration, imagine that the p-value obtained in a study that has just been published is .05 (this is Z in Equation 3). The goal is to use Equation 3 to make the best prediction of the p-value that can be expected in a replication study.
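A worked illustration of Equation 3 for an original p-value of .05, across several candidate reliabilities; the value .004 anticipates the estimate derived from the Reproducibility Project data below, and with reliability .9 the sketch reproduces the .095 discussed next.

```python
# Regression-based prediction of a replication p-value (Equation 3).

Z, mean_p = 0.05, 0.5               # original p-value; idealized mean p
for r_zz in (0.9, 0.5, 0.1, 0.004):
    predicted = mean_p + r_zz * (Z - mean_p)
    print(f"reliability {r_zz}: predicted replication p = {predicted:.4f}")
```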
Figure 2 illustrates how p-values much larger than .05 can be expected upon replication if the reliability of p-values is low, and that the problem is increasingly alleviated as the reliability of the p-values increases. However, even if we assume, unrealistically, that the p-value reliability is .9, statistical regression nevertheless implies that the best prediction for the p-value to be obtained in a replication study is .095 rather than the hoped-for .05. We hasten to add that there is no implication that lower p-values are impossible in replication studies, only that the expected value is .095. And matters worsen very substantially as one moves from right to left in Figure 2. Thus, one would have to be an extreme optimist to assume that p-values in replication experiments would be likely to be close to original p-values.

The Open Science Collaboration Reproducibility Project and the Reliability of p

The most systematic data that are available on the issue of replication can be obtained from the Open Science Collaboration Reproducibility Project. Researchers associated with this project replicated many studies published in top psychology journals, and anyone can download an EXCEL file from their website. From the present perspective, one complication is that although exact p-values were presented for the replication cohort of studies, inexact p-values were presented for the original cohort of studies (e.g., p < .05 rather than p = .023). Fortunately, the data file included test statistics (F, t, and so on) and degrees of freedom, so that EXCEL could provide exact p-values. With exact p-values having been obtained for both cohorts of studies, it only remained to have EXCEL provide the correlation between the two columns of p-values. The correlation is .004. This is consistent with the general tenor of the article in Science (Open Science Collaboration [18]) indicating skepticism about whether psychology is a replicable science. More to the present point, with a reliability of .004, Equation 2 renders it obvious that the regression value for an original p-value of .05 is close to whatever the mean p-value is [19,20]. In the idealized universe where there is no bias, and so the mean population p-value is .5, the regression p-value is extremely close to that (.4982). If we do not imagine an idealized universe, Equation 2 renders it obvious that the regression p-value will be extremely close to the mean, and very little information is provided by the obtained p-value. And referring to Equation 1, the extremely low p-value reliability indicates that correlative validity is near zero, regardless of anything else. To be fair, the publication process induces factors that likely lowered p-value reliability, such as restriction of range, statistical regression, not having truly random samples, and others [21,22]. However, even making this concession, it seems unavoidable that, at least as far as published p-values are concerned, reliability is low, whatever the reason. And if the reliability of p-values in published studies is low, as it clearly is, there is no reasonable way to support the claim that they validly measure the strength of the evidence even with respect to null hypotheses.
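As a methodological aside, the exact-p recovery step described above, performed in EXCEL in the reanalysis, can be sketched as follows; the statistic values here are invented for illustration.

```python
# Recover exact p-values from reported test statistics and degrees of
# freedom: a two-tailed p from t(28) = 2.10 and a p from F(1, 40) = 4.30.

from scipy.stats import f as f_dist
from scipy.stats import t as t_dist

p_t = 2 * t_dist.sf(2.10, df=28)
p_f = f_dist.sf(4.30, dfn=1, dfd=40)
print(f"p(t) = {p_t:.4f}, p(F) = {p_f:.4f}")
```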
Possibly, the reliability of p-values would be raised if p-values played no role in the probability of acceptance of manuscripts for publication, as this would mitigate restriction of range as a problem [23]. But this solution, though it might improve the optics concerning the reliability of p-values, admits that p-values should not influence the decisions of journal reviewers and editors. This would be quite an admission! Nor do matters improve if we consider Bayes Factors. If conditional probabilities fail, then quotients of conditional probabilities also fail. In fact, matters become even worse, as the unreliability of two conditional probabilities, rather than only one, becomes relevant. Given that attenuation due to unreliability and regression due to unreliability matter for a single conditional probability, they also matter for a quotient of two conditional probabilities [24,25].

Conclusion

In basic science, the goal is to propose and test theories. It is impossible to test theories without making auxiliary assumptions that connect non-observational terms in theories with observational terms in empirical hypotheses. Consequently, the strength of the evidence depends strongly upon the worth of the auxiliary assumptions, which is assessed by neither p-values nor Bayes Factors. A watered-down argument might be that p-values or Bayes Factors are at least good for assessing the strength of the evidence with respect to statistical hypotheses, which admittedly are very far from the theories they are used to test. But even this watered-down argument fails. This is because p-values are computed with respect to the null hypothesis, and not the researcher's empirical hypothesis. Ubiquitously, the empirical hypothesis is inexact, and so it is impossible to form a point statistical hypothesis that can be tested with a p-value. Nor can the problem be solved with range hypotheses, because this requires maximizing the p-value, which is an implicit admission that the computed value is not a precise measure of the strength of the evidence. Nor do Bayes Factors solve these issues. To use Bayes Factors, the researcher must make arbitrary or subjective decisions about the prior probability distribution, how to express one of the statistical hypotheses, and how to express the other. In addition to these considerations, a basic requirement of valid measures is that they must be reliable, but the foregoing section demonstrates that p-values and Bayes Factors fail there too. Conditional probabilities are unreliable, and consequently are strongly subject to attenuation due to unreliability and to regression due to unreliability, two ways of making the same point. Thus, if researchers are to continue to use p-values or Bayes Factors, they cannot justify that use by arguing that they are measuring the strength of the empirical evidence. Other justifications are needed.
Cloudy in the microcalorimeter era: improved energies for Si and S K$\alpha$ fluorescence lines

The upcoming X-ray missions based on microcalorimeter technology require exquisite precision in spectral simulation codes in order to match their unprecedented spectral resolution. In this work, we improve the fluorescence K$\alpha$ energies for Si II-XI and S II-XIII in the code Cloudy. In particular, we provide here a patch to update the Cloudy fluorescence energy table, originally based on Kaastra & Mewe (1993), with the laboratory energies measured by Hell et al. (2016). The new Cloudy simulations were used to model the Chandra/HETG spectra of the High Mass X-ray Binary Vela X-1 previously presented in Amato et al. (2021), showing a remarkable agreement and a dramatic improvement with respect to the current release version of Cloudy (C17.02).

INTRODUCTION

Inner-shell ionization is responsible for some of the most important transitions in the X-ray domain. If one of the inner-shell electrons of an atom or ion is hit by a photon (photoionization) or, to a lesser extent, by an electron (collisional ionization) with energy equal to or higher than its ionization energy, it can be removed from the shell. The vacancy created in this way can be filled by an electron from a higher shell in two ways: the electron can lose energy either by giving it to another electron (an Auger transition, which is radiationless) or by fluorescence, i.e., a radiative transition. The X-ray spectra of a variety of astrophysical sources are rich in fluorescence emission lines of all elements and of ions of all stages.

The launch in the early 2000s of the Chandra and XMM-Newton observatories provided high-resolution X-ray spectra for the first time. The upcoming microcalorimeter-based missions are expected to represent a giant step forward, starting the era of high-precision X-ray spectroscopy. The Hitomi mission (Takahashi et al. 2016) demonstrated the breakthrough capabilities of this technology (see e.g. Simionescu et al. 2019). The next X-ray missions to be launched with microcalorimeters on board, XRISM (Tashiro et al. 2018) and Athena (Barret et al. 2013), will have an energy resolution of a few eV across the whole X-ray band, coupled with a large effective area. The analysis and interpretation of the X-ray spectra provided by these new missions will present unprecedented challenges. Therefore, we have started a process to update the spectral simulation code Cloudy (Ferland et al. 2017) in order to keep up with the spectroscopic requirements of these new X-ray missions (Chakraborty et al. 2020a,b, 2021). In this work, we present a first attempt to update the Kaastra & Mewe (1993) database, used by Cloudy for fluorescence emission. In particular, we consider the experimental data for fluorescent Kα energies for Si ii-xi and S ii-xiii taken by Hell et al. (2016).

Figure 1. Visually co-added MEG ±1 order spectra of the HMXB Vela X-1 at the orbital phase φorb = 0.75 (see Amato et al. 2021). Top: best-fit model with Cloudy (grey solid line), using the improved energies for the Si fluorescence lines, as described in this work (dubbed C17.02+). The specific contributions of each gas component are labelled (red dashed line and blue dot-dashed line), together with the best-fit parameters and 90% confidence level uncertainties (see text for details). Bottom: as above, but with the current version of Cloudy, C17.02.
The low ionization component is here labelled in green, together with the adopted Si Kα lines (from Kaastra & Mewe 1993). For ease of comparison, the improved energies from Hell et al. (2016) are in blue, as in the top panel. Cloudy update The current version of Cloudy, C17.02, uses Table 3 of Kaastra & Mewe (1993) as the main database for fluorescence yields, energies, and numbers of ejected Auger electrons for all elements and ions from Be to Zn. Their calculations were in reasonable agreement with more detailed computations available at that time, but they are now unsuitable for current and future high-resolution spectroscopy. For the update reported in this work, we decided to use the experimental data reported in Hell et al. (2016). In particular, Kα line energies from O-like to Be-like Si and S (i.e. Si vii-xi and S ix-xiii) were taken from their Table 3, which gives centroids for unresolved blends. On the other hand, for lower ionization stages, the line energies are taken from Table 5, where individual values are listed for Si ii-iv, Si v-vi, S ii-vi, and S vii-viii. An applied case: Vela X-1 We used our updated version of Cloudy to compute an Xspec (Arnaud 1996) tabulated additive model for the emission-line spectrum of the High Mass X-ray Binary Vela X-1. In particular, we applied the model to the Chandra/HETG spectrum of Vela X-1 at the orbital phase φ_orb = 0.75 (ObsID 14654; see Amato et al. 2021, for details on the data reduction and the input spectrum for the Cloudy simulations). Here, we restrict the fit to the 6.1-7.2 Å band, where Si lines are present. The resulting best fit is shown in Fig. 1 (top panel). The spectrum is very well modelled by two gas components with different parameters: the higher-ionization component (in red) produces the Si xiii and Si xiv recombination lines, while the lower-ionization component (in blue) mostly reproduces the Si fluorescence lines. The latter, whose energies were updated in this work, nicely match the observed data. For comparison, the bottom panel of Fig. 1 shows the same data, with the same model, but with Xspec tables produced with Cloudy C17.02. While the higher-ionization gas component is roughly unchanged, the lower-ionization component is basically unconstrained, and most of the Si fluorescence lines are not reproduced by the Cloudy additive model. This is due to the inaccurate line energies taken from Kaastra & Mewe (1993) (in green), significantly different from the values reported in Hell et al. (2016) (in blue). Though attempts to model the accreting wind of Vela X-1 with a multi-component plasma have been made in the past (see, e.g., Lomaeva et al. 2020; Amato et al. 2021), this is the first time that two contributions are clearly distinguished. At the orbital phase considered in this work, the neutron star (NS) in the binary system is about to enter the eclipsing phase, moving further along the line of sight. The observer hence has a privileged view of the medium that has just been perturbed by the passage of the NS and photoionised by the X-ray radiation coming from its surface, the so-called photoionization wake.
The different ionization parameters (log ξ1/[erg cm s⁻¹] = 3.94 ± 0.04 and log ξ2/[erg cm s⁻¹] = 3.28 ± 0.05), as well as turbulent velocities (σt,1 = 100 ± 20 and σt,2 = 65 ± 25 km s⁻¹) and bulk velocities (v1 = 60 ± 40 and v2 = 180 ± 60 km s⁻¹), clearly point to the coexistence of two media with different ionization and kinematic properties, very likely given by the photoionization wake embedded in the surrounding wind. CONCLUSIONS We presented an update of the Cloudy fluorescence energy table, originally based on Kaastra & Mewe (1993), with the Si and S laboratory energies measured by Hell et al. (2016). The update can be applied to the current release version of Cloudy (C17.02) via the patch 'Camilloni2021KMupdate.diff', which is posted to the Cloudy user group. This work should be considered as a pathfinder to demonstrate the urgent need for a systematic update of the fluorescence line energies in Cloudy for all elements and ions, since the inaccurate values in C17.02 already affect the modelling of current gratings spectra and will soon become obsolete with the advent of microcalorimeter-based X-ray missions.
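As a schematic illustration of the kind of table update described above (not the actual Cloudy patch), the sketch below shows line energies keyed by (element, ion stage) being overwritten where laboratory measurements exist and left at the original table values otherwise. All numerical energies are placeholders, not the measured values of Kaastra & Mewe (1993) or Hell et al. (2016), and the function name is hypothetical.

```python
# Sketch of a fluorescence-energy lookup patched with lab measurements.
KM93_EV = {            # original table values (placeholders, eV)
    ("Si", 2): 1740.0,
    ("Si", 7): 1742.0,
    ("S", 2): 2307.0,
}
HELL16_EV = {          # laboratory updates (placeholders, eV)
    ("Si", 2): 1739.4,
    ("Si", 7): 1745.1,
}

def patched_energy(element: str, ion: int) -> float:
    """Return the Kα fluorescence energy, preferring lab measurements."""
    return HELL16_EV.get((element, ion), KM93_EV[(element, ion)])

print(patched_energy("Si", 7))   # updated value
print(patched_energy("S", 2))    # falls back to the original table entry
```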
1,818.6
2021-07-05T00:00:00.000
[ "Physics" ]
Metallocene Polyolefins Reinforced by Low-Entanglement UHMWPE through Interfacial Entanglements By introducing low-entanglement UHMWPE, the mechanical properties of polyolefins are improved to varying degrees. For polypropylene, the lack of interaction between UHMWPE and polypropylene results in an unsatisfactory reinforcement effect, and the disentangled state makes it easier for the particles to form defects driven by chain explosion. In contrast, for polyethylene and elastomers containing ethylene segments, low-entanglement UHMWPE plays a better reinforcing role. A series of measurements including scanning electron microscopy (SEM), rheological measurements, differential scanning calorimetry (DSC), and mechanical testing were used to investigate the mechanisms behind the different enhancement effects. The enhancement originates from interdiffusion and entanglement formation of polyethylene segments across the interface, endowing the material with different aggregated and defect structures. For instance, EPDM possesses a higher optimal dosage of UHMWPE particles, reflected in good interfacial interdiffusion with the UHMWPE particles and leading to significantly optimized mechanical performance. Introduction Metallocene-catalyzed polyolefin represents a revolutionary generation of petrochemical products with good toughness, impact resistance, transparency, and low odor [1]. Compared with the commonly used Ziegler-Natta catalysts, the metallocene catalysts possess the advantages of high catalytic activity, more applicable monomers, and a single active site, providing the polymer with uniform comonomer distribution and a reduced fraction of low-molecular-weight chains [2,3]. Owing to the narrow molecular weight distribution (Mw/Mn = 2-3) of metallocene polyolefin, it exhibits good melt processability, high tensile strength, impact strength, and puncture resistance [4]. The metallocene polyolefin family includes different types of polyethylene (PE), polypropylene (PP), and ethylene copolymers. A variety of novel ethylene copolymer elastomers have been developed, including metallocene EPDM, ethylene-octene copolymer (POE), and olefin block copolymers (OBCs), with the advantage of good affinity with conventional PE [5,6]. Ultrahigh molecular weight polyethylene (UHMWPE) is a superior material concerning its exceptional toughness, low abrasion, and high impact resistance [7][8][9]. Nowadays, it is incorporated into polymer composites to enhance mechanical properties such as toughness and tensile strength. The advantage of using UHMWPE particles is that the macromolecular chains at the surface are capable of reptating and entangling with the polymer matrix, an effect that traditional inorganic particles can hardly bring about [10]. Interfacial bonding between a matrix and reinforcing particles is critical in determining the final mechanical properties of polymer blends. Usually, a commercial UHMWPE is synthesized by a Ziegler-Natta catalyst at a high temperature (>60°C), where the chain growth rate is greater than the chain crystallization rate, leading to the formation of many entanglements in the amorphous region [11]. An entangled UHMWPE chain (weight average molecular weight of 10⁶ g/mol) exhibits a terminal relaxation time of 15 h at 180°C according to reptation theory and the tube model [12], which shows that it is difficult for entangled UHMWPE segments to diffuse well into the matrix within the very limited shear rate and processing time.
In contrast, the diffusion mode of the low-entanglement material is different from that of entangled UHMWPE: it proceeds in a chain-explosion mode and is accompanied by fast sideways motion [13,14]. In our research group, Yang et al. found that UHMWPE with different entanglement states has different effects on the structural and mechanical properties of HDPE/UHMWPE blends. UHMWPE in a low-entanglement state relaxes and overlaps the adjacent HDPE chains much more easily, leading to superior mechanical behavior [15]. For metallocene polyolefins, the use of ultrahigh molecular weight polyethylene for reinforcement is a very attractive attempt. To the best of our knowledge, there are hardly any reports concerning the reinforcement of metallocene polyolefins by low-entanglement UHMWPE particles. In addition, there is no study on the enhancement mechanism of UHMWPE-reinforced materials from the perspective of interfacial interdiffusion. Therefore, we select three kinds of metallocene polyolefins containing different fractions of polyethylene segments (0%, 70.5%, and 100%), namely PP, EPDM, and LLDPE. The synthesized nascent UHMWPE particles with a disentangled state, based on our previous work, are used for enhancement [16,17]. We aim to observe the entanglement formation by the low-entanglement UHMWPE particles and the evolution of microstructures and mechanical properties through a series of investigations including scanning electron microscopy (SEM), rheological measurements, differential scanning calorimetry (DSC), and mechanical testing. Blend Preparation. The weight fractions of Dis-UHMWPE particles melt-mixed into the metallocene polyolefin matrix are selected as 0 wt%, 1 wt%, 3 wt%, 5 wt%, 10 wt%, 20 wt%, 30 wt%, and 40 wt%. In addition, 0.6 wt% of antioxidant 1010 is added to prevent oxidative degradation in subsequent experiments. The polymers were blended in a torque rheometer (HAPRO MIX-60, China) at 190°C for 5 minutes at a speed of 60 rpm. These blended samples are denoted as PP/Ux, PE/Ux, and EPDM/Ux, where x is the weight fraction (in wt%) of Dis-UHMWPE particles in the polymer blend. Afterwards, the blends were compressed under 10 MPa at 190°C for 5 minutes to produce samples using a compression machine (XLB-HD, Dongfang Machinery Company, China). Samples were cooled to room temperature, and compressed dumbbell samples and disk samples were used for mechanical tests and rheological tests, respectively. The schematic diagram of blend preparation is shown in Figure 1. Characterizations 2.3.1. Laser Particle Size Analysis. The dimensions of Dis-UHMWPE particles were determined by a laser particle size analyzer (LS-230 Coulter, USA) with ethanol as the dispersion medium. The principle of measurement was laser diffraction: the intensity of scattered light represents the number of particles of a given size, so the particle size distribution of the sample can be obtained by measuring the intensity of scattered light at different angles. The reported results were D10, D50, and D90, the particle sizes at which the cumulative volume distribution reaches 10%, 50%, and 90%, respectively; D50 is also called the average particle size. Microscopic Observation. The morphology of Dis-UHMWPE particles was investigated by optical microscope (Leica DM500) with an HD digital camera (ICC50W).
The Dis-UHMWPE particles were spread evenly on a glass slide and then photographed with the aid of the digital camera. The fracture characteristics were examined by scanning electron microscopy (SEM, JEOL JSM-7500F). The three kinds of blends mentioned above were immersed in liquid nitrogen for 15 minutes and then fractured into two pieces. To enhance the conductivity of the fracture surface of the material, the fractured sections were gold-sputtered before examining the surface morphology. Rheological Measurements. MFR measurement was conducted on a melt flow tester (HS-XNR-400A, Hesheng Company, China). The load during the experiment was 2.16 kg. The piston and the tested polymer were preheated for 5 minutes and the temperature was kept at 230°C. The MFR value was recorded in units of g/10 min. The melt flow index of each sample was tested at least twice, and the average values were taken. The viscoelasticity of the blends was analyzed on a strain-controlled rheometer (HR10, TA Instruments, USA) with a parallel plate of 20 mm diameter. The disks for rheological tests had a fixed diameter of 20 mm and a thickness of 2 mm. The disk between the parallel plates of the rheometer was heated to 190°C under a nitrogen environment, and the rheological measurements were performed in oscillation mode. After waiting for 200 seconds to ensure the thermal stability of the sample, the rheological measurements were started. Frequency sweep tests of the blends were performed over a frequency range from 100 to 0.01 Hz with a strain amplitude of 1.0% in the linear viscoelastic regime. The shear rate applied to the samples was linearly increased with time from 0 s⁻¹ to 10 s⁻¹ or 18 s⁻¹, respectively. All samples were treated for 360 s. For the sake of convenience, the samples were named by the content of Dis-UHMWPE particles and the termination shear rate. For instance, PP/U3-10 denotes that the content of Dis-UHMWPE particles was 3 wt% and the termination shear rate was 10 s⁻¹. Oscillation time sweep was carried out with a strain amplitude of 1.0% and a frequency of 5 Hz after the shear modification to track the reentanglement process of the blends. Thermal Behaviors. Thermal behaviors of the samples were recorded on a differential scanning calorimeter (DSC) (TA DSC25, USA) in a nitrogen atmosphere. The samples were heated from 30°C to 190°C at a ramping rate of 10°C/min. Afterwards, the temperature was held for five minutes to eliminate thermal history, and the samples were cooled to 30°C at a rate of 10°C/min. The temperature scan was repeated over the heating and cooling range between 30°C and 190°C at 10°C/min as the second cycle. For EPDM and LLDPE, the crystallinity of LLDPE (X_cPE) was calculated by X_cPE = ΔH_m/ΔH_LLDPE0 × 100%, where ΔH_m is the melting enthalpy of the samples and ΔH_LLDPE0 is the melting enthalpy of fully crystalline polyethylene (293.0 J/g) [18]. For PP blends, the crystallinity of PP was estimated from the second cooling curve using X_cPP = ΔH_mPP/(ΔH_PP0 × ω_PP) × 100%, where ΔH_mPP is the enthalpy of cooling crystallization of PP during the second cooling process, ΔH_PP0 is the melting enthalpy of 100% crystalline PP, taken to be 209 J/g [19], and ω_PP is the weight fraction of the PP component in the blend. Similarly, the crystallinity of UHMWPE in PP blends was estimated using X_cUHMWPE = ΔH_mUHMWPE/(ΔH_LLDPE0 × ω_UHMWPE) × 100%, where ΔH_mUHMWPE is the enthalpy of cooling crystallization of UHMWPE during the second cooling process and ω_UHMWPE is the weight fraction of the UHMWPE component in the blend.
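The crystallinity formulas above can be captured in a short sketch; the normalization by the component weight fraction follows the formulas given in the text, and the enthalpy values in the example calls are hypothetical, not measured data.

```python
# Sketch of the DSC crystallinity calculation for blend components.
DH_PE0 = 293.0   # J/g, melting enthalpy of fully crystalline polyethylene
DH_PP0 = 209.0   # J/g, melting enthalpy of 100% crystalline polypropylene

def x_c(dh_measured: float, dh_100: float, w_component: float = 1.0) -> float:
    """Crystallinity (%) of one component; dh_measured is per gram of blend,
    so it is normalized by the component's weight fraction in the blend."""
    return 100.0 * dh_measured / (dh_100 * w_component)

# e.g. a hypothetical PP/U10 blend: 90 wt% PP, 10 wt% UHMWPE
print(f"X_c(PP)     = {x_c(26.0, DH_PP0, 0.90):.1f} %")
print(f"X_c(UHMWPE) = {x_c(9.5, DH_PE0, 0.10):.1f} %")
```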
Mechanical Tests. The tensile tests were carried out on an electromechanical universal test system (Model 5566, Instron, USA) at room temperature (25°C) with a tensile speed of 50 mm/min. The dumbbell specimens were produced with a length, width, and thickness of 30 mm, 5 mm, and 2 mm, respectively. For each group of tests, more than four samples were tested, and the average values and standard deviations were recorded. The hardness of EPDM/U, PP/U, and LLDPE/U was measured using a Shore hardness tester (Shore "D"). The specimens were placed on a flat plane. The indenter of the hardness tester was then pressed onto the samples without any vibration, making sure that it was parallel to the surface. The values must be recorded within 1 second of full contact between the indenter and the sample. Each type of sample was measured five times at different positions, and the average values were taken. Results and Discussion 3.1. Morphology and Distribution of Dis-UHMWPE Particles. The mean diameters of Dis-UHMWPE particles investigated by laser particle size analysis and their morphology are shown in Figure 2. The mean diameter D50 of the Dis-UHMWPE particles is close to 135 μm, which corresponds well with the images taken under an optical microscope. Dis-UHMWPE nascent particles of different sizes are typically "grape-shaped" and are directly composed of loosely stacked nodular particles [20]. Compared with highly entangled UHMWPE, the nodular particles of Dis-UHMWPE are generally smaller, and different numbers of nodular particles agglomerate to form new particles of different sizes. These UHMWPE particles were incorporated into different metallocene polyolefin matrices. The SEM images of fractured sections of the melt-processed blends are illustrated in Figure 3. After melt processing, the Dis-UHMWPE particles have undergone a significant evolution, which reflects the biphasic miscibility of the blends and the distribution of UHMWPE particles. When the concentration is low, a relatively homogeneous system is formed without observable particles. For PP/U, obvious particles appear when the UHMWPE content is 3 wt%. As the content of UHMWPE increases, numerous particles gradually appear on the surface, indicating deteriorated miscibility, especially when 10 wt% of Dis-UHMWPE particles are incorporated, with a volume average diameter of 0.9 μm. In contrast, the frequency of appearance of UHMWPE particles in EPDM and LLDPE is significantly lower than in the PP/U samples, even when the added content of Dis-UHMWPE particles reaches 10 wt%. Meanwhile, the volume average diameter of the UHMWPE particles is larger in EPDM, reaching 4.1 μm, owing to the different surface tension and rheological properties of the blending systems. The fracture section of LLDPE/U in Figure 3 presents a clear three-dimensional network structure. When the added content of Dis-UHMWPE particles exceeds 5 wt%, as shown in Figures 3(c5) and 3(c10), obvious particles begin to appear on the fractured surface. 90% of the UHMWPE particles are concentrated in the 0.5-1.0 μm range, with a volume average of 0.6 μm, exhibiting good mixing characteristics. The SEM results intuitively provide a judgment of the biphasic miscibility. Based on the above observations, it is concluded that when the Dis-UHMWPE particles are mixed with the polyolefin, the nodular particles separate and physically interact with the matrix, as schematically shown in Figure 3(e).
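As a side note on the particle-size statistics quoted here and defined in Section 2.3.1, the sketch below shows how D10/D50/D90 can be read off a cumulative volume distribution by linear interpolation; the size bins and volume fractions are invented purely for illustration.

```python
# Sketch: D10/D50/D90 from a cumulative volume distribution.
import numpy as np

sizes_um = np.array([20, 50, 80, 110, 140, 170, 200])          # bin centers
vol_frac = np.array([0.03, 0.08, 0.17, 0.27, 0.25, 0.14, 0.06])  # volume fractions

cum = np.cumsum(vol_frac) / vol_frac.sum()        # cumulative volume distribution
d10, d50, d90 = np.interp([0.10, 0.50, 0.90], cum, sizes_um)
print(f"D10 = {d10:.0f} um, D50 = {d50:.0f} um, D90 = {d90:.0f} um")
```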
The UHMWPE particle size differs with the choice of polymer matrix and the added content of Dis-UHMWPE particles. Furthermore, from the perspective of the macromolecular chain segments, the interfacial interdiffusion between the UHMWPE particles and the polyolefin matrix needs to be revealed. Interfacial Interdiffusion. The melt flow rates (MFR) and their normalized values for the polymer blends are illustrated in Figure 4. EPDM shows relatively weak fluidity compared with PP and LLDPE. In addition, the introduction of Dis-UHMWPE particles gradually decreases the fluidity of the polyolefin matrix; the content of Dis-UHMWPE particles should be kept within a low range to ensure sufficient melt processing performance. When the added amount of UHMWPE is 10 wt%, the fluidity reduction ratios of LLDPE and EPDM are very similar, approaching 64%, while the PP blend shows only a 30% attenuation of the melt flow index. This hints at the different interfacial affinity of UHMWPE for polypropylene versus polyolefins containing polyethylene segments. The miscibility and interdiffusion behaviors of blends with different weight fractions of UHMWPE are evaluated by high-temperature dynamic frequency scanning. Taking the samples with 1 wt% UHMWPE as an example, as shown in Figure 4(c), in the high-frequency regime the storage modulus (G′) of the blends is greater than the loss modulus (G″). Under high-frequency oscillatory shear, above the crossover frequency, the materials behave more like solids because entire molecular chains are unable to follow the deformation. By comparing the change of the crossover frequency of the blends, we can assess the effect of the UHMWPE content on the relaxation behavior. As shown in Figure 4(d), with increasing UHMWPE content the crossover frequency gradually decreases and a longer relaxation time is required. The LLDPE matrix is more sensitive to UHMWPE changes, and its liquid-like to solid-like transition shifts quickly to low frequencies. This originates from the formation of networked structures upon incorporation of Dis-UHMWPE [21][22][23]. As shown in Figure 5, the viscosity of the blends increases gradually with increasing UHMWPE fraction, which results from the hindrance by the ultralong molecular chains of UHMWPE [18]. EPDM exhibits higher viscosity than PE, corresponding well with the MFR results. LLDPE/U and EPDM/U show relatively high complex viscosity compared with the pristine polymers, demonstrating that the introduction of Dis-UHMWPE particles increases the intermolecular entanglements of the blends, which show long-period relaxation behavior [24]. LLDPE/U and EPDM/U exhibit typical shear-thinning behavior, in contrast to the apparent Newtonian regime of pure LLDPE, which is beneficial for melt processing at high shear rates. Figures 5(b)-5(e) present the evolution of the storage modulus of EPDM/U and LLDPE/U blends with different UHMWPE contents. The storage modulus represents the elasticity of the melt and the relaxation behavior of the molecular chains. As shown in the storage modulus versus frequency curves, the blended samples have a larger modulus at low shear frequency than the pristine samples, which is ascribed to the longer relaxation time. The change of modulus for EPDM/U is negligible compared to that of LLDPE/U when the UHMWPE fraction is less than 10 wt%.
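The crossover frequency used above to compare relaxation behavior can be extracted from a frequency sweep as in the following sketch, which interpolates where log(G′/G″) changes sign; the toy moduli below are illustrative, not measured data.

```python
# Sketch: locate the G'/G'' crossover frequency from a frequency sweep.
import numpy as np

freq = np.logspace(-2, 2, 9)                     # Hz
Gp  = 1e3 * freq**1.2 / (1 + freq)               # storage modulus (toy model)
Gpp = 2e3 * freq**0.9 / (1 + freq)               # loss modulus (toy model)

ratio = np.log10(Gp / Gpp)                       # > 0 means solid-like
i = np.argmax(ratio > 0)                         # first solid-like point
# linear interpolation in log-frequency between the bracketing points
lf = np.log10(freq)
f_cross = 10 ** (lf[i-1] - ratio[i-1] * (lf[i] - lf[i-1]) / (ratio[i] - ratio[i-1]))
print(f"crossover frequency ~ {f_cross:.2f} Hz")
```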
With a further increase of UHMWPE concentration, the storage modulus is improved significantly in the low-frequency region from 0.01 Hz to 0.1 Hz, which means that a large number of entanglement points and a mechanical network between UHMWPE chains and the adjacent chains are formed. The loss tangent (tan δ) is an important parameter for measuring the viscoelastic transition of the melt, illustrating the balance between energy loss and storage; a smaller tan δ corresponds to better elasticity [25]. A relatively apparent positive slope occurs in the EPDM/U blend at low shear frequency (Figure 5(c)), which is attributed to interfacial strengthening between EPDM and UHMWPE through the formation of tight entanglement networks [26]. Compared with EPDM/U, tan δ of LLDPE/U decreases more obviously with increasing frequency, indicating viscous fluid behavior and less pronounced elasticity [27]. In summary, UHMWPE has a great influence on the viscoelasticity of EPDM and LLDPE by forming a certain degree of entanglement. To gain a deeper understanding of the miscibility of the two polymers, the above-mentioned data are remapped in Figure 6. The log-additivity rule is a common method to analyze the miscibility of biphasic blends [28]. Figures 6(a)-6(d) show the variation of complex viscosity and storage modulus at 0.01 Hz versus UHMWPE content at 190°C for EPDM/U and LLDPE/U. Compared with LLDPE/U, EPDM/U exhibits markedly higher linearity in the curves, i.e., linear variation of log G′(0.01 Hz) and log η(0.01 Hz) versus UHMWPE content. This indicates that the biphasic miscibility of the EPDM/U blend is better than that of LLDPE/U. The Cole-Cole curve is an empirical correlation tool for analyzing the miscibility of blends, illustrating the relationship between real viscosity (η′) and imaginary viscosity (η″). The degree of downward bending of Cole-Cole curves represents phase separation and the relaxation process of dispersed particles [29]; a smooth semicircle represents good miscibility [30,31]. As shown in Figure 6(f), the LLDPE matrix and UHMWPE at contents below 3 wt% exhibit good miscibility, reflected in observable semicircles. The shape of the curve sharply deviates from a semicircle to a straight line with an upturned tail at a concentration of 10 wt%, due to deteriorated biphasic miscibility. In contrast, the Cole-Cole curve of pristine EPDM is slightly bent due to its intrinsic structure of hard blocks of polyethylene segments and soft blocks of polypropylene segments. Pronounced phase separation, with an upturned tail in the curve, occurs when the added fraction of Dis-UHMWPE particles exceeds 20 wt%. The variation of Han curves with different compositions also indicates the phase separation behavior, based on molecular viscoelasticity theory [32]. The key difference between heterogeneous and homogeneous polymer systems lies in whether composition dependence exists: if there is no composition dependence, the polymer melt is homogeneous. Therefore, it is shown that there is no distinctive phase separation in the melt state when the fraction of UHMWPE in EPDM/U is below 10 wt%. In contrast, for the LLDPE/U system, the Han curve deviates significantly already within a fraction of 10 wt%, which shows that obvious phase separation occurs in the melt state.
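A minimal sketch of the log-additivity check discussed above: the blend viscosity predicted by the rule is compared with measured values, and near-zero deviations indicate good miscibility. All viscosities below are invented for illustration.

```python
# Sketch: deviation of blend viscosity from the log-additivity rule,
# log(eta_blend) = w1*log(eta1) + w2*log(eta2).
import numpy as np

eta_matrix, eta_uhmwpe = 2.0e3, 5.0e6          # Pa*s at 0.01 Hz (hypothetical)
w_u = np.array([0.0, 0.01, 0.03, 0.05, 0.10])  # UHMWPE weight fractions

log_eta_pred = (1 - w_u) * np.log10(eta_matrix) + w_u * np.log10(eta_uhmwpe)
eta_meas = np.array([2.0e3, 2.3e3, 2.9e3, 3.8e3, 7.5e3])   # hypothetical data

deviation = np.log10(eta_meas) - log_eta_pred
print("deviation from log-additivity:", np.round(deviation, 3))
# near-zero deviations across the composition range suggest good miscibility
```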
When the content of UHMWPE exceeds 10 wt%, LLDPE/U blends exhibit perceptible composition dependence, which is ascribed to the difficulty of a large amount of UHMWPE being well miscible with the matrix. Meanwhile, owing to the better miscibility, the maximum addition threshold of Dis-UHMWPE particles for EPDM reaches 20 wt%, which corresponds well with the Cole-Cole curves. In terms of rheology, the maximum addition amount for blending modification is thus 10 wt% and 20 wt% for LLDPE and EPDM, respectively. Different from nanoparticles, the molecular chains of UHMWPE are disentangled under shear modification, and in the molten state the disentangled chain segments tend to re-entangle toward the equilibrium random-coil state, driven by entropy [33]. Therefore, we use rheological methods to track the recovery process of entanglement, reflecting the extent of biphasic interfacial interdiffusion. Figures 7(a) and 7(b) record, in logarithmic form, the recovery of the storage modulus of the biphasic blends versus time after reaching terminal shear rates of 10 s⁻¹ and 18 s⁻¹, respectively. The recovery process is generally divided into two stages: a chain-explosion stage and a reptation stage [34]. In the chain-explosion stage, the chain segments move and rapidly form a large number of physical entanglement nodes. Subsequently, the motion of the chain segments is greatly limited in the reptation stage, and it takes a long time to reach the plateau. We define the times for the storage modulus to recover to half equilibrium and to total equilibrium as the half reentanglement time and the reentanglement time, respectively; the related data are illustrated in Figures 7(c) and 7(d). PP/U blends take only 318 s to reach the equilibrium state, which shows that the chain segments of the Dis-UHMWPE particles do not effectively diffuse into the PP matrix and instead exist as agglomerates. After shear modification, the chain segments of PP quickly return to equilibrium under entropic driving. The interfacial interdiffusion between UHMWPE and PP is very weak compared with the other two polymer blends, and it is difficult to form an effective entanglement network to retard the recovery process, which is still dominated by the PP molecular chains. Two different shear rates are selected to track the differences, and the order of recovery times among the different materials remains unchanged, reflected in the significantly shorter recovery time of PP/U and the prolonged times of LLDPE/U and EPDM/U. This is attributed to the motion of these two polymers being greatly constrained by UHMWPE, and the longer recovery time probably corresponds to better interfacial interdiffusion. Molecular chains of UHMWPE diffuse into the molecular chains of EPDM and LLDPE and form entanglement networks with them. In detail, when the terminal shear rate is 10 s⁻¹, EPDM/U3 takes 1168 s to reach the semi-equilibrium state, almost four times that of LLDPE/U3. This gives evidence that the interaction between EPDM and UHMWPE is strong enough to form entanglements between polyethylene segments. Based on these findings, the structure diagram of the UHMWPE-containing blends with different polyolefins is given in Figure 8. There are hardly any strong intermolecular forces between PP and UHMWPE, with only a slightly entangled interface.
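The half-reentanglement and reentanglement times defined above can be read off a storage-modulus recovery trace as in the following sketch; the exponential recovery used here is a toy model, not the measured G′(t) of the blends.

```python
# Sketch: half- and full-reentanglement times from a G'(t) recovery trace.
import numpy as np

t = np.linspace(0, 3000, 3001)                 # s
G0, G_inf, tau = 1.0e4, 5.0e4, 400.0           # Pa, Pa, s (hypothetical)
Gp = G_inf - (G_inf - G0) * np.exp(-t / tau)   # toy recovery trace

def time_to_fraction(frac: float) -> float:
    """Time at which G' has recovered the given fraction of its rise."""
    target = G0 + frac * (G_inf - G0)
    return t[np.argmax(Gp >= target)]

print(f"half reentanglement time: {time_to_fraction(0.50):.0f} s")
print(f"reentanglement time:      {time_to_fraction(0.98):.0f} s")
```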
For LLDPE and EPDM, there exists some degree of intermolecular diffusion of UHMWPE towards the polyolefin matrix, forming entanglements between molecular chains. Although relatively larger UHMWPE particles are present in the EPDM matrix, the interdiffusion of UHMWPE into the EPDM matrix is still very strong due to the presence of a large fraction of polyethylene segments. Figure 9 illustrates the second heating and cooling curves of the blends in the DSC tests; the related crystallinity data are presented in Figures 9(d)-9(f). The cooling curves of the first cycle and the second cycle almost coincide, indicating that the thermal history of the blends has been effectively eliminated. The melting enthalpy of PP/U gradually increases with the addition of Dis-UHMWPE particles, mainly owing to the enhanced crystallization ability and the high melting enthalpy of 100% crystalline polyethylene segments. The crystallinity of PP (low-temperature peak), estimated from the cooling process (Figure 9(g)), ranges from 13.3% to 14.5%, while the crystallinity of UHMWPE (high-temperature peak) ranges from 31.9% to 35.8%. Although the crystallinity of PP does not change much, its crystallization temperature advances significantly from 70.0°C for pristine PP to 73.2°C for PP/U1, where UHMWPE plays the role of a nucleating agent. The two melting peak temperatures of pristine PP (Figure 9(a)) are 114.3°C and 127.2°C, representing two kinds of conformations in PP: the low-temperature and high-temperature peaks represent isotactic PP and syndiotactic PP, respectively. The two peaks of the PP/U samples shift about 1°C to higher temperature. Meanwhile, a shoulder peak of UHMWPE appears at 134.5°C when the added amount of UHMWPE is greater than 3 wt%. As a nucleating agent, UHMWPE does not participate in the cocrystallization process of PP and only slightly improves the aggregated structure of PP; it increases the melting point of PP without significantly impacting its crystallinity. Crystallinity. EPDM with an ethylene content of more than 65% is often classified as a crystallizable, semicrystalline product. When there is no UHMWPE in the EPDM matrix, the polymer does not crystallize and shows no obvious melting points, because methyl groups from propylene units interrupt the crystallization of EPDM. The situation changes when Dis-UHMWPE particles are incorporated. The melting points of UHMWPE usually exceed 130°C [35]. When Dis-UHMWPE particles are incorporated into the matrix, the melting point of the UHMWPE content in the blend is approximately 133.8°C. In addition, a shoulder peak appears at 128.2°C, which is ascribed to the crystallization of polyethylene segments from EPDM. The incorporation of Dis-UHMWPE particles promotes the nucleation and growth of the crystallization process [36,37]. During the cooling process, the blend cocrystallizes at 118.3°C, which gives evidence of good interfacial interdiffusion between EPDM and UHMWPE. The overall crystallinity of EPDM/U is slightly increased with the addition of UHMWPE, remaining at a low level within 5% and ensuring that EPDM retains the characteristics of rubber. In Figure 9(i), there is no obvious cocrystallization between LLDPE and UHMWPE. However, the melting peak of LLDPE/U blends is shifted from 122.3°C to 128.2°C (Figure 9(c)), which illustrates that the crystallization behavior is greatly affected by the addition of UHMWPE.
Similarly, the melting point of UHMWPE is close to 133.8°C. It is worth noting that a small amount of UHMWPE greatly changes the crystallinity of LLDPE. When 1 wt% of Dis-UHMWPE particles is incorporated, the crystallinity increases from 24.9% to 30.4%, as shown in Figure 9(f). When the added amount is further increased, the increment in crystallinity becomes very small, which is distinct from the crystallization behavior of EPDM. As shown in Figure 9(e), EPDM has a stable increase in crystallinity over a larger content range, which is due to its high filling threshold and good interdiffusion with UHMWPE. 3.4. Mechanical Performance. The typical stress-strain curves of the three series of blends are shown in Figure 10. PP and LLDPE exhibit the mechanical performance of ductile materials with very high breaking elongation. At the beginning of stretching, the stress-strain curves show a sharp slope representing general elastic deformation up to the yield point, followed by necking and cold drawing, during which LLDPE exhibits strain-hardening behavior. EPDM, by contrast, behaves as a typical elastomer without necking. The mechanical properties of the three blends, including tensile strength, breaking elongation, Young's modulus, hardness, yield stress, and breaking work, are also presented in Figure 10. The mechanical properties of PP/U blends are enhanced only when the amount of UHMWPE is tiny (1 wt%). When the added amount is further increased, there is a sharp decline in tensile strength, breaking elongation, and breaking work, while Young's modulus, hardness, and yield strength remain basically unchanged. The radar chart shows that, in this biphasic system, UHMWPE is not suitable for enhancing the mechanical properties of PP. This is mainly because the crystalline structure of PP remains basically unchanged, while the added UHMWPE is present in the PP matrix in the form of defects, leading to stress concentration, which corresponds well with the SEM results. In contrast, as shown in Figure 11, Dis-UHMWPE particles have significantly better reinforcement effects on LLDPE and EPDM. For LLDPE/U blends, 5 wt% of UHMWPE successfully improves the comprehensive mechanical performance compared with pristine LLDPE, in particular tensile strength by 11%, yield strength by 11%, and hardness by 5%. Meanwhile, the breaking elongation is slightly decreased compared with LLDPE/U3, which may be ascribed to deteriorated miscibility as the UHMWPE content increases. For EPDM, when the added amount of UHMWPE is 5 wt%, the enhancement in mechanical performance is significant, including tensile and yield strength, breaking elongation and work, Young's modulus, and hardness. When the added amount reaches 10 wt%, Young's modulus is dramatically enhanced from 3.1 MPa to 12.4 MPa, a fourfold increase. Meanwhile, the yield strength, hardness, and breaking work are continuously enhanced, with the related values increased by 26%, 30%, and 19%, respectively. This indicates that UHMWPE plays an excellent reinforcing role in EPDM, originating from the good interfacial interdiffusion and the promoted aggregate structure caused by cocrystallization. In order to highlight the difference between low-entanglement and high-entanglement UHMWPE, we also added high-entanglement UHMWPE to the polyolefin matrix for comparison.
As shown in Figure 12, compared with high-entanglement UHMWPE, low-entanglement UHMWPE has a more significant effect on improving the mechanical properties of polyethylene. This is because the entanglement network of low-entanglement UHMWPE is looser, and the chain segments of LLDPE can more easily penetrate into the interior of the low-entanglement UHMWPE chains. From the previous analysis, UHMWPE exists in the PP matrix in the form of defects, and low-entanglement UHMWPE is more prone to chain explosion than high-entanglement UHMWPE. Therefore, low-entanglement UHMWPE forms larger defects during the blending process, resulting in the degradation of the mechanical properties of PP/U, whereas highly entangled fillers form smaller-scale defects. Thus, for low-entanglement UHMWPE, the application scenario must be distinguished: the aim is to form better entanglements between UHMWPE and the matrix, while ensuring that the filler does not re-entangle with itself during the blending process. Conclusion The interfacial interdiffusion and mechanical evolution of metallocene polyolefins upon introducing low-entanglement UHMWPE particles have been demonstrated. PP has very poor miscibility with UHMWPE, showing numerous residual UHMWPE particles, and exhibits poor interfacial interdiffusion without an effective entanglement network. From the perspective of the macromolecular chain segments probed by rheological measurement, the interfacial interdiffusion of UHMWPE is more significant for the two polyolefins containing polyethylene segments, leading to enhanced mechanical properties, especially for EPDM. EPDM possesses a higher maximum addition threshold with respect to rheological and mechanical behaviors. UHMWPE cocrystallizes with EPDM with a promoted aggregate structure, and the blend shows superior comprehensive mechanical properties, especially Young's modulus. Therefore, low-entanglement UHMWPE particles can be regarded as an ideal reinforcing filler for metallocene polyolefins containing polyethylene segments, broadening their application fields. The key to enhancement is forming entanglements through efficient interfacial interdiffusion of polyethylene segments. This research provides a reference for designing UHMWPE-reinforced polymers. For example, polyethylene segments are expected to be introduced into the matrix by copolymerization or blending, helping UHMWPE achieve a better reinforcing effect. Data Availability The data that support the findings of this study are available on request from the corresponding authors. The data are not publicly available due to privacy or ethical restrictions.
7,196
2022-10-19T00:00:00.000
[ "Materials Science" ]
Subwavelength optical localization with toroidal excitations in plasmonic and Mie metamaterials Since the performance of electronic circuits is becoming rather limited in the face of the rapidly increasing amount of information and related operations, all-optical processing offers a promising strategy for future information systems. It would benefit a great deal if the all-optical processing could be implemented within developed electronic chips with nanoscale structures. To that end, it is highly desirable to break the diffraction limit of light for achieving effective light manipulation with deep-subwavelength structures compatible with state-of-the-art nanofabrication processes. It is of fundamental importance to achieve subwavelength optical localization. | INTRODUCTION Investigation of metamaterials [1][2][3][4][5][6][7][8][9][10][11] and plasmonics [12][13][14][15][16][17][18][19] has stimulated rapid development in subwavelength optics, 20,21 which is emerging as the frontier of modern optics. The key issue in subwavelength optics is to break the diffraction limit of light waves and develop technologies for extreme manipulation of optical fields at the subwavelength scale, compatible with the well-developed nanolithography of modern nanoelectronics. Optically resonant modes are of fundamental importance in realizing enhanced light-matter interactions through their ability to couple with light fields in both the spatial and temporal domains. For the spatial-domain coupling, it is highly desirable to localize light with strong fields in as small a subwavelength volume as possible, for applications in, for example, nonlinear processing. 22,23 Plasmonic excitations in metallic structures have been exploited for realizing light localization at the deep-subwavelength scale. Metamaterials [24][25][26][27][28][29][30][31][32][33][34][35][36][37][38][39][40][41][42][43] and their two-dimensional (2D) counterparts, metasurfaces, made from plasmonic resonant [69][70][71][72][73] or Mie resonant building blocks [74][75][76][77][78][79][80][81][82][83][84] are especially promising for the enhancement of light-matter interactions 19,85,86 at a subwavelength scale. Various plasmonic and Mie metamaterials have been proposed for achieving high-Q-factor responses; [87][88][89][90][91][92][93][94][95][96] for example, the trapped mode, which is a kind of magnetic mode weakly coupled to free space, was suggested to be excited by introducing symmetry breaking in the shape of structural elements for realizing sharp spectral responses. 87 Recently, the excitation of toroidal moments in felicitously designed plasmonic and Mie metamaterials has been suggested as a new route for achieving strong optical localization and high-Q responses. 26,32,[97][98][99][100][101][102][103] The toroidal current configuration was first considered by Zel'dovich to account for parity-violating interactions. 104 Later, Dubovik found the possibility of introducing a new class of moments, namely toroidal moments, with different time-space symmetry. [105][106][107] Since then, toroidal moments have been intensively studied in nuclear, atomic and molecular physics, solid state physics, and electrodynamics. 32,[105][106][107][108][109][110][111][112][113] A static toroidal moment can exist in various materials including metals, 114 glasses, 115 boracites, 116 pyroxenes, 117 olivines, 118 bulk crystals, 119 and biological and chemical macromolecules. 120,121
In addition to static moments, dynamic toroidal moments, also called toroidal excitations, can be induced by interaction with incident optical fields and contribute over the entire electromagnetic spectrum. 32,105 In general, electric multipoles are produced by separating positive and negative charges over a distance (oscillating charge density), whereas magnetic multipoles are created by the closed circulation of electric current (oscillating current density). Toroidal multipoles are not a part of the standard multipole expansions and originate from the decomposition of the momentum tensors, with currents flowing on the surface of a torus along its meridians (oscillating radial components of the current density from radiating fields). Electric excitations are strongly coupled to free space, with large radiative loss. On the other hand, the generation of electric and magnetic excitations is always accompanied by prominent induced currents (conduction currents in metals or displacement currents in dielectric media), which inevitably result in large nonradiative loss. However, toroidal excitations are weakly coupled to free space, and their magnetic fields are strongly confined in the dielectric surroundings or free space. 122 The weak free-space coupling and unique light localization are crucial for achieving higher-Q responses and enhanced light-matter interactions. The radiation patterns of these multipoles are shown in the far right column of Figure 1. 123 To distinguish the toroidal resonance from the electric and magnetic resonances, the multipolar radiation powers should be calculated first. The structure design should be optimized before experiment to make sure that the toroidal response dominates the far-field scattering power at the desired frequencies, where the other multipolar radiation powers are significantly suppressed. The radiated power of the induced multipoles can be calculated using the following formula 26: I = (2ω⁴/3c³)|P|² + (2ω⁴/3c³)|M|² + (2ω⁶/3c⁵)|T|² + ⋯ (1), with the electric dipole moment P = (1/iω)∫ j d³r, the magnetic dipole moment M = (1/2c)∫ (r × j) d³r, and the toroidal dipole moment T = (1/10c)∫ [(r · j)r − 2r²j] d³r, where j is the current density, ω is the circular frequency, r is the displacement vector, and c is the speed of light in vacuum. Toroidal excitations differ from the electric and magnetic excitations of traditional multipole expansions and exhibit a higher-Q response (in comparison with common electric/magnetic dipolar modes) owing to their weak free-space coupling. 26,32 Toroidal excitations provide the opportunity to further increase light-field localization and the high-Q response for potential applications in low-power nonlinear processing and sensitive photonic applications based on strongly localized fields. Thus, toroidal excitations are of fundamental importance for freely controlling optical signals at the deep-subwavelength scale. However, it is challenging to excite and detect toroidal excitations in media, since toroidal dipoles are mostly weakly coupled to free space. Metamaterials with freely tailorable functions were first introduced to excite toroidal moments in 2007, and the negative refraction and backward-wave properties of such a toroidal metamaterial were studied. 124 Later, in 2009, a toroidal dipole excitation was first reported experimentally in the microwave regime. 125 However, this excitation was hindered by other electric and magnetic multipoles.
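As an illustration of the multipole analysis just described, the sketch below numerically integrates the dipole, magnetic, and toroidal moments of a current density sampled on a grid and compares their radiated powers per Equation (1). Gaussian units and a toy azimuthal current loop are assumed purely for illustration; they are not a model of any specific metamolecule.

```python
# Sketch: multipole moments and radiated powers from a gridded current density.
import numpy as np

c = 3.0e10                        # cm/s (Gaussian units)
omega = 2 * np.pi * 1.9e12        # rad/s (a ~1.9 THz resonance, for scale)

# toy grid and a circulating (azimuthal) current density, arbitrary units
x = np.linspace(-1e-3, 1e-3, 41)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
dV = (x[1] - x[0]) ** 3
j = np.stack([-Y, X, np.zeros_like(X)])          # loop current in the xy-plane
r = np.stack([X, Y, Z])

def integrate(vec):
    """Volume integral of a (3, ...) vector field."""
    return vec.reshape(3, -1).sum(axis=1) * dV

P = integrate(j) / (-1j * omega)                                   # electric dipole
M = integrate(np.cross(r, j, axis=0)) / (2 * c)                    # magnetic dipole
r_dot_j = (r * j).sum(axis=0)
T = integrate(r_dot_j * r - 2 * (r**2).sum(axis=0) * j) / (10 * c) # toroidal dipole

I_P = 2 * omega**4 / (3 * c**3) * np.abs(P).dot(np.abs(P))
I_M = 2 * omega**4 / (3 * c**3) * np.abs(M).dot(np.abs(M))
I_T = 2 * omega**6 / (3 * c**5) * np.abs(T).dot(np.abs(T))
print("radiated powers (P, M, T):", I_P, I_M, I_T)
```

For this toy loop the electric dipole integrates to zero and the magnetic dipole dominates; a head-to-tail ring of such loops, as in the metamolecules discussed below, would instead push power into the toroidal term.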
The first spectrally isolated toroidal-dipole-dominated resonance was observed in 2010, with metamolecules formed by ring-shaped microwave resonators, where the toroidal response was enhanced to a detectable level. 26 Later, plasmonic metamaterials with toroidal excitations were also observed at terahertz 126 and optical frequencies 98 by scaling down the size of the metamolecules. To simplify the fabrication of three-dimensional (3D) plasmonic metamaterials, planar metamaterials or metasurfaces and less challenging patterns were also studied for simplified excitation of toroidal moments. 122 Although the performance of plasmonic metamaterials is restricted by ohmic damping at higher frequencies, dielectric metamaterials with low-loss, high-refractive-index building blocks were also proposed to excite high-Q toroidal resonances for extremely strong subwavelength optical localization by exploiting Mie resonances. 99 Multipole decomposition of the optical scattering of toroidal structures also illustrates that the interference of multipole modes involving the toroidal mode plays an essential role in the nanoscale manipulation of light. 127 A kind of nonradiating dark state, namely the anapole mode, created by the nontrivial destructive interference between antiphased electric and toroidal dipoles due to their similar far-field scattering patterns, is also excited in metamaterials. 111,128 The destructive interference between oscillating electric and toroidal dipoles also provides a new approach to electromagnetically induced transparency with narrow transparency lines. 129,130 These studies again show that toroidal excitations in metamaterials have great potential for the enhancement of optical light localization. Figure 1. Electric multipoles result from the separation of positive and negative charges. Magnetic multipoles originate from the closed circulation of electric current. Toroidal multipoles cannot be easily considered as electric or magnetic multipoles; they are produced by currents that flow along the meridians of a torus. Every type of multipole member (dipole, quadrupole, octupole) has its identical far-field power radiation pattern, as shown in the far right column. Reproduced with permission. Copyright 2014, American Physical Society. 123 In this article, we will review the progress in the development of toroidal excitations in both plasmonic and Mie-resonant dielectric metamaterials for subwavelength optical localization. We will discuss various toroidal excitation configurations with subwavelength toroidal modes in 3D and 2D metamaterials or metasurfaces over a wide frequency range. Emerging toroidal-excitation-involved scattering of optical waves and actively tunable toroidal metamaterials will also be examined. Furthermore, a survey of novel applications based on toroidal resonant modes, for example, spasers and high-quality sensing, is conducted. Finally, we will discuss and envision the promising future of toroidal excitations. Hopefully this review can promote research on subwavelength optical localization associated with toroidal excitations for highly efficient trapping of light, strongly enhanced nonlinear nanophotonics, and all-optical information processing on a chip. | TOROIDAL EXCITATIONS IN PLASMONIC METAMATERIALS Plasmonics has become one of the most vibrant areas of research, with technological innovations impacting fields from telecommunications to medicine.
Many fascinating applications of plasmonic nanostructures employ electric dipole, magnetic dipole, and higher-order multipole resonances for the enhancement of light-matter interaction. Besides these multipolar modes that easily radiate into free space, some other types of electromagnetic resonances also exist, such as the toroidal modes generated from the decomposition of the momentum tensors, which have been largely overlooked historically. Unlike electric and magnetic multipoles, toroidal multipoles are not a part of standard multipole expansions. Toroidal multipoles, with currents flowing on the surface of a torus along its meridians, have a great capability to enhance light-matter interactions through their unique light-field localization, which originates from the weakly radiating or nonradiating nature of the toroidal modes. In particular, it has been shown that the strength of their interaction with electromagnetic fields depends not only on the strength of the fields, but rather on their time derivatives. The rapid development of plasmonic metamaterials has provided new ideas and methods for the research of toroidal multipoles. By rationally designing the symmetry of metallic resonators and their spatial arrangement, we can selectively suppress the fundamental electric and magnetic dipolar modes and increase the toroidal dipole response to dominate the optical properties of the metamaterial. Herein, a collection of recent progress on toroidal excitations in plasmonic metamaterials is reviewed. Firstly, we discuss toroidal excitations in 3D plasmonic metamaterial structures. 9,97,98,131,132 In the next section, the planar designs for toroidal excitations are reviewed, which greatly simplify the fabrication of toroidal metamaterials. 100,102,122,129,[133][134][135][136][137][138] Finally, we elaborate on the research progress on toroidal excitations in plasmonic cavities. 101,[139][140][141][142] 2.1 | 3D plasmonic structures for the toroidal excitations In 2010, the resonant toroidal response was first experimentally observed in metamaterials by Kaelberer et al. 26 The toroidal metamolecule was composed of four rectangular, electrically disconnected metallic wire loops embedded in a low-loss dielectric slab. The loops were located in two mutually orthogonal planes and separated by a distance r (Figure 2A). Two peaks are observed in the metamaterial's reflection spectra and two dips in the transmission spectra, corresponding to two modes whose excitations are manifested as resonant features I (Figure 2B) and II (Figure 2C). The radiated powers as a function of frequency are plotted in Figure 2H, where one can see that the strongest contribution to the metamaterial response at resonance I is provided by the magnetic dipole and that at resonance II by the toroidal dipole. The toroidal dipole scatters more strongly than any other multipole, by almost two orders of magnitude. Comparing the quality factors of these two modes, resonance I is located at 16.1 GHz with a quality factor Q of ~80, while resonance II is located at 15.4 GHz with the Q factor reaching 240 (Figure 2F,G). The higher Q factor of the toroidal dipole is due to its strong confinement and weak free-space coupling. In addition to achieving a high quality factor, toroidal multipoles provide opportunities to further increase the field localization at a subwavelength scale due to the weak coupling of the toroidal dipole mode to free space.
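For reference, the quoted quality factors follow from Q = f0/FWHM; the sketch below estimates Q from a resonance curve by reading off the full width at half maximum, with a Lorentzian of illustrative parameters rather than measured spectra.

```python
# Sketch: quality factor Q = f0 / FWHM estimated from a resonance curve.
import numpy as np

f = np.linspace(14.0, 17.0, 2001)                # GHz
f0, fwhm = 15.4, 15.4 / 240                      # a Q ~ 240 resonance (toy)
lorentz = 1.0 / (1.0 + ((f - f0) / (fwhm / 2)) ** 2)

above = f[lorentz >= 0.5]                        # frequencies above half maximum
q = f0 / (above[-1] - above[0])                  # f0 divided by the FWHM
print(f"estimated Q = {q:.0f}")
```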
Toroidal metamaterials provide a convincing route to further increase the Q factor and enhance the field localization at a subwavelength scale. Based on the structure of metallic wire loops, toroidal excitations in plasmonic metamaterials were also studied in the optical frequency region. 98 Limited by the dimensions and resolution of the methods used to fabricate metamaterial structural units, the experimental realization of the resonant toroidal response remained challenging in higher frequency regions. Up to now, a few methods have been developed for the fabrication of vertical split-ring resonators (SRRs) in micro- or nanostructures, such as the double-exposure e-beam lithographic process, 143 multilayer electroplating, 144 the metal stress-driven self-folding method, 145 self-aligned membrane projection lithography, 146,147 the two-photon polymerization process, 148,149 and ion-beam-induced folding. 150,151 A new method for the fabrication of folded 3D metamaterials, which excited the toroidal response in the mid-infrared regime, was proposed by Liu et al. (see Figure 3A,B). 132 In this work, the adoption of metal patterns on dielectric frameworks could greatly expand the fabrication capability, showing great design flexibility and controllability of size, position, and orientation at the nanometer level. Compared with the state-of-the-art techniques, not only was the weakness of the short connection between each unit and the substrate overcome, but the diversity of 3D structures could also be greatly expanded, from purely metallic structures to various combinations of dielectric and metal structures. Progress in fabrication technology extends the toroidal response frequency into the optical region while maintaining the high-Q property, which opens a horizon of optical applications including sensing, lasing spasers, and optical forces. Besides the achievement of a high quality factor, the toroidal response is also used to enhance field localization. As is well known, localized spoof surface plasmons (LSSPs) 152-154 are a special surface-wave mode propagating on a periodically structured subwavelength metal surface, with enhanced energy confinement and dispersion, and of physical significance to light-matter interaction in longer-wavelength regimes. Realization of LSSPs with toroidal dipole moments is therefore of physical significance for enhancing field localization. Toroidal dipole moments excited in LSSPs were detected in a compact planar structure (Figure 3C,D). 155 On the one hand, the near-field distributions of the toroidal LSSP resonance mode were successfully observed in both simulated and experimental results. On the other hand, the miniaturized device volume of the structure could be of great benefit to integrated photonic circuits. | Planar plasmonic designs to excite toroidal moments Quasi-planar structures were designed to simplify the fabrication of toroidal metamaterials. The planar-structure-based scheme is not limited to microwave bandwidths but also shows good performance at terahertz bands and even in the optical regime. Compared with 3D metamaterials, the 2D structures offer relatively poor confinement of the circulating magnetic field.
Nevertheless, we can suppress the undesired multipoles and reveal the toroidal dipole contribution through careful selection and design of the metamaterial geometry. A planar toroidal metamaterial whose unit cell consists of four asymmetric split-ring resonators (ASRRs) was proposed in the microwave region (see Figure 4A). 122 This work demonstrated that toroidal metamaterials could be constructed by arranging planar ASRRs as meta-atoms and manipulating the structural symmetry among them. The toroidal geometry together with the Fano resonance of the ASRRs yielded an even higher-Q metamaterial, in which light-matter interaction would be significantly amplified. A few studies also demonstrated toroidal dipole responses based on different planar metamaterial structures in higher frequency regions, simplifying the fabrication of toroidal metamaterials while retaining good performance at terahertz bands and even in the optical range. A metamaterial design with two joined metallic loops, each loop equipped with two capacitive gaps, was used to excite a sharp toroidal dipolar response in the terahertz region (Figure 4C). 135 A resonant mode in which the currents in the two loops of each metamolecule oscillated in opposite directions was excited by controlling the position of the gaps and the polarization of the incident field. This mode enabled the realization of a suppressed electric and an enhanced toroidal dipole response. By tailoring the asymmetry in the structure and the line width, the amplitude and quality factor of the toroidal resonance could be tuned. Enhanced field localization is also realized by the excitation of the toroidal response in planar metamaterials. On the basis of this work, the authors demonstrated a toroidal metamaterial switch that could dynamically transform from toroidal dipole moments to electric or magnetic dipole excitations (Figure 4D). 102 This dynamic switch was realized by controlling the optical properties of the metamaterial through an ultrathin silicon layer acting as a dynamic medium excited by near-infrared femtosecond pulses. By illuminating the fabricated sample at different pump powers, photoactively mediated switching of the toroidal resonance was realized. Besides asymmetric split-ring structures, the toroidal response can be excited in other planar structures. For example, the toroidal response was realized in a planar metamaterial composed of a gold hexamer and a bottom gold mirror separated by a layer of silicon dioxide (Figure 4B). 133 The toroidal dipolar moment could be strongly excited in such a metal-dielectric-metal combination in the optical region. The proposed structure could suppress the electric and magnetic dipole moment components, and the toroidal moment could be formed by a closed loop of the magnetic dipoles excited in the top and bottom gold disks under incident radially polarized light. The toroidal moment gave the dominant contribution to the scattering spectrum. | Toroidal excitations in plasmonic cavities In addition to the 3D and planar designs mentioned earlier, there are other structures that excite toroidal moments, such as plasmonic cavities. Toroidal systems based on plasmonic cavities can also enhance field localization and may constitute novel approaches to waveguides and resonators.
In one recent work, toroidal modes were demonstrated experimentally and theoretically at visible frequencies based on plasmonic cavities. The investigated structures consisted of seven round holes of 60 nm diameter: a central hole surrounded by a six-membered ring of holes, drilled in a free-standing 60-nm-thick silver film (Figure 5A). 140 In this structure, a quadrupolar mode, a magnetic dipolar mode, and a toroidal mode were induced by radially polarized far-field radiation, azimuthally polarized far-field radiation, and the radiation emitted by an electric dipole placed in the central hole, respectively. When an electric dipole excitation was placed in the central hole, the toroidal mode was induced in the six-membered ring of holes at 2.5 eV and 3.7 eV. As shown in the electric and magnetic field distributions, the magnetic field encircled the central hole and the electric field flowed in radial loops between the central and surrounding holes. The sample volume between the central and surrounding holes acts as a ring around which toroidal moments can build up. Another study of toroidal moment excitation in the optical region investigated a circular V-groove array by angle-resolved reflection (Figure 5C). 142 This work showed that a plasmonic toroidal mode around a wavelength of 700 nm could be excited in the nanostructure for incident angles larger than 20°. Besides these planar-array cavity structures, toroidal moments can also be excited in vertical-array cavity structures, fabricated in a thin metal plate and resembling a meridional cross-section of a toroidal void (Figure 5D). 156 Radiation suppression for metamaterials was achieved in a system based on dumbbell-shaped aperture elements, in which the scattering contribution from multipolar current modes could be effectively suppressed by the higher rotational symmetry of the aperture-based structure. The proposed toroidal response, an unusual nonradiating charge-current excitation, enhances field localization and produces very narrow, isolated, symmetric Lorentzian transparency lines with Q factors reaching 300. In parallel, a toroidal dipole moment was also achieved in a metallic metamaterial comprising pairs of bars. 139 | TOROIDAL EXCITATIONS IN MIE METAMATERIALS Plasmonic metamaterials with metallic resonators of toroidal design can effectively couple with external fields to localize light at the subwavelength scale. At higher frequencies, however, the inevitable ohmic loss in plasmonic structures hinders toroidal multipole excitations. To eliminate this dissipation loss, all-dielectric metamaterials with high-refractive-index, low-absorption resonators are exploited to further localize light. 91,93 The high refractive index enables strong confinement of the optical field at a subwavelength scale, and the low material loss keeps the dissipation low. 30 These resonators support volumetric Mie-type resonance modes associated with strong displacement currents, induced by the incident electromagnetic wave, which enhance the optical localization. 10 By specially engineering all-dielectric metamolecules of toroidal topology, electric and magnetic excitations can be suppressed, and spectrally isolated, strong toroidal dipole responses can be excited and enhanced to a detectable level.
103,157-166 A toroidal response can be excited either in a single dielectric particle of larger size or in a cluster (referred to as a metamolecule) formed by several dielectric particles in a proper arrangement, and the latter excitation often has a stronger toroidal moment. Unlike in plasmonic metamaterials, only displacement current, rather than conduction current, can be induced in Mie metamaterials. The displacement currents can be extracted from the electric near-field distribution inside the dielectric cylinders by utilizing the formula j = −iωε0(ε − 1)E, where ε is the relative permittivity of the cylinders; the multipole moments can then be calculated by substituting the displacement currents into the multipole moment formulas. In what follows, recent progress on Mie metamaterials for subwavelength optical localization with strong toroidal excitations will be reviewed. | Toroidal responses excited by the all-dielectric metamaterials The first all-dielectric metamaterial with strong toroidal resonances was proposed by Basharin et al. in 2015. 99 They proposed a dielectric toroidal topology (see Figure 6A): a cluster composed of four symmetric dielectric cylinders close to each other, made of the ionic crystal LiTaO3, with high permittivity and negligible dissipation loss in the terahertz frequency region. In the simulation model, the cylinders are assumed to be infinitely long and the polarization of the incident wave is parallel to the axes of the cylinders. Each dielectric cylinder is excited by the incident electric field E parallel to the axes of the cylinders through near-field coupling, generating a Mie-type resonance. Displacement current, rather than conduction current, is induced and spatially confined in each cylinder, circulating along a closed loop. In a narrow range of frequencies, the oscillating displacement currents j create magnetic moments m oscillating perpendicular to the axes of the cylinders. Once the magnetic moments are aligned head to tail, a toroidal dipole T, a dynamic vortex state with closed loops of the magnetic field, is excited inside the metamolecule. The toroidal dipolar resonance is observed with full transmission at 1.89 THz (see Figure 6C). The toroidal excitation is confirmed by calculating the local field maps and displacement currents (Figure 6B), where a magnetic vortex field is induced by the displacement currents oscillating along a closed loop in the four cylinders. To further confirm the toroidal dipolar response in the metamaterials, the multipole moments are calculated (see Figure 6D) on the basis of the displacement current distributions inside the metamolecule. The far-field scattering is dominated by the toroidal dipolar excitation around 1.9 THz, where the power scattered by the other multipoles is significantly suppressed. The metamaterial, with its subwavelength clusters of high-index all-dielectric cylinders operating in the Mie resonant mode, shows the capability to suppress all other standard multipoles thanks to the strong toroidal excitation. This work, with its special design strategy, paves the way to toroidal resonances in all-dielectric metamaterials with Mie-type resonance.
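As a concrete illustration of this procedure, the following minimal Python sketch (not code from the reviewed papers; the sampled fields and grid are hypothetical) computes the electric, magnetic, and toroidal dipole moments of a sampled displacement-current distribution, using the standard Cartesian multipole formulas common in this literature, e.g. T = (1/10c) ∫ [(r·j)r − 2r²j] d³r.

```python
import numpy as np

c = 2.99792458e8          # speed of light (m/s)
eps0 = 8.8541878128e-12   # vacuum permittivity (F/m)

def displacement_current(E, omega, eps_r):
    """Displacement current density induced inside a dielectric,
    j = -i*omega*eps0*(eps_r - 1)*E (exp(-i*omega*t) convention)."""
    return -1j * omega * eps0 * (eps_r - 1.0) * E

def dipole_moments(r, j, dV, omega):
    """Electric (P), magnetic (m) and toroidal (T) dipole moments of a
    sampled current distribution. r: (N,3) sample points, j: (N,3) complex
    current densities, dV: volume per sample point."""
    P = (1j / omega) * j.sum(axis=0) * dV                     # electric dipole
    m = (1.0 / (2.0 * c)) * np.cross(r, j).sum(axis=0) * dV   # magnetic dipole
    r_dot_j = np.sum(r * j, axis=1, keepdims=True)
    r2 = np.sum(r * r, axis=1, keepdims=True)
    T = (1.0 / (10.0 * c)) * np.sum(r_dot_j * r - 2.0 * r2 * j, axis=0) * dV
    return P, m, T
```

In the usual convention the powers scattered by P and T scale as ω⁴|P|² and ω⁶|T|² respectively, so comparing k|T| with |P| indicates which contribution dominates the far field.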
Toroidal responses excited by an incident wave with the electric field parallel to the axes of the cylinders in 3D metamaterials were also studied in other works; inspired by this work, further efforts have been devoted to toroidal excitations in clusters of dielectric cylinders. For example, Tasolamprou et al. presented a thorough investigation of the electromagnetic modes in metamolecules consisting of clusters of 2 to 8 dielectric cylinders. 165 They found that metamolecules with an odd number of dielectric cylinders exhibit enhanced spectral isolation of the toroidal mode. In 2015, Li et al. proposed a simplified polaritonic LiTaO3 microtube design to excite a dominant toroidal dipolar response in the terahertz regime. 162 A dominant toroidal dipolar excitation over a broad frequency range, with a high-Q response and a strongly concentrated field at a deep subwavelength scale, was found. This design strategy is promising for applications such as high-sensitivity sensors, nonlinear optics, and particle trapping. In 2017, the toroidal dipolar response in dielectric metamaterials based on clusters of cylindrical particles was first measured experimentally, in the microwave band, by Stenishchev et al. 166 These findings on all-dielectric toroidal metamaterials, which take advantage of low-loss Mie-type resonances, are significant for advancing research towards subwavelength optical localization. | Toroidal excitations inside the all-dielectric metasurfaces Although the toroidal excitation in 3D metamaterials is strong enough to be detectable, the fabrication of all-dielectric 3D toroidal metamaterials is quite difficult, especially at higher frequencies. On the other hand, toroidal excitations in 3D dielectric metamaterials often require an incident electric field parallel to the axes of the dielectric cylinders, making these responses challenging to measure. To address these issues, planar designs (metasurfaces) have recently been considered to simplify the fabrication and measurement of toroidal metamaterials. In a metasurface, a strong toroidal response can be excited by a normally incident wave under frontal excitation. 158 This requires a special metasurface configuration. For example, Zografopoulos et al. experimentally demonstrated a single-layer all-dielectric metasurface with a strong toroidal response at subterahertz frequencies. 159 The metasurface is formed by dodecagonal prismatic elements of appropriately selected thickness, made of high-resistivity float-zone silicon and standing on a substrate. The scattering efficiencies of the multipolar mode contributions show that a strong, dominant toroidal excitation appears at a frequency of 93.2 GHz, where the other multipole scattering efficiencies are significantly suppressed (see Figure 7A). The electric and magnetic field distributions in one of the metamolecules further confirm the toroidal excitation in the metasurface: a pair of electric field loops circulating in opposite directions induces magnetic fields arranged head-to-tail along a loop (Figure 7B). Other dielectric metasurface designs have also been proposed to excite strong toroidal dipolar responses, such as the mirrored asymmetric silicon SRRs 167 (Figure 7C) designed by Liu et al. and the silicon-based E-shaped metasurface 160 (Figure 7D) designed by Han et al., which support extremely high Q factors owing to weak free-space coupling and strong optical localization at the subwavelength scale. | Toroidal dipole resonances in all-dielectric oligomer metasurfaces Owing to the unique electromagnetic properties of toroidal excitations, which differ from those of the electric and magnetic multipole modes, toroidal responses in metasurfaces have attracted growing attention in recent years.
All-dielectric oligomer metasurfaces are also employed as suitable platforms for the excitation of strong toroidal responses and the high-efficiency trapping of light. Xu et al. proposed a Mie metasurface composed of trimer clusters of high-index dielectric disks exhibiting strong toroidal dipolar responses in the microwave frequency range (see Figure 8A). 158 They directly identified the toroidal modes experimentally by near-field intensity mapping of the electric field, and two distinct toroidal dipole modes were observed in this metasurface design (Figure 8B). Mie metasurfaces composed of different types of dielectric oligomer disks exhibiting strong toroidal dipolar excitations were further considered by Zhang et al. 157 They systematically studied metamolecules constructed from low-loss silicon trimer, quadrumer, pentamer, and hexamer disks in the near-infrared band (Figure 8C). All these metamolecule configurations support a strong toroidal dipolar resonance when the polarization of the normally incident plane wave is directed along one of the symmetry axes of the oligomers. In particular, they found that the toroidal dipolar resonances spectrally disappear when the metamolecules are protected by even-order symmetry, such as C4 symmetry and C6 symmetry. Such works are promising for enriching the diversity of all-dielectric electromagnetic systems with strong toroidal excitations and for providing an effective flat-optics platform. | Anapole excitations in metamaterials The far-field scattering can be significantly suppressed through complete destructive interference between antiphased toroidal and electric dipolar moments, owing to their similar far-field radiation patterns, which leads to a dark state called the anapole. 128 Anapole means "without poles" in Greek, and the anapole has been employed as a classical model of elementary particles in descriptions of dark matter in the Universe. The current distribution of an anapole mode is associated with a toroidal dipole moment pointing along the torus symmetry axis (see Figure 9A); the oscillating currents flow on the surface of a torus along its meridians. These poloidal surface currents induce a set of magnetic dipoles m arranged head-to-tail along a loop, resulting in a toroidal dipole T. The radiationless property of the anapole can be achieved by exciting a second, electric dipole P that oscillates out of phase with the toroidal dipole T, resulting in a complete cancellation of the far-field scattering, since their scattering patterns are identical to each other. The total far-field dipolar scattering contribution can be written as E_sca ∝ P + ikT; the far-field radiation vanishes, E_sca = 0, when the electric and toroidal dipolar moments are out of phase with P = −ikT. This is the necessary condition for the anapole excitation, analyzed in detail in ref. 168. There, the z components of the amplitude and phase of the Cartesian dipole moment P_z and the toroidal moment ikT_z were calculated inside a dielectric nanosphere, along with the electric field distributions at the anapole and dipole wavelengths (Figure 9D). At the wavelength of the anapole excitation, the condition P = −ikT is satisfied in that the electric and toroidal dipolar moments have the same strength but are out of phase, which leads to the total scattering cancellation of the far-field radiation. The study of the nonradiating anapole mode may enrich our understanding of nonradiating sources and nonscattering objects.
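To make the cancellation condition concrete, here is a small helper (a sketch, not taken from the cited works) that quantifies how close a computed pair of moments is to the anapole condition P = −ikT, given that the combined dipolar far field scales as |P + ikT|.

```python
import numpy as np

def anapole_residual(P, T, k):
    """Normalized measure of how close (P, T) is to the anapole condition
    P = -i*k*T: 0 for perfect far-field cancellation, ~1 when the two
    dipolar contributions add constructively."""
    num = np.linalg.norm(P + 1j * k * T)
    den = np.linalg.norm(P) + k * np.linalg.norm(T)
    return num / den if den > 0 else 0.0

# Example with a hypothetical toroidal moment (arbitrary units):
k = 2 * np.pi / 700e-9                       # free-space wavenumber at 700 nm
T = np.array([1.0 + 0.0j, 0.0, 0.0])
print(anapole_residual(-1j * k * T, T, k))   # -> 0.0 (ideal anapole)
print(anapole_residual(+1j * k * T, T, k))   # -> 1.0 (fully radiating)
```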
The anapole has a nonzero potential but generates no field outside, and it may result in violations of reciprocity and Aharonov-Bohm-like phenomena. 128,169,170 Such a nonradiative excitation, originating from the interference of electric and toroidal dipole moments, can also provide a new direction for realizing an invisibility cloak based on the cancellation of the scattered radiation. To experimentally confirm the existence of the anapole mode in all-dielectric nanostructures, Miroshnichenko et al. fabricated silicon nanodisks on a substrate by standard nanofabrication techniques. 128 The dark anapole excitation was observed in the silicon nanodisk through measurements of the scattering spectra and the near-field distribution, where a pronounced minimum appeared in the scattering spectra and an associated maximum was found in the near-field energy (Figure 9B). Such a radiationless excitation makes the nanodisk almost invisible in the far field and thus provides a new way to realize the invisibility condition based on scattering cancellation. A related work demonstrated the anapole response inside a quasi-planar plasmonic metamaterial. 171 Driven by normally incident plane waves, the metamaterial could simultaneously excite antiphased toroidal and electric dipolar moments of nearly equal amplitude, leading to an anapole mode with strongly localized fields but weak far-field radiation (Figure 9C). This fine control strategy for radiationless anapole modes associated with near-field enhancement promises important applications in nonlinear optics, sensing, and cloaking. | Tunable toroidal dipole based on metamaterials With the rapid and impressive progress in toroidal metasurface technology, active and efficient control over the induced toroidal resonant modes has attracted increasing attention in recent years. Dynamic modulation of the toroidal dipole resonance could greatly broaden its fields of application, and in the recent literature many methods have been proposed to realize tunability of toroidal dipoles. Recent studies have demonstrated that toroidal moments are more sensitive to the incident wave power and to variations of the refractive index than the classical resonant modes. Exploiting the strong dependence of the toroidal response on the incident terahertz wave power, Gerislioglu et al. proposed a toroidal plasmonic metamodulator (see Figure 10A). 172 It was shown that the quality of the toroidal resonance decreases significantly with reduced incident beam power, which could be employed in applications and devices such as metaswitches and sensors. Besides, effective modulation of toroidal dipole resonances can also be realized by changing the geometric parameters of the metamolecules and the polarization of the incident wave (see Figure 10B). 173 Three samples with different gap distances between the resonators (3, 4, and 5 μm) were fabricated and analyzed in this work. Both experimental and numerical results showed a strong polarization sensitivity and a large modulation depth, which is promising for the development of advanced terahertz applications with polarization-dependent and high-Q properties. Moreover, tunability of the induced toroidal response has also been achieved by changing the electromagnetic parameters of the constituent materials, for example through the phase change of silicon.
174 It was found that the currents flowed along the metallic parts of the metamolecules when the silicon was in its dielectric state, whereas after the transition to the metallic state the currents flowing along the silicon inclusions dominated. The variation of the silicon conductivity leads to a blueshift of the toroidal dipolar frequency. In practical applications, such tunability enables switching between an "invisible" anapole mode and a "visible" dipole mode. To switch between these responses, Tian et al. employed the phase-change material GST (see Figure 10C). 175 With the phase of the GST changed between its amorphous and semicrystalline states, the mode can be switched between a radiative electric dipole resonance and a radiationless anapole state. Such arbitrary control of radiation states promises applications towards tunable meta-devices with scattering on demand. | Toroidal-based applications The formation of toroidal moments with extraordinary properties and tunability in electromagnetic configurations opens a horizon of potential applications such as ultrasensitive sensors, 176 metaswitches, 102 molecule detection, 112,177,178 lasing spasers, 179 and toroidal circular dichroism. 131 In the sections that follow, we review specific examples of applications of toroidal moments in metamaterials. Toroidal excitations with strong light localization were used to achieve a lasing spaser by Huang et al. (see Figure 11A). 179 The paper demonstrated that the toroidal dipole in a near-infrared metamaterial is capable of lowering the gain threshold for loss compensation, laser emission, and optical amplification. In this way, the authors realized an optical amplifier of coherent radiation driven by toroidal dipoles. Compared with the magnetic dipolar response, the toroidal mode guarantees a better collective response of the metamaterial, with improved coherence and a narrower divergence of the beam. Considering the strong high-Q characteristics of the toroidal response in planar metamaterials, it was recently verified that a minute quantity of analyte coated on a toroidal dipolar metasurface causes spectral shifts of the toroidal resonance, which allows detection of the dielectric or biochemical environment near the metasurface. High-Q toroidal resonances support strong interaction between the electromagnetic wave and a specific analyte (see Figure 11B), and toroidal responses with this high-Q property offer a promising platform for sensing devices and detection applications. 176 Analogously, high-Q toroidal metasurfaces, acting as excellent photonic devices for highly sensitive sensors, have wide applications in dielectric, chemical, liquid, and biological detection. 177 Rapid detection of infectious envelope proteins, taking advantage of the sharp toroidal moment in a plasmonic metasensor, was demonstrated with high sensitivity, repeatability, reliability, and accuracy (see Figure 11C). In another study of molecular detection, 178 the toroidal dipole was experimentally shown to be highly sensitive to molecular concentrations (see Figure 11D). | CONCLUSION AND OUTLOOK Realizing the localization of light at the subwavelength scale is of fundamental importance for the free local manipulation of light and for enhanced light-matter interactions. Its integration with currently developed nanoscale lithography techniques is promising for all-optical information processing on a chip. Toroidal excitations in artificial micro/nano-structured metamaterials provide a novel route to high-quality subwavelength light localization.
Progress in this field has been reviewed here, including toroidal excitations with strong light localization and high-Q responses in plasmonic resonant metamaterials and Mie-resonant dielectric structures, the optical anapole mode associated with toroidal excitations, and actively tunable high-Q toroidal modes. It has been shown that light localization platforms based on toroidal excitations can be exploited for effective and smart manipulation of light at the deep subwavelength scale. We have also discussed new developments related to applications of optical localization based on the toroidal mode, such as the toroidal metamaterial spaser and toroidal-mode-based environmental sensing. It is worth noting that subwavelength optical localization and local optical field manipulation based on toroidal excitations are still at an early stage overall, despite the novel achievements of the past decade, from theory to promising applications such as sensing. Their application in optics remains limited, and many problems still need to be studied to achieve the key technologies for freely controlling the localization features and radiation of subwavelength light fields, and for fabricating high-quality metamaterial-based toroidal configurations that integrate into modern nanophotonic devices and systems.
8,538.6
2021-03-02T00:00:00.000
[ "Physics" ]
Imaging through noise with quantum illumination Full-field imaging using quantum illumination distinguishes the true image from a structured thermal background. This PDF file includes: Supplementary Text Fig. S1. The quantum illumination advantage as a function of η and T, plotted with d = 0.0016; p_r = 0.0016; ε = 0.5. Fig. S2. The quantum illumination advantage as a function of η and ε, plotted with d = 0.0016; p_r = 0.0016; T = 0.0016. Fig. S3. Imaging using quantum illumination within an increasing thermal background. Fig. S4. Quantum illumination advantage A calculated over a range of increasing levels of thermal illumination. Fig. S5. Plot of the quantum illumination advantage A for the system under differing levels of optical loss. Fig. S6. The bit error rate P_err of detecting a target calculated over a range of thermal light levels. Fig. S7. The bit error rate P_err of detecting a target calculated over a range of thermal light levels using the second method. Table S1. Table of the average s weight values calculated using the second method for each of the different levels of thermal illumination. Supplementary Text Light Level The regime used as a baseline, when no additional environmental losses or thermal light are introduced, is the regime defined by Tasca et al. (2013) (28), in which the thresholded event rate per pixel per frame from the detection of SPDC events matches the thresholded event rate due to the clock-induced charge of the camera. The clock-induced charge of the camera using the aforementioned settings is ~0.0016 events per pixel per frame, and therefore the event rate for the regions within the SPDC beams is set to be ~0.0032 events per pixel per frame. For this system the AND-efficiency, given as η below, has been estimated to be 0.0021056. This value is calculated as the proportion of events occurring in the reference beam that have an anticorrelated event in the probe beam, corrected for randomly correlated events. It is determined from the data used to generate the correlation peak displayed in Fig. 3 of the main text. The quantum efficiency of the EMCCD camera (Andor iXon ULTRA 888 DU-888U3-CS0-#BV) is quoted as 77.75% when cooled to -100°C, while we operate the camera at -90°C. However, as we describe above, we threshold each frame, and this further reduces the detection efficiency, as not all photons detected by the camera are registered as events in the thresholded frames. Theory on contrast enhancement We establish here an expression for the expected quantum illumination advantage as a function of the experimental parameters. We show how the advantage depends on the losses in the target (probed) arm and on the amount of thermal light. We denote by p_r the rate of SPDC photons detected in the reference arm and by η the apparatus arm efficiency, i.e., including the losses occurring in the quantum illumination arm to the exclusion of target losses, so that the efficiency of the reference arm from the crystal to the camera is η_r and that of the probe arm from the crystal to the camera is η_p = ηε, where ε is the probe arm efficiency. This gives a detection rate of SPDC photons in the probe arm of p_p = η_p p_r. We denote by d the dark count rate and by T the thermal light rate. One can then write the AND event rates obtained with the quantum-correlated light: R_Bright^Q is the AND event rate detected in the bright parts of the object, and R_Dark^Q is the AND event rate detected in the dark parts of the object.
The rates for the classical simple average can be expressed in the same way, and from these equations one can express the quantum illumination advantage A as a function of the experimental parameters. We have plotted the contrast advantage A as a function of η and T in fig. S1, and as a function of η and ε in fig. S2. One can observe that the advantage increases not only with increasing T but also when losses are added, i.e., when ε decreases. Note, however, that adding too bright a thermal light T will result in the saturation of a single-photon pixel detector and therefore in the failure of the protocol and the loss of any advantage. This technical issue can be solved by using a detector array, such as a SPAD array, with a higher temporal resolution, thereby reducing the likelihood of it being saturated by the thermal light. Quantum illumination advantage in the presence of thermal noise Figure S3 compares the results under increasing thermal light levels. It is seen that, despite the introduction of increasing environmental noise events into the probe arm, the quantum illumination advantage A increases as the level of thermal illumination increases. This trend may also be seen in the plot of the quantum illumination advantage A against the thermal illumination level in fig. S4, and it is the expected behaviour from the equations we presented in the preceding section. Fig. S3. Imaging using quantum illumination within an increasing thermal background. Images of the UoG object with the classical image created by averaging all frames (second column), and the quantum illumination AND-image built from the sum of the results of performing an AND-operation to select correlated events in the reference and probe beams (third column). The UoG object illuminated by the probe beam is imaged under conditions of an increasing thermal background level (see the ratio of thermal illumination to SPDC illumination at 710 ± 5 nm on the left). The quantum illumination advantage A under these thermal illumination levels, for images constructed over 1.5 million frames, is displayed on the right. The given uncertainty is the standard error on the mean calculated using blocks of 100,000 frames. Images are 45 x 45 pixels. Figure S4 shows the quantum illumination advantage A under increasing levels of thermal noise introduced into the system; points additional to those for which results are shown in fig. S3 are included. Fig. S4. Quantum illumination advantage A calculated over a range of increasing levels of thermal illumination. The ratio of thermal background to SPDC illumination is given for the range of levels of thermal illumination. The given uncertainty is the standard error on the mean calculated using blocks of 100,000 frames. Quantum illumination advantage in the presence of losses Figure S5 shows the quantum illumination advantage A under identical thermal light levels, ~0.0016 thermal events per pixel per frame, with additional optical losses introduced into the probe beam after interaction with the object. It is seen that, despite the introduction of losses into the probe arm, the quantum illumination advantage A increases as the level of losses increases, as may also be seen in Fig. 5 of the main text. This trend is the expected behaviour from the equations we presented in the preceding section.
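The following minimal Python sketch illustrates these trends under a simple assumed per-pixel, per-frame event model; the rate expressions and parameter values below are illustrative assumptions, not the exact formulas of this supplement.

```python
import numpy as np

# Assumed model (illustrative only):
#   classical bright/dark rates:  eta*eps*p_r + d + T  /  d + T
#   AND-image bright/dark rates:  p_r*(eta*eps + d + T)  /  p_r*(d + T)
# with p_r the reference SPDC rate, eta the apparatus efficiency, eps the
# probe arm (target) transmission, d the dark counts and T the thermal rate.

def contrast(bright, dark):
    return (bright - dark) / (bright + dark)

def qi_advantage(eta, eps, p_r=0.0016, d=0.0016, T=0.0016):
    c_classical = contrast(eta * eps * p_r + d + T, d + T)
    c_quantum = contrast(p_r * (eta * eps + d + T), p_r * (d + T))
    return c_quantum / c_classical

# The assumed model reproduces the reported trends: A grows both with a
# brighter thermal background T and with stronger target losses (lower eps).
for T in (0.0016, 0.016, 0.16):       # hypothetical illustration values
    print(f"T={T:.4f}: A={qi_advantage(eta=0.5, eps=0.5, T=T):.1f}")
for eps in (0.8, 0.5, 0.2):
    print(f"eps={eps}: A={qi_advantage(eta=0.5, eps=eps):.1f}")
```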
Fig. S5. Plot of the quantum illumination advantage A for the system under differing levels of optical loss. The quantum illumination advantage A is assessed under these levels of optical losses for images constructed over 1.5 million frames. The given uncertainty is the standard error on the mean calculated using blocks of 100,000 frames. Bit Error Rate determination method and analysis In contrast to the theoretical considerations reported in (1,2), which assume that the quantum illumination apparatus is ideal in the sense that the idler arm efficiency is unity, we cannot make such an assumption. As a consequence we have to distinguish two strategies for guessing the presence of a target. In the first, no knowledge about the noise or the target is used, and the use of the AND-image alone surpasses a classical strategy. In the second, the system is assumed to be fully calibrated, so that the noise levels and the transmittance of the target are known, and one needs to use both the correlated AND-image data and the classical image data acquired through the quantum illumination protocol to obtain an advantage. We show that in both cases observing a contrast advantage in the AND-images implies that one would also have an advantage in guessing the presence or absence of the target. 1) With a blind strategy We describe here the method used to determine the bit error rate in estimating the presence or absence of a target, through both a quantum illumination and a classical illumination protocol. The bit error rate P_err is determined both by the rate of predicting the presence of the target when it is absent, P_Dark(1), and by the rate of predicting its absence when it is present, P_Bright(0). Under the hypothesis of equal probability for the target to be present or absent, this can be written as P_err = (1/2) P_Dark(1) + (1/2) P_Bright(0). To determine the error rate experimentally, one simply takes the detection of zero events on a pixel of the image that comprises the target as revealing the absence of the target object, and the detection of one event or more as revealing its presence, and one tries to find the optimal classical strategy in doing so (1). Knowing the ground truth (i.e., the target shape), one can assess the error rate in determining the presence or absence of the target object through this method. In order to find the optimal classical strategy and show that the quantum strategy exhibits an advantage, we make the classical prediction on an image composed of the sum of N thresholded frames and search for the value of N that minimises the classical error rate P_err^C for our particular experimental conditions, when no assumption about the object or the noise present can be made. In the case of quantum illumination, the image is the sum of the results of the AND-operation between the reference and probe beams, and therefore a signal event is added to the resulting AND-image only when a detection event occurs on the correlated idler pixel, thus post-selecting the prediction. For a fair comparison we use and sum the same number of post-selected events in the images used for quantum illumination as were present in the sum of N thresholded frames. Because the fill factor of the idler events in the detected reference beam is given by η(p + d), we use N/(η(p + d)) frames to compose an image using the sum of post-selected events only, and make our prediction based on these images. Knowing the ground truth, one can again evaluate the bit error rate.
It is important to note that because N is optimised for the classical scheme only, the quantum illumination protocol may be sub-optimal under certain conditions; nevertheless, we find a systematic advantage. Practically, P_err is evaluated by applying a mask of the UoG target object and finding the number of background pixels that feature a detection event (false positives) and the number of pixels that comprise the UoG object but have no detected events (false negatives). From our data, the bit error rate was determined across a range of thermal light levels. One can predict the theoretical values of the error rate in the classical case, P_err^C, and in the quantum illumination case, P_err^Q. Starting with the classical case, under the hypothesis of equal probability for the target to be present or absent, and when the prediction is made on an image composed of the sum of N frames, one can write P_err^C = (1/2)(1 − (P_Dark^C(0))^N) + (1/2)(P_Bright^C(0))^N, (S5) where P_Dark^C(0) = 1 − (d + T) is the probability of detecting no photons in a single frame within a particular pixel when the target is absent, and P_Bright^C(0) is the probability of detecting no photons in a single frame within a particular pixel when the target is present. The quantum illumination strategy consists of guessing the presence or absence of the target based on post-selected detections, i.e., by counting the probe events or non-events occurring at a particular pixel only when a reference event is detected on the corresponding correlated pixel. When making the prediction of presence or absence on images composed of N such post-selected events per pixel, one can write P_err^Q = (1/2)(1 − (P_Dark^Q(0))^N) + (1/2)(P_Bright^Q(0))^N, (S6) where P_Dark^Q(0) is the probability of detecting no photons within a particular pixel of a single frame when an idler photon has been detected on the correlated pixel position and the target is absent, and P_Bright^Q(0) is the corresponding probability when the target is present. We used equations (S5) and (S6) to fit the experimental data reported in fig. S6. Importantly, one can note that the advantage in estimating the presence or absence of the target is based on the same mechanism as the advantage obtained in contrast. It can be understood by realising that, for equal values of the accumulation of N events used in the quantum illumination and classical cases, one would obtain the same background in the absence of the target and therefore the same error rate in wrongly predicting the absence of the target (the first terms in equations (S5) and (S6) are equal). This means that the quantum illumination advantage in determining the presence or absence of the object under such conditions is based on a higher number of detected events in the bright parts of the images compared with the classical images. This higher rate of detected events means that guessing the presence of the object is more accurate with quantum illumination. Therefore, when the quantum and classical images have the same background, it is the higher intensity of the bright parts of the quantum image that explains the increased prediction accuracy; an increased intensity for the pixels in the frame that comprise the object, with an equivalent background, also means that the quantum image will exhibit a higher contrast than the classical image.
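A short numerical sketch of this blind-strategy comparison follows, using equations (S5) and (S6) as reconstructed above; the bright-pixel event probabilities ηεp_r + d + T (classical) and ηε + d + T (heralded) are modelling assumptions consistent with the text (equal backgrounds, brighter correlated signal), and the parameter values are illustrative.

```python
import numpy as np

def p_err(p0_dark, p0_bright, N):
    """Eqs. (S5)/(S6): P_err = 1/2*(1 - p0_dark**N) + 1/2*p0_bright**N,
    guessing 'present' whenever >= 1 event is seen in N accumulations."""
    return 0.5 * (1.0 - p0_dark ** N) + 0.5 * p0_bright ** N

eta, eps, p_r, d, T = 0.5, 0.5, 0.0016, 0.0016, 0.0016  # illustrative values

p0_dark = 1.0 - (d + T)                        # same for both strategies
p0_bright_c = 1.0 - (eta * eps * p_r + d + T)  # classical bright pixel
p0_bright_q = 1.0 - (eta * eps + d + T)        # heralded (post-selected) pixel

N = np.arange(1, 2001)
ber_c = p_err(p0_dark, p0_bright_c, N)
ber_q = p_err(p0_dark, p0_bright_q, N)
i = int(np.argmin(ber_c))                      # N optimised for the classical case
print(f"N={N[i]}: classical BER={ber_c[i]:.3f}, quantum BER={ber_q[i]:.3f}")
```

With the backgrounds equal by construction, the lower quantum bit error rate at the classically optimal N comes entirely from the higher heralded event probability in the bright pixels, mirroring the mechanism described above.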
2) When the target transmittance and noises are known Following (2), one can try to make predictions on the presence or absence of the object when the transmission of the target ε and the noise levels are known. In such a case the best strategy is to accumulate events and use a threshold level that minimises the error rate; this threshold can be found theoretically from the system parameters, or by calibrating the system with such parameters. In this context, and in contrast with the theoretical considerations reported in (2), which assume that the idler arm efficiency is ideal, we cannot make such an assumption. A consequence is that if, in our case, we simply use the AND-images to perform our estimations of the presence of the target, the results will be worse than with the classical image, because our classical image contains a greater number of events and therefore less shot noise than the quantum illumination AND-image. However, with the complete set of data acquired through quantum illumination, one can still improve the bit error rate compared with the classical case. To do so, one needs to use a combination of both the classical data (classical images) and the non-classical data (the AND-images), in a similar way to how the optimal strategy for sub-shot-noise measurement is to use both correlated and uncorrelated data (20). To understand this, note that observing a quantum illumination advantage means that the ensemble of post-selected events detected within the quantum illumination AND-image is more valuable than the same number of non-correlated events in predicting the presence of the object. Indeed, for an equal number of events, the noise in both the bright parts and the dark parts of the image will be the same in the AND-image and in a classical image; however, the contrast is higher in the AND-image than in a classical image with the same number of events. A higher contrast means that the places where the object is present and absent are further apart in intensity, and therefore, for the same shot-noise level, the optimal guess will be more accurate in the AND-image. However, because of the non-ideal idler arm efficiency η, one will have more events in the total classical image than in the AND-image. The best strategy is then to use a combination of both images in order to give more weight to the correlated events than to the uncorrelated ones. And as long as the AND-image exhibits a contrast advantage, this will lead to an improvement of the bit error rate over the simple use of the classically acquired image. Practically, this means that one would have to use an optimal image I_opt = s I_AND + (1 − s) I_Classical, (S7) where the weight s (0 ≤ s ≤ 1) depends in particular on the value of η: the greater the efficiency, the more of the events of I_AND are kept and the greater s should be. It also depends on any other parameter affecting the contrast advantage: the higher the contrast advantage, the more useful I_AND will be and therefore the greater s should be. In particular, a greater timing resolution in the detection of the correlations can further reduce the probability of detecting false coincidences due to the thermal light at a given light level. This means that a better timing resolution, such as that accessible with SPAD arrays, can improve the contrast advantage and will therefore mean that the value used for s in such circumstances should be greater.
Finally, we would like to conclude this paragraph by remarking again that a contrast advantage in I_AND implies that it can be used to build a strategy that is better than the classical strategy; that is, a contrast advantage through quantum illumination implies an error rate advantage. Bit Error Rate Results Here we present an analysis of how this protocol may be used in an application where the presence or absence of an object needs to be assessed. This is the context in which Lloyd (1) originally proposed the quantum illumination protocol, and in which it has clear applications in realising quantum LIDAR or quantum radar. The error rate in detecting the presence or absence of an object in the probe beam path is assessed over a range of light levels using the 'blind' strategy, in which no prior information is assumed, in fig. S6. The advantage in the probability of successfully determining the presence or absence of an object with quantum illumination may be seen over the whole range of thermal light levels. The points lie below the curve for the quantum illumination AND-image because the thermal illumination is not entirely flat, as may be seen in the images presented in fig. S3. Fig. S6. The bit error rate P_err of detecting a target calculated over a range of thermal light levels. The classical data are represented by the black crosses and the quantum illumination AND-image by the red crosses. The black curve represents the theoretical optimum bit error rate for an image constructed from coherent-state illumination. The red curve represents the equivalent curve for the quantum illumination AND-image calculated using experimental parameters. These theoretical curves are valid under the assumption of an unknown background and target object and assuming Poissonian camera dark noise and thermal light. Error bars are the standard error on the mean for the bit error rate. In the case of the second strategy, a weighted sum (see equation S7) of the classical image and the quantum illumination AND-image is used to calculate the bit error rate. For both the classical image and the weighted sum of the classical and AND-images, the mean and standard deviation of the background pixels (μ_bg and σ_bg) and of the pixels that comprise the UoG object (μ_UoG and σ_UoG) are found, and the threshold T is set from these statistics. The images are then thresholded appropriately and the bit error rate calculated. This is performed over a range of weights s (0 ≤ s ≤ 1) and the minimum value of the bit error rate determined. It may be seen in fig. S7 that the advantage in the bit error rate is decreased compared with that of the 'blind' strategy. This is due to the non-ideal idler arm efficiency resulting in a non-unity value for the weight s. Fig. S7. The bit error rate P_err of detecting a target calculated over a range of thermal light levels using the second method. The purely classical data are represented by the black crosses and the weighted sum of the classical image and the quantum illumination AND-image by the red crosses. The given uncertainty is the standard error on the mean calculated using blocks of 100,000 frames. The values of s calculated to optimise the bit error rate of the compound images, as per method 2 of the bit error rate calculation, for the UoG objects under a range of differing thermal illumination conditions are shown in table S1 accompanying fig. S7.
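The second strategy can be sketched as follows in illustrative Python with synthetic images; the specific threshold rule used here (equal standardized distance to the background and object means) and the toy image statistics are assumptions, not the calibration of this experiment.

```python
import numpy as np

def bit_error_rate(img, mask):
    """BER for a thresholded image given the ground-truth object mask.
    Threshold rule below is an assumption: the point at equal standardized
    distance from the background and object means."""
    mu_bg, sd_bg = img[~mask].mean(), img[~mask].std()
    mu_obj, sd_obj = img[mask].mean(), img[mask].std()
    thr = (mu_bg * sd_obj + mu_obj * sd_bg) / (sd_bg + sd_obj)
    false_pos = np.mean(img[~mask] >= thr)   # 'present' guessed, target absent
    false_neg = np.mean(img[mask] < thr)     # 'absent' guessed, target present
    return 0.5 * (false_pos + false_neg)

def best_weight(img_and, img_classical, mask, n_steps=101):
    """Scan the weight s of eq. (S7), I_opt = s*I_AND + (1-s)*I_classical."""
    ss = np.linspace(0.0, 1.0, n_steps)
    bers = [bit_error_rate(s * img_and + (1 - s) * img_classical, mask)
            for s in ss]
    i = int(np.argmin(bers))
    return ss[i], bers[i]

# Toy demonstration with synthetic 45 x 45 images (hypothetical statistics).
rng = np.random.default_rng(0)
mask = np.zeros((45, 45), dtype=bool)
mask[15:30, 10:35] = True
img_classical = rng.poisson(5.0, mask.shape) + rng.poisson(1.0 * mask)
img_and = rng.poisson(1.0, mask.shape) + rng.poisson(2.0 * mask)
s_opt, ber = best_weight(img_and.astype(float), img_classical.astype(float), mask)
print(f"optimal weight s = {s_opt:.2f}, bit error rate = {ber:.4f}")
```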
The average value of s for this set of data is 0.7702 ± 0.0401, and the optimal values found for each light level appear consistent within the error bars. The fact that these values of s are non-unity indicates that a greater advantage could be achieved if the system efficiency were further increased, and also with a better correlation timing resolution, which could lead to an improved contrast advantage.
5,051.2
2019-07-22T00:00:00.000
[ "Physics" ]
Space-Time in Quantum Theory Max Born, not Werner Heisenberg as is usually assumed, created the original version of Quantum Theory, "Matrix Mechanics". The fundamental laws, commutation relations and quantum equations of motion, resulted from Born's recognition of the basic principle of quantum physics: to each change in nature corresponds an integer number of quanta of action h. Action variables may only change by integer values of h, requiring all other physical quantities to change by discrete steps, "quantum jumps". The mathematical implementation of this principle led to commutation relations and quantum equations of motion, published by Born and Jordan in September 1925. Most importantly, the classical notion of "time", as one common continuous time variable with nature evolving continuously "in time", has to be replaced by an infinite manifold of transition rates for discontinuous quantum transitions. The notion of a point in space-time loses its physical significance. Quantum uncertainties of time and position, just as of any other physical quantity, are necessary consequences of the quantization of action. The essential differences between Born's discontinuous quantum physics and the standard interpretation, which relies on classical space-time concepts, will be described. Introduction When Max Born and Pascual Jordan published the fundamental equations of Quantum Theory in September 1925 [1], their highly peculiar mathematical form was met with widespread skepticism and misunderstanding. Physical quantities were no longer represented by continuous variables, but by Hermitian matrices; the familiar differential equations of classical physics were replaced by mysterious equations relating different matrices to each other. Before the scientific community had time to analyze thoroughly the rationale leading to these equations and their physical content, Schrödinger published the stationary Schrödinger equation [2] in late January 1926, and five months later the time-dependent Schrödinger equation [3]. Their mathematical form was more familiar, partial differential equations; but the central quantity to be determined, the wave-function ψ(r, t), was equally mysterious. Both Matrix and Wave Mechanics originated from the conviction that the classical concepts of Bohr's Old Quantum Theory had to be abandoned to understand quantum phenomena; radically new concepts were required. But Born-Jordan on one side and Schrödinger on the other had very different ideas concerning the physical content to be described. They agreed that radical changes away from the foundations of classical physics were required, but the directions in which they proceeded were opposite. Matrix Mechanics was built on the particle concept. That was still similar to Newtonian mechanics; but here the similarity ends: already several years before the final version of Matrix Mechanics was published, Born had concluded that the classical space-time continuum could not be carried over to the quantum scale. The Fundamental Principle of Quantum Physics The basic laws were not only published by Born and Jordan; they were also constructed from the new mode of thought which Born deemed necessary. 1 Already well before 1925, Born became convinced that the entire system of basic concepts in physics, which had been used to describe the macroscopic world of common experience, would have to be rebuilt radically. The classical space-time continuum as the basis for all understanding must be abandoned on the elementary quantum scale.
In December 1919 he wrote to Wolfgang Pauli [6]: "For quite some time already I am pursuing this idea, although without success so far: The solution of all quantum problems must be based on very fundamental principles. One should not transfer the concept of space-time as a four-dimensional continuum from the macroscopic world of common experience to the atomistic world; manifestly the latter requires a different type of manifold". 2 In lectures during the winter semester of 1923/24 (published in November 1924 [7]) Born specified what he had in mind: 1) "The systematic transformation of classical mechanics into a discontinuous atomic mechanics". 3 2) "The new mechanics replaces the continuous manifold of (classical) states by a discrete manifold, which is described by "quantum numbers"". 4 3) "Transitions between different states are determined by probabilities". 5 Already at that time, Born must have had a precise idea, a fundamental principle, of how to implement the program he had defined. This could not be Bohr's Old Quantum Theory, which was constructed from heuristic arguments based on classical concepts; the lectures of 1923/24 [7] demonstrated its narrow limits. As Born had stated in 1919 [6], something radically new was required. It was clear to almost everyone that quantization of action contained the key; the main question was: is there a general principle to be drawn, applicable to all physical phenomena? Born had recognized this fundamental principle of quantum physics: as the term "action" suggests, the dynamical behavior of all physical systems is quantized. At the elementary level, all changes in nature consist of discontinuous steps, "quantum jumps" ("Quantensprünge"). All elementary changes correspond to integer numbers of quanta of action; action variables may only change by integer multiples of h. This general quantization condition provides the basis for a logically consistent Quantum Theory; all further conclusions are direct consequences of this quantization condition. The key is contained in "discrete manifold described by quantum numbers": different quantum states n and m are to be distinguished by different sets of integers n = (n_1, n_2, n_3, ...) and m = (m_1, m_2, m_3, ...). The integers n characterize the action variables J_n of the corresponding state. Transitions between quantum states n and m correspond to changes of the action variables ∆J_n,m by integer multiples of Planck's quantum of action: ∆J_n,m = ((n_1 − m_1)h, (n_2 − m_2)h, (n_3 − m_3)h, ...). (1) Discontinuous behavior of all elementary processes requires a new concept of space-time at the atomic and subatomic level. The most important consequence concerns "time". Classically, it is assumed that there exists one common time variable t and that all changes in nature are continuous in this time variable t.
2 "Gerade diesen Gedanken verfolge ich seit längerer Zeit, allerdings bisher ohne positiven Erfolg, nämlich, dass der Ausweg aus allen Quantenschwierigkeiten von ganz prinzipiellen Punkten aus gesucht werden muss: man darf die Begriffe des Raumes und der Zeit als ein 4-dimensionales Kontinuum nicht von der makroskopischen Erfahrungswelt auf die atomistische Welt übertragen, diese verlangt offenbar eine andere Art von Mannigfaltigkeit als adäquates Bild". 3 "Die systematische Verwandlung der klassischen Mechanik in eine diskontinuierliche Atommechanik". The book "Vorlesungen über Atommechanik, 1. Band" (Lectures on atomic mechanics, 1st volume) [7] is primarily devoted to describing Bohr's "Old Quantum Theory" and to demonstrating its deficiencies. At the very end, on page 341, Born defines the path towards the "final atomic mechanics" ("endgültige Atommechanik"). Born had used this term in the preface; the intended "2nd volume" should contain the "endgültige Atommechanik". 4 "Diese neue Mechanik ist dadurch gekennzeichnet, dass an Stelle der kontinuierlichen Mannigfaltigkeit von Zuständen eine diskrete Mannigfaltigkeit tritt, die durch Quantenzahlen beschrieben wird". (page 18) 5 "Wir schreiben jedem Übergang zwischen zwei stationären Zuständen eine a priori Wahrscheinlichkeit zu." (page 10)
Band" (Lectures on atomic mechanics, 1st volume) [7] is primarily devoted to describe Bohr's "Old Quantum Theory" and to demonstrate its deficiencies. At the very end on page 341, Born defines the path towards the "final atomic mechanics" ("endgültige Atommechanik"). Born had used this term in the preface, the intended "2nd volume" should contain the "endgültige Atommechanik". 4 "Diese neue Mechanik ist dadurch gekennzeichnet, dass an Stelle der kontinuierlichen Mannigfaltigkeit von Zuständen eine diskrete Mannigfaltigkeit tritt, die durch Quantenzahlen beschrieben wird". (page 18) 5 Wir schreiben jedemÜbergang zwischen zwei stationären Zuständen eine a priori Wahrscheinlichkeit zu.'' (page 10) are continuous in this time variable t. The differential equations of motion of classical physics rely on this assumption. Continuity in time suggested that nature behaves deterministically, at least in principle. For given initial conditions at some point in time, the solution of the differential equations seemingly determine the behavior at any time in the past or future. Discontinuous quantum behavior eliminates the justification for the notion of a continuous time; the classical time variable t has no physical relevance at the quantum scale. Furthermore, the replacement of the differential equations of motion of classical physics by quantum mechanical difference equations eliminates the justification for determinism. Born concludes "Transitions between different states are determined by probabilities". The continuous time of classical physics has to be replaced by an infinite manifold of transition rates of discontinuous and statistical quantum transitions. The Early Born-Einstein Debate Born's intention to replace the classical space-time continuum by a discrete manifold did not come from sudden inspiration, but grew out of discussions with Einstein. Even before Born, Einstein questioned the relevance of the space-time continuum on atomic and subatomic scales. Not only his contributions to Quantum Theory of radiation [8,9], but also to Relativity Theory played a decisive role. In the lecture "On the Theory of Relativity" [10], Einstein explains his motives: "The abandonment of certain notions connected with space, time, and motion hitherto treated as fundamentals must not be regarded as arbitrary, but only as conditioned by observed facts..... It is in general one of the essential features of the theory of relativity that it is at pains to work out the relations between general concepts and empirical facts more precisely. The fundamental principle here is that the justification for a physical concept lies exclusively in its clear and unambiguous relation to facts that can be experienced". 6 This same reasoning also raised the question whether the classical concept about space and time could be maintained at atomic scales. How could the concept of a "point" and extremely small distances in space-time be clearly and unambiguously defined by measurements? On scales of common use, rigid rods and clocks could be used to measure lengths and times; on very large and cosmological scales, the light path replaced rigid rods. On subatomic scales, however, these measuring tools failed. Discontinuous behavior was not alien to Einstein, either. In 1905, he had postulated that radiation consists of elementary objects, photons, which can only be created and absorbed as finite entities [8]. 
In late 1916 and early 1917, Einstein's "Quantentheorie der Strahlung" (Quantum Theory of Radiation) [9] used the photon concept to describe the necessary conditions for thermal equilibrium between matter and radiation. The transfer of energy and momentum between matter and radiation occurs in discontinuous and statistical steps by emission and absorption of photons. The question whether a continuum theory could still be maintained on the quantum scale arose and had to be answered. 6 "Das Aufgeben gewisser bisher als fundamental behandelter Begriffe über Raum, Zeit und Bewegung darf nicht als freiwillig aufgefasst werden, sondern nur als bedingt durch beobachtete Tatsachen...... Es ist überhaupt einer der wesentlichsten Züge der Relativitätstheorie, dass sie bemüht ist, die Beziehungen der allgemeinen Begriffe zu den erlebbaren Tatsachen schärfer herauszuarbeiten. Dabei gilt stets als Grundsatz, dass die Berechtigung eines physikalischen Begriffes ausschließlich in seiner klaren und eindeutigen Beziehung zu den erlebbaren Tatsachen beruht." So it is not really surprising that Einstein questioned the relevance of classical space-time concepts on atomic scales even before Born did. Einstein's letter of 1917 [11] testifies that he struggled with the problem of how to maintain a continuum theory and how to define lengths and times on the quantum scale. "Strictly speaking, even the concept of the ds² evaporates into an empty abstraction, in that ds² cannot be construed strictly as a measurement result..... If the molecular interpretation of matter is the correct (practicable) one, that is, if a portion of the world must be represented as a finite number of moving points, then the continuum in modern theory contains much too multifarious possibilities. I also believe that this multifariousness is to blame for the foundering of our tools of description on quantum theory. The question seems to me to be how one can formulate statements about a discontinuum without resorting to a continuum (space-time); the latter would have to be banished from the theory as an extra construction that is not justified by the essence of the problem and that corresponds to nothing "real"." 7 Einstein admits that he has no solution; he decides to tentatively retain the continuum as a mathematical tool and let eventual success decide its usefulness. "A logically more satisfactory description is obtainable (a posteriori) by relating the theory's more complex individual solutions to observed facts. A standard could then be correlated with a certain type of atomic system that could not claim a privileged position in the theory. Thus a four-dimensional continuum can still be maintained and, in upholding the postulate of general covariance, it then has the advantage of circumventing the arbitrariness in the choice of coordinates". 8 In a lecture "Geometry and Experience" ("Geometrie und Erfahrung") [12] in January 1921, Einstein again discusses the problem of space-time on the quantum scale. A mathematical, i.e. purely axiomatic, geometry of space-time must be distinguished from "practical geometry". Whereas the former constitutes an abstract mathematical formalism, the latter is meant to be a physical science, which includes the possibility of measurements. While mathematics as such is an exact science, its relation to "physical reality" should be viewed critically. While Relativity Theory constitutes the primary topic of the lecture, the problem of space-time on atomic scales is addressed as well.
7 "Streng genommen verflüchtigt sich auch der Begriff des ds² in eine leere Abstraktion, indem ds² nicht strenge als Messresultat aufgefasst werden kann.... Wenn die molekulare Auffassung der Materie die richtige (zweckmäßige) ist, d. h. wenn ein Teil der Welt durch eine endliche Zahl bewegter Punkte darzustellen ist, so enthält das Kontinuum der heutigen Theorie zu viel Mannigfaltigkeit der Möglichkeiten. Auch ich glaube, dass dieses zu viel daran schuld ist, dass unsere heutigen Mittel der Beschreibung an der Quantentheorie scheitern. Die Frage scheint mir, wie man über ein Diskontinuum Aussagen formulieren kann, ohne ein Kontinuum (Raum-Zeit) zu Hilfe zu nehmen; letzteres wäre als eine im Wesen des Problems nicht gerechtfertigte zusätzliche Konstruktion, der nichts "Reales" entspricht, aus der Theorie zu verbannen." 8 "Eine logisch befriedigendere Darstellung lässt sich dadurch (a posteriori) erzielen, dass man die einzelnen komplexeren Lösungen der Theorie mit Beobachtungsthatsachen in Beziehung setzt. Ein Maßstab würde dann einem Atomsystem von gewisser Art entsprechen, welches in der Theorie keine Sonderstellung beanspruchen könnte. Dabei kann man immer noch an dem vierdimensionalen Kontinuum festhalten und hat dann bei Festhalten an dem Postulat der allgemeinen Kovarianz den Vorteil, die Willkür einer Koordinatenwahl zu umgehen."
In particular the notions of "point" and "line" loose their physical significance on subatomic scales. "As far as the propositions of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality..... In axiomatic geometry the words "point", "straight line", etc., stand only for empty conceptual schemata. That which gives them content 7 "Streng genommen verflüchtigt sich auch der Begriff des ds 2 in eine leere Abstraktion, indem ds 2 nicht strenge als Messresultat aufgefasst werden kann.... Wenn die molekulare Auffassung der Materie die richtige (zweckmäßige) ist, d. h. wenn ein Teil der Welt durch eine endliche Zahl bewegter Punkte darzustellen ist, so enthält das Kontinuum der heutigen Theorie zu viel Mannigfaltigkeit der Möglichkeiten. Auch ich glaube, dass dieses zu viel daran schuld ist, dass unsere heutigen Mittel der Beschreibung an der Quantentheorie scheitern. Die Frage scheint mir, wie manüber ein Diskontinuum Aussagen formulieren kann, ohne ein Kontinuum (Raum-Zeit) zu Hilfe zu nehmen; letzteres wäre als eine im Wesen des Problems nicht gerechtfertigte zusätzliche Konstruktion, der nichts "Reales" entspricht, aus der Theorie zu verbannen." 8 "Eine logisch befriedigendere Darstellung lässt sich dadurch (a posteriori) erzielen, dass man die einzelnen komplexeren Lösungen der Theorie mit Beobachtungsthatsachen in Beziehung setzt. Ein Maßstab würde dann einem Atomsystem von gewisser Art entsprechen, welches in der Theorie keine Sonderstellung beanspruchen könnte. Dabei kann man immer noch an dem vierdimensionalen Kontinuum festhalten und hat dann bei Festalten an dem Postulat der allgemeinen Kovarianz den Vorteil, der Willkür einer Koordinatenwahl zu umgehen." is not relevant to mathematics......All length-measurements in physics constitute practical geometry........It is true that this proposed physical interpretation of geometry breaks down when applied immediately to spaces of submolecular order of magnitude". 9 Nevertheless, Einstein does not rule out that a mathematical field theory might still be of use in quantum physics, even if the mathematical variables do not have their classical significance. "The attempt may still be made to ascribe physical meaning to those field concepts which have been physically defined for the purpose of describing the geometrical behavior of bodies which are large as compared with the molecule. Success alone can decide as to the justification of such an attempt, which postulates physical reality for the fundamental principles of Riemann's geometry outside of the domain of their physical definitions. It might possibly turn out that this extrapolation has no better warrant than the extrapolation of the concept of temperature to parts of a body of molecular order of magnitude." 10 During these years, Born and Einstein met and discussed regularly. And Born explicitly refers to Einstein when he questions the relevance of precise coordinates at atomic and subatomic scales. "Relativity Theory emerged because Einstein recognized the impossibility in principle to determine absolute simultaneity of two events occurring in different locations". And he concludes "The true laws of nature are determined only by such quantities, which are observable in principle [13] 11 ..... If magnitudes lacking this property occur in our theories, it is a symptom of something defective. In order to determine lengths or times, measuring rods and clocks are required. 
The latter, however, consist themselves of atoms and therefore break down in the realm of atomic dimensions.....it appears justified to give up altogether the description of atoms by means of such quantities as "coordinates of an electron" at a given time" [14]. If "exact" is taken to have mathematical significance, neither position nor time nor any other physical quantity may be measured or known "exactly". Both Einstein and Born had reached the conclusion, that the traditional concept of spacetime of macroscopic physics cannot simply be transferred to quantum physics. Basic notions such as a point in space-time and a precise coordinate system are mathematical constructs, but cannot be defined experimentally; arbitrarily small intervals are unmeasurable in principle. The discontinuities occurring in the interaction of radiation with matter indicate that the notion of "continuum" "would have to be banished from the theory as an extra construction that is not justified by the essence of the problem and that corresponds to nothing "real"." [11] Nevertheless, Born and Einstein chose different mathematical routes to attack the quantum puzzle. Einstein retained differential equations; he accepted that the functions to be determined and the continuous variables used might not have clear physical significance, hoping that "(a posteriori) the theory's more complex individual solutions could be related to observed facts" [11]. Born, on the other hand, aimed for a mathematical representation, which should express the discontinuous and statistical behavior of nature explicitly. The space-time continuum should no longer be part of the theoretical formalism. Quite generally, the classical differential equations should be transformed into quantum mechanical difference equations. In his letter of Jan 27, 1920, Einstein reacted to Born's suggestion: [15] "I do not believe that one must abandon the continuum in order to solve the problem of quanta.... In principle, of course, the continuum could be abandoned. But how one should describe the relative motion of n points without the continuum?....I believe as before that an overdetermination ought to be sought with differential equations for which the solutions no longer have continuum properties. But how??.... 12 Similarly several weeks later: [16] "I dont believe that the theory can dispense with the continuum. But my attempts at giving tangible form to my pet idea of interpreting quantum structure through an overdetermination with differential equations refuse to succeed." 13 During the following decades, Einstein will continue his attempts, based on overdetermination of differential equations. His main aim will be a unified field theory, encompassing General Relativity, electromagnetism, and Quantum Theory; without success, however. Mathematical Implementation of the Fundamental Principle The transformation of Born's quantization principle into a mathematical theory is contained in three publications. The first by Born in 1924 [16] presented the basic concept. It was in this paper that Born coined the term "Quantenmechanik". Differential equations of classical mechanics are transformed into difference equations of "Quantum Mechanics". Classically all physical quantities are represented by continuous variables, the underlying assumption being that all changes in nature occur continuously in space and time. Actionangle (J i , w i ) variables of the classical Hamilton-Jacobi equations 14 provide the starting point. 
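As a concrete orientation (a standard textbook illustration, not taken from Born's text), consider the one-dimensional harmonic oscillator of frequency ν: its phase-space orbit at energy E is an ellipse whose enclosed area equals the action variable,

$$J = \oint p\, dq = \frac{E}{\nu},$$

so the requirement that J change only by integer multiples of h immediately yields the discrete energy ladder $\Delta E = \nu\,\Delta J = \tau h\nu$, with τ integer.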
A particular advantage is that the transformation from the original coordinates and momenta to action-angle variables does not have to be known in order to define the action variables. Each pair $(q_i, p_i)$ of original coordinate $q_i$ and canonically conjugated momentum $p_i$ is associated with its action variable $J_i$, given by

$$J_i = \oint p_i \, dq_i . \qquad (1)$$

The integral is to be taken at constant energy, not over the classical motion. The action $J_i$ is the new momentum; the new coordinate $w_i$ is the "angle" around the closed path of integration. All other physical quantities g become functions of the new momenta and coordinates, $g = g(J_i, w_i)$. Due to the periodicity in the angle variables $w_i$, physical quantities g may be expanded in a Fourier series

$$g = \sum_{\tau} g_\tau(J)\, e^{2\pi i\, w\tau},$$

where $\tau = (\tau_1, \tau_2, \dots)$, $J = (J_1, J_2, \dots)$, and $w\tau = \sum_i w_i \tau_i$. Quantum mechanically, Born's quantization principle requires that only quantized action intervals $\Delta J_i = \tau_i h$ (all $\tau_i$ integer) are possible; replacing the classical differentials $dJ_i$ by discrete action intervals, the classical differential equations are transformed into quantum mechanical difference equations. Eq. (1) takes the form

$$\Delta J_i = \tau_i\, h .$$

Two papers by Born and Jordan followed. The June 1925 paper [13] applies the explicit discretization procedure to the interaction of radiation with atoms. This paper contains the first fully quantum theoretical treatment of this crucial problem, combining Born's Quantum Mechanics with Einstein's Quantum Optics [8,9]. Exchange of energy between atoms and radiation occurs by photon absorption and emission. Einstein's transition probabilities for spontaneous photon emission and for field induced absorption and emission are obtained. The September 1925 paper [1] finally arrives at commutation relations and quantum equations of motion.

From Born's Quantization Principle to Commutation Relations

The relation between Born's quantization principle and commutation relations contains the key to understanding Quantum Theory. The formal steps are the following. The discussion is restricted to a single degree of freedom; again, classical differential equations provide the starting point. Continuous variables p and q represent classical momentum and canonically conjugated coordinate; J and w the corresponding action and angle. The Fourier expansions $p = \sum_\tau p_\tau(J)\, e^{2\pi i w\tau}$ and $q = \sum_\tau q_\tau(J)\, e^{2\pi i w\tau}$ are inserted into the classical definition

$$J = \oint p \, dq . \qquad (2)$$

The loop integral is taken over w from 0 to 1, yielding

$$1 = \frac{\partial J}{\partial J} = 2\pi i \sum_{\tau=-\infty}^{\infty} \tau \, \frac{\partial}{\partial J}\bigl(q_\tau\, p_{-\tau}\bigr). \qquad (3)$$

This classical differential equation is transformed into a quantum mechanical difference equation. The classical differential dJ is replaced by the quantum mechanically allowed discrete action intervals $\tau h$, and the classical Fourier components $(q_\tau, p_\tau)$ are replaced by "matrix elements" 15. The integers n and n ± τ characterize different quantum states; the $q(n, n \pm \tau)$ and $p(n, n \pm \tau)$ represent the changes of coordinate q and canonically conjugated momentum p caused by the discontinuous transition from state n to state n ± τ. 16 Slightly changing the notation, the general quantization condition takes the form

$$\sum_{k} \bigl( p(n,k)\, q(k,n) - q(n,k)\, p(k,n) \bigr) = \frac{h}{2\pi i} . \qquad (4)$$

This is the diagonal element of the commutation relation. 17 Born and Jordan then show that all non-diagonal elements vanish. Generalizing to arbitrary numbers of degrees of freedom, the general commutation relations are obtained:

$$P_i Q_k - Q_k P_i = \frac{h}{2\pi i}\,\delta_{ik}\, I, \qquad Q_i Q_k - Q_k Q_i = 0, \qquad P_i P_k - P_k P_i = 0, \qquad (5)$$

where I is the unit matrix with elements $I_{n,m} = \delta_{n,m}$. General physical quantities g (e.g. q and p) are represented by matrices G with matrix elements $G_{n,m}$. Let state n be represented by the set of integer quantum numbers $n = (n_1, n_2, \dots)$, state m by $m = (m_1, m_2, \dots)$. Transitions between states n and m correspond to quantized action intervals $\Delta J_{n,m} = ((n_1 - m_1)h, (n_2 - m_2)h, \dots)$. Non-diagonal matrix elements $G_{n,m}$ are related to discontinuous changes of the physical quantity g caused by the corresponding transitions. Diagonal matrix elements are interpreted as average values; e.g. $G_{n,n} = g_n$ is related to the average value of the physical quantity g in the corresponding state. Remark that the general quantization condition refers to action intervals of transitions, not to any physical quantity within a quantum state. Quantization is about how things change, not about how things are. And Born recognized that quantization of action requires that all things change discontinuously.

In summary: the commutation relations, "the refined quantization condition, which provides the basis for all further conclusions" [1] 18, represent the mathematical implementation of the fundamental principle of quantum physics: To each change in nature corresponds an integer number of quanta of action, independent of the system of reference.

16 It is implied that there exists a "ground state" corresponding to $n_0 = 0$. Furthermore, action values J(n) cannot take negative values ($J(n) \ge 0$), which implies that the q(n, m) and p(n, m) containing negative indices are defined to vanish.

17 Concerning the factor 2πi: there is no profound physical reason; the factor 2πi is due to the representation of physical quantities by their Fourier coefficients. A mathematical representation of Quantum Theory without the factor 2πi is perfectly possible.

18 "die "verschärfte Quantenbedingung", auf der alle weiteren Schlüsse beruhen".

Quantum Uncertainties

Quantum uncertainties are integral parts of discontinuous quantum physics. Matrix Mechanics originated from Born's conviction that mathematically "exact" values of positions and times cannot constitute physically relevant notions. The same reasoning applies to all other physical quantities. The mathematical implementations of discontinuous quantum transitions contained quantum uncertainties from the beginning. Already in 1924, when Born replaced classical differential equations by quantum mechanical difference equations [17], quantum uncertainties were part of the discretization. While action differences of discontinuous transitions were quantized, all physical quantities within any given quantum state were obtained from averaging procedures over classical angle variables and discrete action intervals. And when discontinuous changes of action variables had found their compact form in the commutation relations [1], general uncertainty relations for canonically conjugated quantities followed as a mathematical consequence: the commutation relations require that, for any quantum state n, physical quantities cannot have perfectly sharp values. Let $Q_{n,n} = q_n$ and $P_{n,n} = p_n$ be the average values of two canonically conjugated quantities in state n; the product of their mean square deviations has a lower bound imposed by Planck's quantum of action:

$$\bigl((Q - q_n)^2\bigr)_{n,n} \; \bigl((P - p_n)^2\bigr)_{n,n} \;\ge\; \Bigl(\frac{h}{4\pi}\Bigr)^2 . \qquad (6)$$

This inequality is a necessary consequence of Born's quantization condition; its mathematical implementation, the commutation relation, contains the inequality (6) as a straightforward mathematical consequence. If "exact" is understood to have its mathematical significance, then no physical quantity may take on "exact" values. A compromise relating the uncertainty of the quantity considered to the uncertainty of its canonically conjugated partner has to guarantee that the inequality (6) is fulfilled; perfectly precise and infinitely imprecise values are both excluded. Perfect accuracy of a particle position would require infinite momentum uncertainty, implying $(P^2)_{n,n} = \infty$, i.e. infinite energy. Similar conclusions forbid other physical quantities to take on perfectly precise values; any assumption of an exact value of a physical quantity will invariably lead to conclusions incompatible with the quantum laws themselves.

Similar to position and momentum, time and energy are affected by quantum uncertainties. The universal time of classical physics has no place in discontinuous quantum physics, where an infinite manifold of time scales may be defined via transition rates between two states. The only notion of a specific time associated with a particular state a of a quantum system is its average lifetime, which is related to the energy uncertainty of state a. A detailed discussion of time in quantum physics is given in the following chapter.

Quantum Equation of Motion

The most important difference between classical and quantum physics concerns the notion of "time". Quite generally, time is connected with change: physical objects change their state as a function of time. The equations governing these changes are the equations of motion. In accord with the classical assumption of continuity in time and space, the classical equations of motion are differential equations, i.e. relations between infinitesimally small changes of physical variables. Relying on the commutation relations (eq. (5)) as general quantization principle, Born and Jordan transform the differential equations of classical physics into quantum mechanical difference equations [1]. They describe how general physical quantities g, e.g. p or q or any other physical quantity g(q, p), change by discontinuous and statistical quantum transitions:

$$\dot G = \frac{2\pi i}{h}\,(H G - G H). \qquad (7)$$

The quantum equations of motion do not contain time explicitly; the classical time variable t has no place in quantum physics. $\dot G$ is not obtained by the usual differentiation of G with respect to a continuous time variable. Eq. (7) is a difference equation, not a differential equation; the right hand side of eq. (7) defines the matrix $\dot G$.
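A short step left implicit here makes the content of eq. (7) concrete: for a diagonal Hamiltonian matrix, $H_{n,m} = E_n \delta_{n,m}$, eq. (7) gives

$$\dot G_{n,m} = \frac{2\pi i}{h}\,(E_n - E_m)\, G_{n,m} = 2\pi i\,\nu(n,m)\, G_{n,m}, \qquad \nu(n,m) = \frac{E_n - E_m}{h},$$

so each off-diagonal element of $\dot G$ carries exactly the transition frequency of Bohr's frequency condition, and $|\dot G_{n,m}|^2$ weights each discontinuous transition accordingly.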
The Hamiltonian H(Q, P) determines which transitions are allowed and how physical quantities are affected by the discontinuous quantum transitions. There remains the question of the physical significance of the matrix $\dot G$ and, more generally, of what type of limiting procedure should relate the new quantum laws to those of classical physics. According to Born's reasoning, discontinuous quantum physics is fundamentally different from supposedly continuous classical physics. While the classical differential equations suggest fully deterministic behavior "in principle", discontinuous quantum physics is inherently probabilistic. A remark is in order concerning the so-called classical limit of letting h go to zero: the limit h = 0 does not exist! Recall that the commutation relations result from the implementation of the principle "action variables may only change by integer multiples of h", and there is no way that the infinite set of natural numbers may be transformed continuously into a continuum. Similarly, there is no way that inherently statistical behavior may be transformed continuously into fully deterministic behavior. Classical differential equations cannot constitute "true" laws of nature, but may only provide approximate descriptions of averages, which ignore the underlying physical discreteness. Quite logically, the physical significance of the matrix elements $\dot G_{n,m}$ is obtained from the requirement that classical results must be recovered on average. The probability for a discontinuous change of the physical quantity g, caused by a transition between quantum states n and m, is shown to be proportional to $|\dot G_{n,m}|^2$. Thereby each particular quantum transition is associated with its particular time scale, defined by the "transition probability per unit time". The new concept of time, adapted to discontinuous quantum behavior, is contained in an infinite manifold of transition rates.

Compare this to the classical concept of time: a clock consisting of some macroscopic oscillator (e.g. a specific vibration mode of a quartz crystal, or an electromagnetic mode in a microwave cavity) defines time via the number of oscillations per unit time. The discrete counting process used to define classical time necessarily introduces finite (classical) uncertainties. As usual in classical physics, it is implicitly accepted that, in practice, these finite uncertainties cannot be avoided, but it is assumed that, in principle, they may be reduced to be infinitesimally small. Although not directly relevant for individual quantum systems, the classical time variable defined by clocks may be taken as an external parameter, serving as a scale for the transition rates of discontinuous quantum behavior. It has to be kept in mind, however, that classical time defined by clocks cannot be defined exactly; quantum uncertainties pose a lower limit to all measurements. Quantization of action guarantees that all physical quantities (time, energy, position, momentum, etc.) are affected by quantum uncertainties.
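Before moving on to time as an operator, a minimal numerical sketch may make the matrix formalism tangible (an illustration under assumed units, not part of the original argument): for the harmonic oscillator the matrices can be written down explicitly, and the commutation relation (5) can be checked directly; truncating to finite N×N matrices corrupts only the last diagonal element.

```python
import numpy as np

# Minimal numerical sketch (illustration only): harmonic oscillator matrices
# in assumed units with hbar = h/(2*pi) = 1 and m = omega = 1.
N = 6                                    # truncation size, chosen arbitrarily
n = np.arange(N - 1)
a = np.diag(np.sqrt(n + 1.0), k=1)       # lowering operator: a|n> = sqrt(n)|n-1>
Q = (a + a.T) / np.sqrt(2.0)             # position matrix
P = 1j * (a.T - a) / np.sqrt(2.0)        # momentum matrix

comm = P @ Q - Q @ P                     # left-hand side of eq. (5)
# With hbar = 1, eq. (5) reads P Q - Q P = (h / 2 pi i) I = -i I.
print(np.round(comm.imag, 10))           # diagonal: -1, ..., -1, N-1 (truncation)
print(np.allclose(comm[:-1, :-1], -1j * np.eye(N)[:-1, :-1]))  # -> True
```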
General Remarks: Time as Operator and Time as External Parameter

In §22 of their book "Elementary Quantum Mechanics" ("Elementare Quantenmechanik") [18], Born and Jordan distinguish between time as an external parameter, defined by clocks, on one side, and the concept of time relevant for the discontinuous evolution of individual quantum systems on the other: "The description of a physical system by a time-dependent Hamilton function, where time is used as external parameter, cannot constitute an exact representation of its physical properties, but only an approximate calculation procedure, which contains fundamental omissions." 19

For closed systems, the classical Hamilton-Jacobi formalism considers energy and time to be canonically conjugated; the negative energy takes on the role of the canonical "momentum", time that of its canonically conjugated "variable". Born and Jordan conclude that, quantum theoretically, for a closed quantum system the time t does not commute with the energy W: "Classical theory teaches that energy H = W and time are canonically conjugated in closed systems. In analogous and corresponding implementation in Quantum Mechanics the W, t have to be represented by certain non commuting symbols, which are governed by rules analogous to the canonical commutation relations of p, q". 20 Similarly, for two interacting quantum systems with conserved total energy, time does not commute with energy: "Time is not commuting with energy of these systems (canonically conjugate to it)." 21 For open systems, however, coupled to surroundings such that back coupling effects to the surroundings are weak and can be neglected, an external time variable t may be admissible as parameter: "The energy of the partial system may be considered to be approximately commuting with the total energy. Therefore the use of t as parameter may be justified." 22

Further details are contained in §61: "Already in §22 it was pointed out, that the time canonically conjugate to the energy of a closed system cannot be represented by a real parameter, but itself constitutes a quantity, which does not commute with other measurable quantities of this system. As far as the time so defined is concerned, it is generally false to state that any other quantity A(p, q) is measured at a specific point in time." 23 Again the distinction from open systems with negligibly weak back coupling to an external system (a "clock") is stressed; the variable t defined by the external clock may be used as external parameter: "This time t may be considered to be defined by a system (a "clock"), not coupled or very weakly coupled to the system considered." 24

Time as Operator

First, several preliminary remarks about the notation used below. The symbols $\hat t, \hat E, \hat r, \hat p$ are used for time, energy, position, and momentum operators; similarly, $\hat q, \hat p, \hat g$ indicate operators. The operators might be differential or integral operators. Latin or Greek letters indicate mathematical variables (not to be mistaken for physical notions); e.g. t, E, r, p denote continuous mathematical variables used to represent the time, energy, position, and momentum operators.

The first representation of energy and time by non-commuting symbols is contained in the paper by Max Born and Norbert Wiener [19]. After achieving his original aim of a discretized Quantum Mechanics, Born quickly realized that field theoretical representations are easier to handle mathematically.
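A standard observation (added here for context, not drawn from the Born-Jordan text) shows why the step beyond finite matrix algebra is more than a matter of convenience: no finite-dimensional matrices can satisfy a canonical commutation relation exactly, since the trace of any commutator vanishes,

$$\operatorname{tr}(\hat p\hat q - \hat q\hat p) = 0, \qquad \text{whereas} \qquad \operatorname{tr}\Bigl(\frac{\hbar}{i}\,\mathbf 1_N\Bigr) = \frac{\hbar}{i}\,N \neq 0,$$

so the relation can only be realized by infinite matrices or, equivalently, by integral and differential operators acting on function spaces.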
Even slightly before Schrödinger, Born and Wiener replaced the discrete mathematical forms of matrix mechanics by field theoretical methods. Matrices are replaced by integral and/or differential operators, and different quantum states are represented by functions of continuous variables. Let us use the symbolic notation $G_{mn} = (m|\hat g|n)$; a general physical quantity g is represented by the operator $\hat g$. Operators representing canonically conjugated quantities obey the commutation relations $\hat p \hat q - \hat q \hat p = \hbar/i$. Similarly, the quantum equations of motion take the form

$$\dot{\hat g} = \frac{2\pi i}{h}\,(\hat H \hat g - \hat g \hat H).$$

For closed systems with conserved total energy, time and energy operators fulfill the commutation relation

$$\hat t\,\hat E - \hat E\,\hat t = \frac{\hbar}{i}\, .$$

The corresponding time-energy uncertainty relation takes the form

$$\bigl((\hat t - \tau_n)^2\bigr)_{n,n}\;\bigl((\hat E - \epsilon_n)^2\bigr)_{n,n} \;\ge\; \Bigl(\frac{\hbar}{2}\Bigr)^2,$$

where $\tau_n = (n|\hat t|n)$ and $\epsilon_n = (n|\hat E|n)$ are the average lifetime and energy of state n.

A few weeks later, the representations $\hat r = r$ and $\hat p = \frac{\hbar}{i}\nabla_r$ were introduced in Schrödinger's "stationary" equation [2]. Let us call this the "time-position representation", which represents quantum states by normalized functions ψ(t, r) of the continuous variables t, r. A note of caution is in order: the variables t and r were introduced as mathematical representations of the operators $\hat t$ and $\hat r$. Instead of t and r, we might choose any other symbols; all physically relevant quantities have to be independent of the particular choice taken. In particular, the t-dependence of the function ψ(t, r) must not be interpreted as representing the continuous evolution as a function of time of an individual physical system. ψ(t, r) is nothing more than a mathematical tool, useful to perform calculations of matrix elements. Physical relevance is contained in averages and probabilities obtained from matrix elements. 25

In the chosen representation, the equation of motion $\dot{\hat g} = \frac{2\pi i}{h}(\hat H \hat g - \hat g \hat H)$ is a differential equation with respect to t. Formal integration yields

$$\hat g(t) = e^{\frac{2\pi i}{h}\hat H t}\; \hat g(0)\; e^{-\frac{2\pi i}{h}\hat H t}.$$

But again, the note of caution already mentioned above applies: do not mistake the mathematical variable t to indicate continuous physical behavior. The variable t has been introduced as a representation of the time operator. Mathematically, the commutation relations may be satisfied by infinitely many different representations; no physically relevant and observable quantity may depend on a specific representation. All of physics is contained in probabilities and averages obtained from matrix elements. And all representations fulfilling the commutation relations yield identical matrix elements.

Energy-Momentum Representation

Section 4.1 showed that the requirement of discontinuous action intervals resulted in commutation relations. The following example demonstrates that the reverse conclusion holds as well; commutation relations indeed imply discontinuous quantum behavior, resulting from discontinuous action intervals. The essential differences between classical and quantum behavior are particularly perceptible if the interactions between two quantum systems become small, such that lowest order perturbation theory is applicable. Classically, small interactions cause small changes in physical quantities; quantum mechanically, even small interactions may cause large changes in physical quantities, while the corresponding probabilities tend to vanish for vanishing interactions. As an example, I discuss scattering processes of particles by crystals. 26

25 The standard interpretation of Schrödinger's wave function ψ(t, r) is different: the variable t is interpreted as a classical time variable, and, contrary to Born's understanding, ψ(t, r) is interpreted as describing the continuous evolution with time of an individual physical system. I restate Born's understanding: if t is interpreted as classical time defined by clocks, then t is an external parameter and the remark above applies: "The description of a physical system ... cannot constitute an exact representation of its physical properties, but only an approximate calculation procedure, which contains fundamental omissions." The section "Time as external parameter" discusses this point in detail.

26 Scattering processes of particles by atoms played a crucial role in Born's statistical interpretation of the wave function [20]. Assuming the atomic spectrum to be given, he used wave functions to compute the relevant matrix elements.

Classically, the interaction between particle and crystal is taken to be a time dependent potential V(r, t), where the t dependence describes the classical crystal dynamics. The Fourier transformation

$$V(r, t) = \int d\omega\, d^3k\; \tilde V(\omega, k)\; e^{i(\omega t - k \cdot r)}$$

will be helpful in the following. For the quantum mechanical treatment, I consider the combined system of particle and crystal to constitute a closed system. Following the Born-Jordan perspective, all physical quantities (energy, time, momentum, position) are represented by operators. The commutation relations for time-energy and position-momentum may equally be satisfied by the "energy-momentum representation". Replacing the classical variables r and t by the operators $\hat r = -i\hbar\,\nabla_p$ and $\hat t = \frac{\hbar}{i}\frac{d}{dE}$, the classical interaction $V(r, t)$ becomes the interaction operator

$$\hat V = \int d\omega\, d^3k\; \tilde V(\omega, k)\; e^{\hbar\left(\omega \frac{d}{dE} - k \cdot \nabla_p\right)}.$$

The particle is taken to be structureless and without internal dynamics. In the chosen representation, the particle state may be represented by the function f(E, p). The interaction operator $\hat V$ is applied as a perturbation to the uncoupled free particle state f(E, p). In lowest order we obtain

$$f_1(E, p) = \int d\omega\, d^3k\; \tilde V(\omega, k)\; f(E + \hbar\omega,\, p - \hbar k),$$

since the exponentials of the derivative operators act as shift operators on f(E, p). The physical interpretation is obtained from the quantum equation of motion: the Fourier component $\tilde V(\omega, k)$ may cause a discontinuous energy transfer $\hbar\omega$ and/or momentum transfer $\hbar k$. The transition probability is proportional to $|\tilde V(\omega, k)|^2$ [20]. For ω ≠ 0, the sign used in the Fourier transform is chosen such that positive ω corresponds to energy transfer from the crystal to the particle (e.g. by absorption of a phonon), negative ω to energy transfer from the particle to the crystal (e.g. creating a lattice excitation).

The classical definition of the action variables (eq. (2)) makes it possible to relate the discontinuous energy and momentum transfers to their respective changes in action variables $\Delta J_E$ and $\Delta J_p$. The $\Delta J_E$ resulting from the Fourier component $\tilde V(\omega, k)$ is given by the product of the periodicity time $t_\omega = \frac{2\pi}{|\omega|}$ and the energy transfer $\Delta E = \hbar\omega$. The discontinuous momentum transfer produced by the Fourier component $\tilde V(\omega, k)$ results from the discrete translation symmetry in the direction of the momentum transfer $\Delta p = \hbar k$: translation in the direction of Δp by the length $\frac{2\pi}{|k|}$ leaves the Fourier component $\tilde V(\omega, k)$ invariant. The corresponding $\Delta J_p$ is given by the product of the periodicity vector $\frac{k}{|k|}\frac{2\pi}{|k|}$ and the momentum transfer $\Delta p = \hbar k$. The change in action variables due to any one of the various Fourier components $\tilde V(\omega, k)$ is equal to the smallest value allowed by the fundamental laws, Planck's quantum of action h. Time and energy have been represented by non-commuting symbols, implying that the total energy of particle and crystal combined is conserved. Implicitly, the crystal is treated quantum theoretically, too.
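Spelled out, the action bookkeeping of the preceding paragraph is a one-line check:

$$\Delta J_E = t_\omega\,\Delta E = \frac{2\pi}{|\omega|}\;\hbar|\omega| = 2\pi\hbar = h, \qquad \Delta J_p = \frac{2\pi}{|k|}\;\hbar|k| = h.$$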
A discrete transition of the particle changing its energy and momentum has to be coupled to a discrete transition in the crystal. This fact is used experimentally to study the elementary excitations of crystals, e.g. lattice excitations (phonons) or magnetic excitations (magnons).

These results were obtained in lowest order perturbation theory, which is particularly suited to demonstrate the differences between quantum and classical physics. Classically, lowest order is adequate for infinitesimally small perturbations causing equally infinitesimally small changes in the perturbed system. For vanishing strength of the interaction, the classical response tends towards zero continuously. The quantization condition, implemented by the commutation relations, predicts very different results: small, even extremely weak, perturbations may cause large and discontinuous changes of physical quantities; the corresponding probabilities tend towards zero for vanishing strength of the perturbation. Furthermore, if the interaction is weak enough that only lowest order effects have to be considered, all types of geometries may be treated. The total scattering probability is obtained from the sum over the contributions of the various Fourier coefficients.

Elastic transitions (ΔE = 0 and $\Delta J_E = 0$), i.e. the scattering contributions due to the Fourier components $\tilde V(\omega = 0, k)$, are of particular interest for diffraction phenomena. Systems of discrete translational symmetries produce particularly large scattering probabilities for special momentum transfers $\Delta p_i = \hbar Q_i$, the "Bragg peaks". In crystalline materials a large fraction of the total elastic scattering intensity is concentrated in Bragg scattering, easily distinguishable from the usually structureless background of inelastic processes. The special momentum transfers contributing to Bragg scattering are representative of translational symmetries; translation in the direction of any one of the $\Delta p_i$ by the corresponding length $\frac{2\pi}{|Q_i|}$ represents a crystalline symmetry operation. Crystallography relies on this correspondence; identification of a large enough number of Bragg peaks permits the identification of the crystal structure. Remark that the occurrence of Bragg peaks does not rely on any intrinsic wave property of the scattered particles. The special momentum transfers $\Delta p_i$ in Bragg scattering are due to the fundamental quantization condition $\Delta J_p = |\Delta p_i|\,\frac{2\pi}{|Q_i|} = h$, not to any alleged wave property of the scattered particles.

Time as External Parameter, Time Dependent Perturbation Theory

Let us recall Born's remarks concerning the use of explicitly time dependent Hamiltonians: "The description of a physical system by a time-dependent Hamilton function, where time is used as external parameter, cannot constitute an exact representation of its physical properties, but only an approximate calculation procedure, which contains fundamental omissions." 28 [18]. "Time-dependent" perturbation theory is a standard example where the time variable t must be treated as an external parameter. An explicitly time dependent perturbation $V(t) = \int d\omega\, \tilde V(\omega)\, e^{i\omega t}$ is applied to a quantum system. Just as in the preceding section, only lowest order effects are considered. At t = 0 the system is taken to be in the state $\varphi_i$ of energy $\epsilon_i$; under the influence of the time dependent external perturbation V(t), the time dependent Schrödinger equation predicts $\varphi_i$ to develop into $\psi(t) = \sum_\nu c_\nu(t)\,\varphi_\nu$. Typical textbooks identify $|c_\nu(t)|^2$ as the "probability to find the system at time t in the state $\varphi_\nu$". Detailed calculations are textbook material; I directly address the physically relevant result, "Fermi's Golden Rule": in the limit of large t, the transition probability from the state $\varphi_i$ of energy $\epsilon_i$ to the state $\varphi_f$ of energy $\epsilon_f$ is proportional to

$$|\tilde V(\omega)|^2\; \delta(\epsilon_f - \epsilon_i + \hbar\omega)\; t .$$

The limit of large t has led to the delta function, guaranteeing energy conservation. The transition rates $|\tilde V(\omega)|^2\, \delta(\epsilon_f - \epsilon_i + \hbar\omega)$ reproduce the results obtained by the (much simpler) method of the preceding section. Only these transition rates are physically relevant and experimentally measurable. Typical textbook claims that $\psi(t) = \sum_\nu c_\nu(t)\varphi_\nu$ describes the temporal evolution of the state of an individual quantum system for arbitrary values of "time" t are incorrect. It is impossible in principle to back up these claims experimentally; time-energy quantum uncertainties pose a lower limit to time and energy accuracies.

Although the t-dependent wave function does not describe the continuous evolution of an individual physical system, Ψ(t) does constitute an approximate description, provided that the variable t is interpreted as an external parameter, defined by a classical clock, and Ψ(t) is interpreted as the representation of an ensemble, i.e. a very large number of equivalent quantum systems. At t = 0, the "system" to be described by Ψ(t = 0) consists of a macroscopically large number N of equally prepared quantum systems in the initial state $\varphi_i$. According to the time dependent Schrödinger equation, Ψ(t = 0) develops into $\Psi(t) = \sum_\nu c_\nu(t)\varphi_\nu$. Applying the Born(-Einstein) ensemble interpretation (described extensively in the following section), $|c_\nu(t)|^2 = n_\nu(t)$ for ν ≠ i then is the number of individual quantum systems having made a transition from the initial state $\varphi_i$ to the state $\varphi_\nu$, where $\sum_\nu n_\nu(t) = N$. To lowest order in $\frac{1}{N}\sum_{\nu \ne i} n_\nu(t)$, the result given by Fermi's golden rule reproduces the transition rate of Born's statistical interpretation, described extensively in the following section.

The Born(-Einstein) Ensemble Interpretation

When Born derived the statistical interpretation of the wave function in 1926 [20], there was no time variable t involved. He obtained the transition probabilities directly from matrix elements, which he determined using time independent perturbation theory. Nevertheless, the paper [20] indicates how ψ(t, r) may constitute an approximate procedure, provided that the variable t is interpreted as an external parameter, and ψ(t, r) is interpreted not as the representation of a single quantum system, but of an ensemble, i.e. a very large number of equivalent quantum systems. The name of Einstein is included in parentheses in the title above because Einstein adopted the ensemble interpretation of ψ(t) [21] in exactly the sense intended by Born.

Born's statistical interpretation [20] consists of two papers; the "Preliminary Announcement" of June 1926 was followed by the extended version one month later. The subject of the two papers is indicated by their title, "Quantum Mechanics of Collision Processes" ("Quantenmechanik der Stoßvorgänge"): particles (e.g. electrons) are scattered by atoms. Consistent with the basic postulate of discontinuous quantum physics, there is no explicit time variable. The aim consists in the calculation of transition matrix elements, which, in turn, determine transition probabilities.
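As a numerical aside on the golden-rule limit quoted earlier in this section (a sketch with illustrative parameters, not taken from Born's papers): in first order, a perturbation of strength V coupling the initial state to final states at energy detuning $D = \epsilon_f - \epsilon_i + \hbar\omega$ produces the weight $|c_f(t)|^2 = |V|^2 \sin^2(Dt/2)/(D/2)^2$ (with ħ = 1); integrating over the final-state energies shows that the total weight grows linearly in t, i.e. only a constant transition rate survives.

```python
import numpy as np

# Numerical aside (illustrative parameters, hbar = 1): the first-order weight
#   |c_f(t)|^2 = |V|^2 * sin^2(D*t/2) / (D/2)^2
# concentrates at detuning D = 0 and integrates to 2*pi*|V|^2*t, so only the
# constant transition *rate* is physically relevant, as stressed in the text.
V = 0.01
D = np.linspace(-2.0, 2.0, 400001)                   # grid of detunings
for t in (50.0, 200.0, 800.0):
    w = (V * t * np.sinc(D * t / (2 * np.pi)))**2    # np.sinc(x) = sin(pi x)/(pi x)
    total = w.sum() * (D[1] - D[0])                  # integral over detuning
    print(t, total / (2 * np.pi * V**2 * t))         # ratio -> 1 for large t
```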
The initial state consists of separate quantum systems, particles and atoms, far apart from each other and noninteracting. Particles and atoms collide; asymptotically, in the final state, particles and atoms again are far apart from each other and noninteracting. 29

Born's preliminary scattering paper [20] describes the collision of a single electron with an atom. The premise is that "before, as well as after, the collision, when the electron is far away and the coupling small, a particular state of the atom and a particular, rectilinear-uniform movement of the electron has to be definable." 30 Mathematically, the task is reduced to determining the asymptotic behavior produced by the collision. Born distinguishes between "physical" (or "real") states of electrons and atoms and their mathematical representations. Particles are represented by plane wave functions of wave vector k. Physical significance (or "reality") is attributed to electron momenta $p = \hbar k$ and atom energies ǫ only. Wave functions serve as mathematical tools to calculate transition probabilities from real initial states to real final states. The initial noninteracting state is represented in terms of product wave functions $\Psi_i = \psi_{k_i}\psi_m$ of the free electron $\psi_{k_i}$ with momentum $\hbar k_i$ and the free atom $\psi_m$ with energy $\epsilon_m$. The transition probabilities from the initial to the various final states caused by the (weak) electron-atom interaction are calculated. The first short paper does not contain mathematical details, but simply states the essential result and Born's interpretation of its physical significance. Asymptotically, after the collision has taken place, the total Ψ-function takes the form

$$\Psi_s = \sum_n \int d^2\Omega_{k_f}\; \Phi(k_i m,\, k_f n)\; \psi_{k_f}\, \psi_n .$$

The integral over $d^2\Omega_{k_f}$ is over the solid angle of outgoing momenta $\hbar k_f$. Energy conservation ($\epsilon_m - \epsilon_n = \hbar^2 |k_f|^2/2m_e - \hbar^2 |k_i|^2/2m_e$) determines the absolute value $|k_f|$. The wave function $\Psi_s$ consists of a superposition of many product wave functions $\psi_{k_f}\psi_n$. Born concludes: "We do not get an answer to the question, "what is the state after the collision?", but only to the question, "how probable is a given result of the collision?" ... based on the principles of our Quantum Mechanics there exists no quantity, which determines the result of the collision for the individual elementary process." 31

The physical significance of the superposition $\Psi_s = \sum_n \int d^2\Omega_{k_f} \Phi(k_i m, k_f n)\, \psi_{k_f}\psi_n$ is reduced to the following: the transition probability from the initial state represented by $\psi_{k_i}\psi_m$ (electron with momentum $\hbar k_i$ and atom with energy $\epsilon_m$) to any one of the possible final states represented by $\psi_{k_f}\psi_n$ (electron momentum $\hbar k_f$ and atom energy $\epsilon_n$) is proportional to the absolute square $|\Phi(k_i m, k_f n)|^2$.

29 This experimental situation corresponds precisely to the configuration addressed in the EPR (Einstein-Podolsky-Rosen) paper of 1935 [22], which stimulated Schrödinger's papers about entanglement and "Schrödinger's Cat" [23]. The "EPR" paper actually was not written by Einstein, but by Podolsky, and does not properly represent Einstein's own views. Einstein's own opinion on this matter is contained in his review "Physics and Reality" of 1936 [21]. A detailed account is given by Arthur Fine in "The Shaky Game, Einstein's Realism and the Quantum Theory" [24]. Further details are given in the present author's book [5].

The same reasoning is adopted for light scattering, which, based on Einstein's concept of photons, should be understood as particle scattering: "I further believe that the problem of absorption and emission of light must also be treated in a completely analogous way ... in accord with the concept of light quanta."

The extended version published one month later replaces the rather academic problem of colliding one electron with one atom by the typical physical situation realized in the laboratory: a stationary current of particles (electrons) is produced and brought into collision with a gas consisting of a large number of atoms. Again wave functions are used as mathematical tools to calculate the transition probabilities. Born specifies his general interpretation of Schrödinger's ψ-functions. He refers to Einstein's expression of a "ghost field" ("Gespensterfeld") [25], introduced in the context of light scattering. Whereas energy and momentum are carried by particles (i.e. photons), the fictitious ghost field describes the probability distribution over large numbers of single particle scattering events. Schrödinger's wave functions are similar ghost fields without direct physical significance. Momentum and energy are transferred in such a way that "particles (electrons) actually fly about" ("als wenn Korpuskeln (Elektronen) tatsächlich (= actually, really, in fact) herumfliegen"). The flight paths of the particles are determined only in so far as energy and momentum conservation restrict them; apart from that, the probability for a particular path is governed by the function ψ: "Particle dynamics are determined by probability laws." 32

The interpretation of the ghost field ψ is adapted to the physical problem, i.e. a large number (or "ensemble") of atoms and particles. The representation of an ensemble of noninteracting atoms is given in terms of the eigenfunctions $\psi_n(q)$ with eigenvalues $\epsilon_n$ of the stationary Schrödinger equation. Since the system of functions $\psi_n(q)$ is complete, any function f(q) may be expanded in terms of the eigenfunctions, $f(q) = \sum_n c_n \psi_n(q)$. Born asks the question: if the normalized functions $\psi_n(q)$ constitute representations of atomic states of energy $\epsilon_n$, what type of physical system might be associated with a general superposition? Born's conclusion is the following: the general superposition $f(q) = \sum_n c_n \psi_n(q)$ is related to the "probability for the occurrence of the various states in a mixture of equal and uncoupled atoms. The completeness relation $\int dq\, |f(q)|^2 = \sum_n |c_n|^2$ leads to regard this integral as the number of the atoms. ... $|c_n|^2$ denotes the abundance of the state n and the total number is composed of the sum over the various contributions" 33. In short: $f(q) = \sum_n c_n \psi_n(q)$ is not to be associated with one individual atom but with a mixture ("ensemble") of many atoms, and $|c_n|^2$ is to be interpreted as the number of atoms in the state n. 34

32 "Die Bahnen dieser Korpuskeln sind nur so weit bestimmt, als Energie- und Impulssatz sie einschränken; im übrigen wird für das Einschlagen einer bestimmten Bahn nur eine Wahrscheinlichkeit durch die Werteverteilung der Funktion ψ bestimmt. Die Bewegung der Partikeln folgt Wahrscheinlichkeitsgesetzen."

33 "Wahrscheinlichkeit dafür, dass in einem Haufen gleicher, nicht gekoppelter Atome die Zustände in einer bestimmten Häufigkeit vorkommen. Die Vollständigkeitsrelation $\int dq\, |f(q)|^2 = \sum_n |c_n|^2$ führt dazu, dieses Integral als die Anzahl der Atome anzusehen. ... $|c_n|^2$ bedeutet die Häufigkeit des Zustandes n, und die gesamte Anzahl setzt sich aus diesen Anteilen additiv zusammen."

34 In 1936 Einstein would refer to this example to illustrate his "ensemble interpretation" [5].

The equivalent reasoning is applied to ensembles of free particles. Any general function g(r) may be expanded in terms of free particle eigenfunctions $\psi_k$ (i.e. a simple Fourier expansion). But again, this general superposition will, in general, not represent an acceptable physical state of a single individual particle. For the physical problem under consideration, i.e. an ensemble of free particles: the general superposition $g(r) = \sum_k c_k \psi_k$ is related to the probability for the occurrence of the various free particle states of momentum $\hbar k$ in a mixture (ensemble). The absolute squares of the expansion coefficients, $|c_k|^2$, are to be interpreted as providing the abundance of particles in states of momentum $\hbar k$. The problem to be solved is specified as follows: "For the processes considered, the paths of the particles before and after the collision are asymptotically rectilinear. For a very long time (in comparison with the actual collision process) the particles are in practically free states. In agreement with the experimental situation, we are led to the following approach: Let the distribution function $|c(k)|^2$ for the asymptotic paths before the collision be known; are we able to calculate the distribution function after the collision? Of course, we are considering a stationary current of particles". 35

In lowest order, the coefficients $\Phi(k_i m, k_f n)$ obtained in the preliminary communication are shown to be proportional to the matrix elements $(\psi_{k_i}\psi_m |V_{e.a}| \psi_{k_f}\psi_n)$, where $V_{e.a}$ is the electron-atom interaction. The transition probabilities from initial states $\Psi_i = \psi_{k_i}\psi_m$ to final states $\Psi_f = \psi_{k_f}\psi_n$ are proportional to the absolute squares $|(\psi_{k_i}\psi_m |V_{e.a}| \psi_{k_f}\psi_n)|^2$. 36

To summarize: Born's statistical interpretation is about "transition probabilities" of discontinuous and statistical transitions; their probabilities are proportional to the absolute squares of off-diagonal matrix elements. If Wave Mechanics, in addition to Matrix Mechanics, introduces functions ψ(r, t), these wave functions do not contain any additional physics. In particular, they do not describe the continuous evolution in space and time of an individual quantum system. Their usefulness is limited to that of mathematical tools for the computation of matrix elements. No additional physical reality is to be associated with wave functions; they are nothing more than ghost fields, or phantoms of the imagination.

The Doctrine of Classical Concepts

The main architects of the Copenhagen interpretation of Quantum Mechanics are Werner Heisenberg and Niels Bohr. Bohr was the spiritual leader; although he did not contribute to the mathematical development of the new quantum laws, his supposedly deeper insight determined the guiding line for his young collaborators. The formal basis for the Copenhagen interpretation of Quantum Theory was provided in two papers by Heisenberg, the "re-interpretation paper" [4] of 1925 and the "indeterminacy paper" [26] of 1927. Bohr's review of 1928 [27] (published simultaneously in Naturwissenschaften and Nature) provided the final touches. Bohr's imperative assertion that classical concepts have to be maintained to describe quantum physics defined the framework.
His old Quantum Theory had relied on classical concepts; continuity of all physical processes in Newtonian space and time was taken for granted.

Heisenberg's Re-Interpretation

The origin of Matrix Mechanics was attributed to Heisenberg's paper "Quantum Theoretical Re-Interpretation of Kinematic and Mechanical Relations" of July 1925 [4]. When Heisenberg wrote the paper, he had spent most of the preceding year in Copenhagen, and he was firmly attached to the classical concepts of Bohr's old Quantum Theory. Bohr's cardinal error was the rejection of Einstein's quanta of radiation; Bohr maintained that radiation had to retain its classical character. But if atoms could emit continuous radiation, then there had to be something oscillating inside the atoms, providing the required frequencies. This role was attributed to "virtual oscillators"; each spectral line of frequency ν was associated with its corresponding virtual oscillator. Bohr's frequency condition $h\nu = \epsilon_n - \epsilon_m$ was based on this assumption. Heisenberg intended to provide a new kinematic description of physical quantities for this physical picture. In fact, what Heisenberg really re-interpreted were Born's preceding papers [17] and [13]. The June 1925 paper by Born and Jordan [13] had applied Born's concept of discontinuous quantum physics [17] to the interaction of atoms with radiation; emission and absorption of photons were combined with discontinuous changes of atomic properties. Heisenberg, after his return from Copenhagen, witnessed the final stages of its genesis. His own re-interpretation paper of July 1925 [4] picked up the same subject, but instead of following Born's line of thought, he twisted Born's intentions to signify the contrary. He applied a slightly modified Bohr-Sommerfeld quantization procedure to Bohr's virtual oscillators, which supposedly were responsible for the emission of continuous radiation. The virtual oscillator amplitudes, in Heisenberg's opinion, determined the intensities of the corresponding spectral lines. The essential differences and similarities between the Born-Jordan paper of June 1925 [13] and Heisenberg's paper of July 1925 [4] are:

I. Born-Jordan: all elementary dynamics is discontinuous; there are no continuous orbits and no virtual oscillators; radiation is quantized; the interaction of atoms and radiation occurs by discontinuous emission and absorption of photons. Heisenberg's re-interpretation: all elementary processes are continuous in space and time; radiation is classical; the emission of continuous radiation is due to virtual oscillators of the corresponding frequency.

II. Born-Jordan determine Einstein's probabilities for spontaneous emission and for field induced emission and absorption of photons. The probabilities determine the intensities of spectral lines. Heisenberg's re-interpretation: he relies on Bohr-Sommerfeld quantization to determine quantized energies of virtual oscillators. This, in his opinion, constitutes the "integration of the equations of motion". Virtual oscillator amplitudes determine the intensities of spectral lines.

III. Born-Jordan represent the atomic dipole moment by "quantum vectors" $A(n, n-\tau)$, which are identical to the future "matrix elements". Discontinuous action intervals $\tau h$ correspond to the emission of photons of energy $\epsilon_{photon} = \epsilon_n - \epsilon_{n-\tau}$. Heisenberg's re-interpretation: he adopts and re-interprets Born-Jordan's quantum vectors $A(n, n-\tau)$; he restores continuity in time by a multiplicative phase factor $e^{2\pi i\, \nu(n, n-\tau)\, t}$, devised to fulfill Bohr's frequency condition $h\nu(n, n-\tau) = \epsilon_n - \epsilon_{n-\tau}$. The $A(n, n-\tau)\, e^{2\pi i\, \nu(n, n-\tau)\, t}$ are interpreted as representations of position coordinates of virtual oscillators.

IV. Born-Jordan motivate the elimination of continuous variables by "The true laws of nature are determined only by such quantities, which are observable in principle" 37. Nature is discontinuous at the elementary quantum scales; continuous variables lose their physical significance. Heisenberg's re-interpretation: he adopts the Born-Jordan principle of retaining only observable quantities, but re-interprets the reason for doing so. Although he still maintains that continuous electronic orbits and virtual oscillators constitute the underlying subatomic physics, he declares the subatomic dynamics to be "invisible in principle". This "invisibility in principle" of subatomic orbits and oscillators is presented as an ad hoc postulate, without further justification.

Heisenberg himself was not satisfied with this ad hoc postulate, and during the following years he searched for supporting arguments. The "indeterminacy paper" of March 1927 [26] contains his "explanation".

The "Measurement Problem"

When Born and Jordan published the commutation relations in September 1925 [1], neither Heisenberg nor Bohr recognized their physical significance. Heisenberg remained fully attached to the classical concepts of Bohr's old Quantum Theory; continuity in space-time of all physical processes constituted its basic assumption, and exact values of all physical quantities at all times were taken for granted. But how could the commutation relations be reconciled with classical concepts? In particular, the apparent incompatibility of precise values of position and momentum called for an explanation. Heisenberg's "indeterminacy paper" of 1927 [26] seemingly provided the answer. In order to "explain" the physical content of the commutation relations, he invented the "measurement problem": the measurement of a physical quantity q should necessarily cause unavoidable and uncontrollable disturbances of its canonically conjugated partner p. For example: both particle position and momentum have exact values at all times; these exact values may be determined separately. But while the position of a particle may be determined exactly (and thereby known), the act of position measurement necessarily disturbs its momentum, thereby precluding its simultaneous determination (or knowledge). The accent here is on "necessarily" disturbs, and on "simultaneous determination"; only the simultaneous determination of canonically conjugated quantities should be prohibited by unavoidable and uncontrollable disturbances produced by measurements. This reasoning was extended to all other physical quantities; Heisenberg remained convinced that "All notions, which are used for the description of mechanical systems in classical theory, may be defined exactly also for atomic processes, in analogy to the classical notions". 38

While Born's discontinuous quantum physics contains quantum uncertainties of all physical quantities as constitutive elements, Heisenberg's interpretation replaces quantum uncertainties by indeterminacies resulting from experimental disturbances. Heisenberg's "explanation" of the physical content of the commutation relations is fundamentally wrong. Of course, many real measurements do disturb the system to be measured; that is true for many experiments in classical physics and remains so for quantum physics. But Heisenberg's claim that a measurement necessarily disturbs the system to be measured is false. Particle position measurements of atoms in crystalline materials provide the crucial example. Diffraction experiments are the standard method: photons or neutrons or electrons are scattered off the crystal, and the observed Bragg peaks provide the information necessary to determine the atomic positions. Bragg scattering processes are purely elastic; they not only leave the atomic positions unchanged, they also do not change their momenta. The momentum transfer from the scattered particles (e.g. neutrons or photons) to the crystal is absorbed by the rigid crystal, while the momenta of the individual atoms bound in the crystal remain unchanged. Of course, inelastic scattering processes changing atomic momenta are possible, too; they are part of the background contributions in addition to the Bragg peaks. 39

Quantum uncertainties, not indeterminacies, are constitutive elements of quantum physics. Heisenberg's fundamental error invalidates all further conclusions which he invokes to justify the retention of classical concepts. In order to quantify the indeterminacies, Heisenberg attributes dual properties, i.e. particle and wave character, to individual photons and other particles; wavelength λ and momentum p are related by λ = h/p. In order to measure the position of a particle X, photons (or other particles) are scattered off particle X. The position indeterminacy of particle X should be given by the wavelength λ of the photons. Due to the Compton effect, the scattering process then should cause a momentum disturbance of particle X of order p = h/λ. The resulting product of position and momentum indeterminacies then should be of order h. Heisenberg accepts that Matrix Mechanics describes discontinuous quantum transitions; but, in contrast to Born, he maintains the space-time continuum of Newtonian physics and attributes the origin of discontinuities to disturbances caused by measurements. Thereby, "neither the mathematical scheme of Quantum Mechanics requires a revision, nor is a revision of space-time geometry for small distances and times necessary". 40

38 "Alle Begriffe, die in der klassischen Theorie zur Beschreibung eines mechanischen Systems verwendet werden, lassen sich auch für atomare Vorgänge analog den klassischen Begriffen exakt definieren."

39 Further details about Bragg scattering, background contributions, and measurements of position uncertainties are contained in the appendix of ref. [5].

40 "das mathematische Schema der Quantenmechanik [wird] keiner Revision bedürfen; ebensowenig wird eine Revision der Raum-Zeit Geometrie für kleine Räume und Zeiten notwendig sein."

Furthermore, the retention of classical continuity in space and time required a justification for the statistical nature of quantum transitions. Again, the measurement problem is called to the rescue: "The fact that Quantum Theory may only provide the probability of electron positions (in the 1S-state for example) may, according to Born and Jordan, be viewed as characteristic and statistical elements of Quantum Theory in contrast to classical theory. But we might also state, as Dirac does, that statistics is introduced by our experiments". 41
Heisenberg is in accord with Dirac; he specifies: "We did not assume that Quantum Theory is an essentially statistical theory in contrast to classical theory. ... The sharp formulation of the causality law, 'if we know the present exactly, we are able to calculate the future', is not wrong due to the second part of the sentence, but because the precondition is wrong." 42 The statistical outcome thereby is attributed to imprecise knowledge of initial conditions. This argument applies equally to classical physics, which Heisenberg readily admits: "This would not be different in classical theory." 43 Imprecise knowledge of initial conditions is invoked to retain the concept of classical electron orbits. Starting from some initial conditions, where both p and q are supposed to be known with some indeterminacy, Heisenberg claims that "within the limits of the indeterminacies, the values of q and p obey classical equations of motion, as can be deduced directly from the quantum mechanical laws ṗ = −∂H/∂q; q̇ = ∂H/∂p. As mentioned, the trajectory may only be computed statistically from the initial conditions, a fact which may be considered to result from the essential indeterminacy of the initial conditions." 44 Although these remarks suggest that, in Heisenberg's opinion, there should exist an underlying world which is deterministic, he declares such speculations to be meaningless: "Physics should merely provide a formal description for relations between observations." 45 This remark indicates the different philosophical attitudes of Born and Heisenberg concerning the question: "What should a physical theory in general, and Quantum Theory in particular, achieve?" Born's aim had been a logically consistent understanding of quantum physics; Heisenberg is merely aiming at a formal description for relations between observations. Pursuing this objective, Heisenberg was forced into one further ad hoc hypothesis: the "reduction of the wave packet". Describing the initial position indeterminacy of an electron at some initial time by a wave packet, the wave packet should, according to Heisenberg's own reasoning, spread out in space with increasing time. Experimentally, however, single electrons are only observed as particles, not as extended waves; Heisenberg's "explanation" again involves the measurement problem: "Every determination of position reduces the wave packet
to its original size." 46 This reduction of the wave packet is a special version of the general "collapse of the wave function". If, as is assumed by the Copenhagen interpretation, the time-dependent Schrödinger equation [4] describes the continuous evolution of a particular physical system, then its mathematical form predicts the evolution from some particular initial state into a superposition of many different physical states at some later time, in contradiction to experimental observations. Here again the "measurement problem" is claimed to provide the explanation: the act of measurement causes the "wave function collapse" to the state actually observed. The collapse itself remains unexplained, which really means that this type of "explanation" does not explain anything. But that, according to Heisenberg's and Bohr's philosophy, does not constitute a problem, since "Physics should merely provide a formal description for relations between observations." 47

41 "Darin, dass in der Quantentheorie zu einem bestimmten Zustand, z. B. 1S, nur die Wahrscheinlichkeitsfunktion des Elektronenortes angegeben werden kann, mag man mit Born und Jordan einen charakteristisch statistischen Zug der Quantentheorie im Gegensatz zur klassischen Theorie erblicken. Man kann aber, wenn man will, mit Dirac auch sagen, dass die Statistik durch unsere Experimente hereingebracht sei."
42 "Dass die Quantentheorie im Gegensatz zur klassischen eine wesentlich statistische Theorie sei in dem Sinne, dass aus exakt gegebenen Daten nur statistische Schlüsse gezogen werden könnten, haben wir nicht angenommen. ... An der scharfen Formulierung des Kausalgesetzes: 'Wenn wir die Gegenwart genau kennen, können wir die Zukunft berechnen', ist nicht der Nachsatz, sondern die Voraussetzung falsch."
43 "Dies wäre in der klassischen Theorie keineswegs anders."
44 "die Werte von p und q innerhalb dieser Genauigkeitsgrenzen den klassischen Bewegungsgleichungen Folge leisten, kann direkt aus den quantenmechanischen Gesetzen ṗ = −∂H/∂q; q̇ = ∂H/∂p geschlossen werden. Die Bahn kann aber, wie gesagt, nur statistisch aus den Anfangsbedingungen berechnet werden, was man als Folge der prinzipiellen Ungenauigkeit der Anfangsbedingungen betrachten kann."
45 "Die Physik soll nur den Zusammenhang der Wahrnehmungen formal beschreiben."

The Complementarity Principle

In April 1928, Bohr published a review [27], essentially in agreement with Heisenberg. The space-time continuum of Newtonian physics and the conviction that all physical quantities have exact values at all times remained as prerequisites for all further conclusions. Slight differences between Heisenberg and Bohr are restricted to the emphasis attributed to discontinuities or wave properties. While Heisenberg considered the particle concept and discontinuities caused by measurements to be of primary importance, Bohr emphasized the wave character of particles. During the years preceding the discovery of the new quantum laws, Bohr had steadfastly rejected Einstein's photon concept of radiation; diffraction phenomena, in Bohr's view, provided definite proof of the wave character of radiation. Even when the discovery of the Compton effect forced Bohr to admit the existence of light quanta, he persisted: particle and wave character of photons should not be considered to be mutually exclusive, but "complementary". Similarly, observation of reflection maxima in scattering experiments of electrons off crystalline surfaces "indisputably demonstrated" the wave character of electrons and other particles as well. Particle-wave duality, elevated to a "Complementarity Principle", played the crucial role in Bohr's interpretation of quantum physics. Nevertheless, the classical concepts of the old Quantum Theory should still provide the framework for the description of all observations. Bohr argued that all measurements have to rely on macroscopic, i.e. necessarily classical, instruments. When measuring the properties of a quantum system, e.g. an atom, measuring apparatus and atom should be viewed as one interrelated total system; no separate reality should be attributed to the atom alone. And since the measuring instrument has to be described classically, Bohr concluded that the description of atomic properties, too, must be in terms of classical concepts. The limits of obtainable accuracy in measuring physical quantities of quantum systems should be determined by the limits of accuracy of the combined system, consisting of quantum object and measuring apparatus.
Applying this reasoning to the measurement of electron position by optical means, Bohr invoked the resolution limit of classical microscopes as the limit to the accuracy of electron position. Assuming that arbitrary accuracy may be achieved using correspondingly shorter wavelengths, the particle character of photons should cause correspondingly large momentum transfers due to the Compton effect, leading to larger and larger inaccuracies of momentum. Bohr relies on two hypotheses, which are both fundamentally wrong. I) Classical instruments may be made arbitrarily precise; they may determine exact values of physical quantities (e.g. position), but do not allow the simultaneous determination of the canonically conjugated partner (here momentum). This was Heisenberg's main argument when he invented the measurement problem; Heisenberg's fundamental error has already been pointed out above. Bohr's second argument: II) The limit of accuracy is given by the instrumental resolution. This second argument is not only wrong, but in complete contradiction to experimental practice. Although progress in experimental methods was often essential in understanding quantum phenomena, limited instrumental resolution has not prevented conclusions about scales far beyond the accuracy of the instruments themselves. Progress in quantum physics has rather been obtained from new insight, which provided a consistent understanding of observations. Thus, information about subatomic length scales is obtained from high momentum transfer scattering experiments, relying on detectors whose accuracy is many orders of magnitude worse. But a consistent understanding of quantum physics was not Bohr's objective, nor Heisenberg's [26]: "Physics should merely provide a formal description for relations between observations." And if mutually exclusive notions had to be invoked to describe different observations, "it is a question of convenience at what point the concept of observation involving the quantum postulate with its inherent 'irrationality' is brought in. ... The two views of the nature of light are rather to be considered as different attempts at interpretation of experimental evidence" [27].

The Consolidation of the Copenhagen Interpretation

By 1930, Wave Mechanics had superseded Matrix Mechanics almost completely; Quantum Theory was identified with the 'Schrödinger equation'. The space-time continuum of Newtonian physics as prerequisite for the understanding of quantum physics was taken for granted. The Copenhagen interpretation had gained widespread recognition; classical concepts, in particular continuity in space and time, remained its central doctrine. The book by Born and Jordan, "Elementare Quantenmechanik" [18], published in 1930, constituted a belated attempt to stem the tide. Right away on the first page of §1 we find: "There can be no question of an "explanation" of the unfamiliar quantum laws by means of reduction to classical concepts; on the contrary, the fundamental and primary character of the basic quantum theoretical assumptions emerged clearly only due to new developments. Progress consists precisely in abandonment of the remains of classical views; as a result a self-contained theory emerged, which allows one to describe all atomic processes consistently and which contains the classical theory as a special limit." 48 Pauli, who had become the most fervent advocate of the Copenhagen interpretation, reacted with a scathing review [28].
Start and finish are outright ridicule: "The book is the second volume of a series, which explains aim and meaning of the n'th volume by the virtual existence of the (n+1)'th volume. ... The features of the book as far as print and paper are concerned are excellent." In between, Pauli criticizes the algebraic methods as "inhibiting the insight into scope and internal logic (!) of the theory ... such as the statistical interpretation of Quantum Mechanics (!)". Condescending advice is given to Born and Jordan (authors of the fundamental laws of Quantum Theory!) not to delve into statements of principle, such as the postulate "to represent each physical quantity by a matrix", and to leave the interpretation to the true owners of the understanding, i.e. the followers of the Copenhagen interpretation. "The meaning of such a "representation" of a physical quantity in reality can be understood only due to later conclusions", i.e. Heisenberg's explanation of "indeterminacies", Bohr's stationary states, collapse of the wave function, complementarity, and particle-wave duality. Therefore Born and Jordan should "restrict the theory to the methods of measurements of particle position and momentum or of energy eigenvalues of stationary states and to the postulates of possible measurements obtained from the general wave-particle experience." If Quantum Theory is widely not understood until today, it is due to the "success" of the Copenhagen interpretation. Majority opinion attributed the Nobel prize for 1932 to Heisenberg "for the creation of quantum mechanics". During the following decades, the Copenhagen interpretation constituted the basis for almost all textbooks on Quantum Mechanics. This heritage dominates the teaching of elementary Quantum Theory until today.

Conclusion

The scientific revolutions of the first quarter of the 20th century, Relativity Theory and Quantum Theory, both rest on fundamental principles imposed by universal constants. Relativity Theory is based on the velocity of light c being a universal constant. The velocity of light c not being infinite requires a redefinition of space-time on large and cosmological scales. Einstein recognized that there is no space-time given "a priori", independent of all empirical facts. Physical notions of space and time are related to measurements, and, the velocity of light not being infinite, measurements of spatial distances and time intervals depend on the reference system from which the measurements are performed. Einstein required that this should be reflected by the basic laws of Relativity Theory. The primary aim of this paper is to demonstrate that quantization of action in terms of a finite, i.e. non-vanishing, universal quantity h requires a redefinition of space-time on atomic and subatomic scales. It was Max Born who discovered the key to understanding quantum physics. The step taken was even more radical than Einstein's. While Relativity Theory still retained the continuum, albeit reference dependent, Born recognized that the continuum as prerequisite to all understanding must be replaced by a discrete manifold on the elementary quantum scale. Action variables may only change by integer multiples of Planck's quantum of action, requiring all other physical quantities to change by finite steps as well. Furthermore, the discreteness of all events in nature eliminates the justification for determinism, implied by the differential equations of classical physics. All elementary processes in nature are discrete and governed by statistical laws.
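In symbols, the quantization principle just summarized may be written as follows (a minimal rendering; the symbol J for an action variable is our notational assumption):

```latex
% Action variables take, and change by, integer multiples of Planck's constant h:
\[
  J = n\,h, \quad n \in \mathbb{N};
  \qquad
  \Delta J = \tau\,h, \quad \tau \in \mathbb{Z},
\]
% so no physical quantity coupled to an action variable can vary continuously in time.
```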
And, similar to Einstein, Born required that this should be reflected by the basic laws of Quantum Theory. Consistent with the discrete character of nature, Born and Jordan derived the basic laws of Quantum Theory in discrete mathematical form, Matrix Mechanics. Born-Wiener [19] and Schrödinger [3,4] soon replaced matrices by field theoretical forms, easier to handle mathematically. But, in addition to easier mathematics, the field theoretical forms led to fundamental misunderstandings. The variables r and t of Schrödinger's wave function suggested Newtonian space-time coordinates; the dependence on the continuous variable t was interpreted as continuous variation of physical quantities in time. This misunderstanding was reinforced when Matrix and Wave Mechanics were shown to be equivalent. Born on one side and Schrödinger on the other had opposing views of this equivalence, which may be distinguished as "Born equivalence" vs. "Schrödinger equivalence". The latter became official doctrine, finalized by the supposed equivalence of the "Schrödinger representation" (states are time dependent) and the "Heisenberg representation" (operators are time dependent, eq. 10). In section 5 ("Time in Quantum Physics") the difference between time t as external parameter and time for closed systems has been highlighted. In §22 of the book "Elementare Quantenmechanik" (ref. [18]), Born specifies that time t in the Heisenberg representation must be considered as an external parameter, not directly relevant for the temporal behavior of individual quantum systems. But Born's insistence was completely lost on the scientific community. The letter t was interpreted in Schrödinger's sense as Newtonian time. Discontinuous changes of action variables by integer multiples of Planck's constant h represent the key to understanding Quantum Theory. The mathematical implementation of this basic requirement resulted in commutation relations for canonically conjugated physical quantities. All further conclusions are direct consequences of this quantization condition. Most importantly: there is no continuous physical time. Concerning the origin and physical significance of commutation relations, the scientific community widely ignored Born's reasoning. Field theoretical representations and the advent of quantum field theory consolidated the general conviction that Quantum Theory retains Newtonian space-time notions, physical processes occurring continuously in space-time. But changing the mathematical representation from discrete matrix calculus to operators and functions of continuous variables does not alter the physical content. All of physics is contained in matrix elements, and the use of commutation relations guarantees that matrix elements are independent of the particular representation used to obtain them. Actually, a close look at the application of Quantum Theory to the analysis of experimental results reveals that "Born equivalence" does constitute general practice! Typically, the original assumption of continuity in time is abandoned at the very end: experimental evidence is incompatible with what a continuum theory would predict. To obtain accord with observed facts, the application of Born's statistical interpretation becomes necessary, which implicitly means that discreteness has been reintroduced through the back door. Any type of "collapse of the wave function" is equivalent to Born's statistical interpretation and the recognition of discreteness.
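The representation-independence of matrix elements invoked above can be made explicit in standard modern notation (a sketch, assuming a time-independent Hamiltonian H and ħ = h/2π):

```latex
% Schrödinger picture: states evolve, operators are fixed.
% Heisenberg picture: operators evolve, states are fixed.
\[
  \langle \varphi(t) \,|\, A \,|\, \psi(t) \rangle
  \;=\; \langle \varphi(0) \,|\, e^{iHt/\hbar} A\, e^{-iHt/\hbar} \,|\, \psi(0) \rangle
  \;=\; \langle \varphi(0) \,|\, A(t) \,|\, \psi(0) \rangle,
\]
\[
  A(t) \;\equiv\; e^{iHt/\hbar} A\, e^{-iHt/\hbar}.
\]
% Every observable prediction is such a matrix element, so both
% "representations" yield identical physical content.
```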
While typical language upholds "Schrödinger equivalence", final practice amounts to "Born equivalence". A logically consistent understanding of Quantum Theory is obtained by going back to the origin: Max Born not only provided the first representation of its fundamental equations (together with Pascual Jordan [1]), he also recognized the basic principles of quantum physics. Nature is discrete and statistical at the elementary level: "Action variables may only change by integer multiples of h, requiring all other physical quantities to change by finite steps as well".
Investigations on Quench Recovery Characteristics of High-Temperature Superconducting Coated Conductors for Superconducting Fault Current Limiters

Superconducting fault current limiters (SFCLs) are attracting increasing attention due to their potential for use in modern smart grids or micro grids. Thanks to the unique non-linear properties of high-temperature-superconducting (HTS) tapes, an SFCL is invisible to the grid and responds faster than traditional fault current limiters. The quench recovery characteristic of an HTS tape is fundamental for the design of an SFCL. In this work, the quench recovery time of an HTS tape was measured for fault currents of different magnitudes and durations. A global heat transfer model was developed to describe the quench recovery characteristic and compared with experiments to validate its effectiveness. Based on the model, the influence of tape properties on the quench recovery time was discussed, and a safe margin for the impact energy was proposed.

Introduction

The smart grid technology market is booming due to demands for the automatic management of complex power grids [1]. The incorporation of renewable energy sources such as wind, solar or hydro and other distributed energy sources is generating unprecedented challenges to power generation, transmission and distribution in terms of efficiency, reliability and flexibility [2,3]. The development of power grid technology is being revolutionized rather than merely improved; for example, many direct-current grid projects are being tested and have proved promising, especially for power transmission and distribution in data centres, electrical ships or aircraft, and self-sustainable micro grids [4][5][6][7]. However, the smart grid faces important challenges associated with fault currents, which are occurring at rising rates and have larger peak values and faster propagation speeds [8,9]. Therefore, in addition to intelligent parts for the realization of real-time monitoring and control of grid operation, smart grids feature advanced power applications such as fault current limiters to reduce the fault current rapidly and effectively [10,11]. A superconducting fault current limiter (SFCL), which mainly consists of high-temperature-superconducting (HTS) coils, generates almost no Joule losses during normal operation; however, it can suppress fault currents much faster and more efficiently than conventional ones, let mechanical current breakers function, and recover to normal operation quickly when an electrical fault happens [12,13]. Features such as "invisibility" and self-healing (i.e., that the SFCL can automatically switch between the fault-current-limiting and normal operation states) mean that SFCLs meet the needs of smart grids [14]. With increasing interest in the use of SFCLs in modern grids, the feasibility, fabrication and management of SFCLs are being considered in laboratories and even tested in grids [15][16][17][18]. For example, 10 kV/200 A, 40 kV/2 kA and 220 kV/1.5 kA resistive-type SFCLs were tested in grids in China by Shanghai Jiaotong University, the Institute of Electrical Engineering and Beijing Jiaotong University and proved to be effective [19][20][21]. However, many scientific and technical problems remain in building a practical SFCL, due to the complex non-linear, temperature- and magnetic-field-dependent properties of superconducting materials.
An SFCL works by transforming into the normal state (namely, quench) to restrain fault currents, and by recovering to the superconducting state to restore normal grid operation. The quench recovery time, which is the time taken by an SFCL to restore normal operation, is crucial as it determines the time window for fault isolation and load current compensation [22]. In the literature, experiments and simulations have been reported that investigate the quench recovery characteristics of SFCLs. From the macroscopic point of view, the topology of SFCLs was discussed qualitatively, mainly for determining the current-limiting efficiency [23][24][25]. For more precise descriptions of quench recovery processes, measurements and calculations on HTS tapes were carried out, which clarified the physical nature of the process [26][27][28][29][30][31][32][33]. However, when a practical SFCL is designed and built for operation, how the quench recovery time changes with a different fault current or energy needs to be known. Approaches for such a quick and effective estimation of the quench recovery time directly from the impact energy of a practical fault current have not been reported. The State Grid Corporation of China has developed a resistive-type 10 kV/100 A SFCL prototype and tested it in a grid. In the present work, as part of the theoretical research in the project, the quench recovery time of an HTS tape was investigated numerically and experimentally. Fault currents of different magnitudes and durations were applied to an identical HTS sample to measure its non-linear response at 77 K. A simple but effective global heat transfer model was proposed to quickly estimate the recovery time from the impact energy. With consideration of the temperature-dependent properties of the different composite materials in HTS tapes, the model fits well with the experimental results. Based on the model, the influences of the geometry and material composition of HTS tapes on quench recovery were discussed to guide the future design of SFCLs. In addition, the damage temperature of HTS tapes was verified by the model and experiment. Furthermore, a safety margin of the impact energy for avoiding irreversible damage of HTS tapes was suggested.

Sample

The characteristics of the HTS tape used in the experiment are listed in Table 1, and a picture of the sample is displayed in Figure 1. The time-dependent voltage and current were measured by the four-point method. The copper terminals were 20 cm apart and the distance between the two voltage leads was 18 cm. The voltage taps were soldered onto the YBCO tape supported by a G10 plate. The YBCO copper tape was wrapped with kapton films to simulate actual working conditions. During the experiments, the entire experimental set-up was kept at 77 K by being immersed in liquid nitrogen under standard atmospheric pressure. When the measurement started, the programmable power supply output a current pulse of large amplitude (detailed in Section 3) to impact the tape. The tape was subsequently quenched and lost its superconducting characteristics. Immediately after the impact current pulse was over, the power supply was switched to output a small constant current of 0.5 A. This value of current allows continuous measurement of the sample without producing non-trivial heat. The sample then started to be cooled down by the liquid nitrogen, recovering superconductivity at around 90 K and finally reaching 77 K.
Throughout the measurement, the voltage drops across the sample and R_0 were recorded by the nanovoltmeter and the data acquisition card at sampling rates of 1000 Hz and 10 Hz for the quench process and recovery process, respectively. The amplitude of the current flowing through the sample was obtained from the voltage measurement across R_0.

Results

In this work, sixteen groups of experiments were carried out on the sample. As shown in Table 2, impact currents of different amplitudes and durations were applied to the sample. Currents and voltages were measured simultaneously and the corresponding impact energy was calculated by their integration. The quench recovery time, defined as the time needed to drop from the maximum resistance of the tape to around zero, was extracted. The quench and recovery processes of the HTS tape can be identified from the resistance of the tape. Figure 3 compares the resistance of the YBCO tape under different impact currents of 370 A, 480 A, 580 A, and 690 A, with the same pulse duration of 500 ms. As shown in Figure 3, the resistance first ascends to a peak value and then descends to zero with time, which represents the quench and recovery processes of the HTS tape. Once the YBCO tape is completely quenched, the incoming current travels the path of least resistance and therefore passes through the copper layers on both sides in our experimental setup. The peak resistance occurred at 500 ms, when the impact current ended. The peak resistance of the sample increased with the increasing amplitude of the impact current. After the peak, the resistance reduced to zero, suggesting that the sample recovered to the superconducting state. The descending slopes of the four curves are almost the same, which will be further discussed in the next section. Figure 4 compares the resistance of the YBCO tape under the same impact current of 580 A for different pulse durations of 250 ms, 500 ms, 750 ms and 1000 ms. As shown in Figure 4, the ascending slopes of the four curves are identical for equal impact currents. The peak resistances occur at different times according to when the impact currents end. The peak resistance and quench recovery time increase with the duration of the current pulse. Similar to Figure 3, the descending slopes of the four curves are almost the same. At the end of the experiment, the YBCO tape was found to be damaged under an impact current of 690 A and a pulse duration of 1000 ms, as shown in Figure 5. With the observed drop-off of the voltage and delamination of the tape, the superconducting tape was irreversibly damaged.

Numerical Model

As mentioned above, the resistivity of the sample, which depends on the temperature, can indicate the state of the sample; as a result, we proposed to use the global temperature as the state variable to describe the quench and recovery processes. A global heat transfer model was developed, inspired by earlier heat transfer models [34,35]. During the quench process, a current pulse of amplitude I_a was supplied to the sample. The temperature quickly increased, and the process can be described by a differential equation of the form

I_a V_HTS − q(T) = c(T) dT/dt. (1)

In this equation, the product of the current I_a and the voltage V_HTS is the electrical power absorbed by the sample, and q(T) is the heat dissipation from the sample to the liquid nitrogen. The term on the right side of the equation describes the temperature increase of the sample.
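As an illustration of the energy bookkeeping above, the impact energy Q_i = ∫ I·V dt can be computed directly from the logged current and voltage samples. The sketch below is ours, not the paper's code; the arrays and the synthetic pulse are hypothetical stand-ins for data recorded at the 1000 Hz quench-phase sampling rate:

```python
import numpy as np

def impact_energy(i_samples: np.ndarray, v_samples: np.ndarray,
                  sample_rate_hz: float = 1000.0) -> float:
    """Impact energy Q_i = integral of I(t)*V(t) dt over the pulse,
    evaluated with the trapezoidal rule on uniformly sampled data."""
    power = i_samples * v_samples          # instantaneous power, W
    dt = 1.0 / sample_rate_hz              # sample spacing, s
    return float(np.trapz(power, dx=dt))   # energy, J

# Example with synthetic data: a 500 ms, 580 A square pulse across a
# tape whose voltage ramps as it quenches (illustrative numbers only).
t = np.arange(0.0, 0.5, 1.0 / 1000.0)
i = np.full_like(t, 580.0)                 # current, A
v = np.linspace(0.0, 2.0, t.size)          # voltage, V
print(f"Q_i = {impact_energy(i, v):.1f} J")
```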
The temperature-dependent heat capacity c(T) is a weighted value calculated from the heat capacities of copper and Hastelloy [36], since they compose nearly all of the mass of the tape. The integrated form of Equation (1) is

Q_i − Q_d = ∫ c(T) dT, (2)

where the integral runs from the bath temperature to the peak temperature, t_p is the pulse duration, Q_i = ∫₀^{t_p} I_a V_HTS dt is referred to as the impact energy as shown in Table 2, and Q_d = ∫₀^{t_p} q(T) dt is the total energy dissipated from the tape to the liquid nitrogen during the quench process. In the literature, the quench process is often supposed to be adiabatic for simplification, which means [37,38]

Q_i = ∫ c(T) dT. (3)

The applicability of this assumption will be discussed in the next section. During the recovery process, the temperature slowly decreased following

c(T) dT/dt = −q(T). (4)

During this process, the heat dissipation q(T) was dominated by convection cooling of the liquid nitrogen; as a result, q(T) was determined by the surface area of heat exchange and the temperature difference between the sample and the liquid nitrogen,

q(T) = h(T) S (T − T_0), (5)

where S is the double-sided surface area of the sample, T_0 is the boiling point of liquid nitrogen (77 K) and h(T), with the unit of W/(m² K), is the convection cooling coefficient. The values of h(T) were taken from reference [36]. In this work, we are mainly interested in the recovery time, so instead of solving the equations from the very beginning, we can also solve Equation (4) from the time when the recovery process starts. The initial condition for Equation (4) is then the peak temperature of the tape when the quench process ends. This peak temperature can be evaluated from the measured resistivity of the tape,

ρ = R A_s / L_s, (6)

where A_s is the effective conducting area and L_s is the length of the sample. The sample contained several metal layers, among which copper contributes the main conducting route. For simplicity, the effective conducting area A_s was calibrated by experiments. The resistivity was assumed to be the value of copper at 90 K when linear resistance first appeared after the impact current was applied. Then the temperature as a function of time can be estimated by simply comparing this resistivity to the temperature-dependent resistivity of copper [39]. This approach provides more reasonable results, because it does not take the adiabatic assumption for the quench process as Equation (3) does. The model can be easily solved by time-step iteration. In the next section, the temperatures and recovery time will be calculated using the global heat transfer model and compared with experiments to discuss the feasibility and application of the model.

Discussion

The influence of the impact energy on the recovery process was investigated through comparison of the numerical model and experiments. The resistivity of the tape first increased with the temperature to a maximum value, indicating a quenched state, and then decreased to zero, indicating recovery of the superconducting state. The resistivity and temperature of the tape reflect the state of the tape. The peak resistance or temperature is important for indicating the initiation of quench recovery. The peak resistance of the tape is plotted against the impact energy in Figure 6. With increasing impact energy, the peak resistance increases for any pulse duration. Furthermore, the peak resistance increases linearly with the impact energy. For the same impact energy, a shorter pulse duration results in a larger peak resistance. The reason for this is that a shorter pulse results in a smaller Q_d in Equation (2), and thus a larger peak current and peak resistance.
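To make the time-step iteration of the model concrete, the following sketch integrates Equation (4) with Equation (5) from an assumed peak temperature down to the 90 K critical temperature. The property functions c_tape and h_conv are rough illustrative stand-ins, not the tabulated data of reference [36], so the printed number is qualitative only:

```python
T0 = 77.0   # liquid nitrogen bath temperature, K
TC = 90.0   # critical temperature of the YBCO tape, K

def c_tape(T: float) -> float:
    """Illustrative total heat capacity of the tape, J/K (a stand-in
    for the weighted copper/Hastelloy data of ref. [36])."""
    return 0.5 + 0.004 * T

def h_conv(T: float) -> float:
    """Illustrative convection coefficient h(T), W/(m^2 K): film boiling
    at large superheat gives a smaller h than nucleate boiling."""
    return 250.0 if (T - T0) > 45.0 else 1200.0

def recovery_time(T_peak: float, S: float, dt: float = 1e-3) -> float:
    """Explicit time-step iteration of Eqs. (4)-(5):
    c(T) dT/dt = -S * h(T) * (T - T0), integrated until T <= TC."""
    T, t = T_peak, 0.0
    while T > TC:
        q = S * h_conv(T) * (T - T0)   # heat flow to the bath, W
        T -= q / c_tape(T) * dt        # Euler temperature update, K
        t += dt
    return t

# Example: double-sided surface of a 4 mm wide, 20 cm long sample.
S = 2 * 0.004 * 0.20
print(f"recovery time from a 300 K peak: ~{recovery_time(300.0, S):.2f} s")
```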
Figure 7 illustrates the effect of the impact energy on the peak temperature for pulse durations of 250 ms, 500 ms, 750 ms and 1000 ms, respectively. With increasing impact energy, the peak temperature increases for every pulse duration. For the same impact current, the peak temperature increases with the impact energy, and the trend is the same as that of the peak resistance shown in Figure 6. The reason is that the resistivity of the stabilizers increases almost linearly with temperature above 100 K. Figure 8 shows the temperature change as a function of time during the recovery process, calculated from Equation (4) using the iterative time-step method. The temperature dependence of the convection coefficient and the heat capacity of the tape were taken into account. The recovery time can be recognized as the time needed for the temperature to drop from the peak value to 90 K, the critical temperature of the tape, as shown by the red dotted line in Figure 8. As shown in Figure 8, for any impact energy, the temperature decreases with time at varying rates. The temperature decrease was slower above 120 K but was boosted below 120 K during the recovery process. This can be explained by the fact that the heat exchange process is dominated by film boiling at higher temperatures. The gas film hinders the heat exchange between the liquid nitrogen and the tape in the film boiling state, so the heat transfer coefficient of film boiling at higher temperatures is smaller than that of nucleate boiling at lower temperatures. Moreover, the heat capacity of the stabilizers and substrate decreases with decreasing temperature, which makes their temperature more sensitive to heat loss. The recovery time can be extracted as the crossing points of the curves with the horizontal axis for further discussion. Figure 9 presents the recovery time calculated by solving Equations (1)-(5) under the assumption that the quench process is adiabatic. The calculated values are compared with experiments. As shown in Figure 9, the curves of recovery time versus impact energy almost overlap for different pulse durations, which demonstrates that the recovery time is mainly determined by the impact energy. This observation holds for both the experimental and the calculated results. However, there exists a large deviation of the calculated curves from the experimental ones. As marked in Figure 9, the calculated recovery time was generally overestimated by 2 s. This error results from the idealized assumption that the quench process is adiabatic. Heat exchange during the quench process should not be neglected; in fact, a common observation is that quench cannot be triggered when the impact energy is not large enough and the heat exchange rate between the tape and the liquid nitrogen is relatively fast. Figure 10 presents results calculated using the modified model by solving Equations (4)-(6). The peak temperatures obtained from Figure 7 were used as initial conditions instead of considering the quench process. As shown in Figure 10, the calculated and experimental results are in good agreement, which validates the numerical model. Similar to Figure 9, the calculated results show clear trends that the impact energy dominates the recovery time; however, unlike Figure 9, the pulse duration does play a role as well. The shorter the pulse duration, the longer the recovery time, which can be explained by the fact that the time for loss dissipation during quench is shorter for shorter pulse durations.
The peak temperature as a function of the impact energy in Figure 8 shows equivalent trends, correspondingly. As mentioned in Section 3, the sample was irreversibly damaged by delamination under an impact current of 690 A and a pulse duration of 1000 ms. The occurrence of such damage is considered to be directly related to the peak temperature. A supplementary experiment using an impact current of 860 A and a pulse duration of 500 ms was carried out, as marked in Figures 6, 7 and 10. This sample was also damaged. As shown in Figure 7, the peak temperatures of the two damaged samples were both around 530 K. This observation is consistent with [40][41][42]. In other words, this threshold temperature determines the possibility of recovery. When the impact energy is large enough to increase the temperature of the HTS tape to around 530 K, the tape will be delaminated or damaged. As discussed above, the validity of the model has been justified by the agreement between the experimental and calculated results. In the literature, factors that influence the recovery time have been reported. For example, Pavol suggested lowering the thermal capacity/wetted surface ratio of the tape to accelerate recovery and demonstrated the effect in a numerical simulation [34,43]. Hellmann also reported that surfaces with a macroscopic texture and increased lamination thickness can achieve a higher heat flux to surrounding coolants [44]. Furthermore, Maeda found that recovery characteristics were much improved by pressurization of the liquid nitrogen [27]. Based on the model proposed in this work, the influence of different factors on the quench recovery time can be investigated or predicted in a simpler and more effective way. Next, the geometry of the tape will be discussed; moreover, the impact energy that can result in irreversible damage of the tape will be predicted. Figure 11 shows the calculated recovery time as a function of the impact energy for tapes of different widths. The tape would be damaged when the impact energy was 500 J and the width of the tape was 4 mm. Generally, the recovery time increases with impact energy in a similar manner for different tape widths. The wider the tape, the shorter the recovery time. This can be understood from the fact that wider tapes possess a larger area of heat exchange. In addition, the extra width of the tape acts as a thermal sink and the peak temperature is lowered. This suggests that incorporating wider metallic coats into an HTS tape can be effective at increasing the speed of recovery. The influence of the thickness of the copper stabilizers was also investigated with the model and cross-validated against results in the literature. Calculated results of peak temperature versus impact energy are compared for HTS tapes with different stabilizer thicknesses in Figure 12. The peak temperatures show a linear trend with impact energy, and the slope decreases with increasing copper thickness. The reason for this is that thick copper stabilizers act as a thermal sink and contribute to an enthalpy increase. As mentioned above, the HTS tape can be irreversibly damaged when the peak temperature exceeds 530 K. From Figure 12, the safe margin of impact energies can be identified for different types of tapes. For example, the stabilizer has to be thicker than 40 µm when an impact energy of 500 J is applied. Based on the peak temperature, the recovery time as a function of the impact energy was calculated for different copper thicknesses, as shown in Figure 13.
Interestingly, unlike the peak temperature, the recovery time decreases with increasing copper thickness, particularly for large impact energies. For thinner tapes, the peak temperature is higher, but the heat capacity is smaller; as a result, the cooling efficiency of thinner tapes is much improved. This observation is consistent with results reported in [34]. The results suggest that thinner tapes, despite being more easily damaged, help to reduce the recovery time. To provide a more intuitive picture of the analysis, a two-dimensional contour plot is shown in Figure 14. The two axes represent the width and thickness of the tape. The contour lines represent where the margin temperature of 530 K is reached. The three lines in Figure 14 represent the safety margin of the tape when the impact energy is 300 J, 400 J and 500 J. For example, when the impact energy is 500 J, tapes with geometry below the black line are probably damaged. Below the 530 K contour line is the predicted dangerous zone where the tape will be damaged, and above it is the predicted safe zone. As the impact energy increases, the dangerous zone in the width-thickness plane expands. The result shows that selecting an appropriate width and thickness of the tape within the safe zone according to different impact energies can avoid damage to the tape.
Figure 14. Curves of the 530 K temperature contour as a function of the width of the tape and the thickness of the copper layer in the tape when the impact energy is 300 J, 400 J and 500 J.

Conclusions

In this work, the quench recovery characteristics of a high-temperature superconducting (HTS) tape were investigated experimentally and numerically. The resistivity of an HTS tape was measured throughout the quench and recovery processes. The influence of the impact current on the recovery time was discussed in detail. Correspondingly, a numerical model was developed with temperature-dependent material properties taken into account. The calculated recovery time fits well with the experimental results, which justifies the effectiveness of the model. The model also suggests that the commonly used adiabatic assumption for the quench process is not proper, since it introduces an obvious error into the calculated recovery time. In addition, the impact energy margin that can result in irreversible damage of HTS tapes was calculated and validated by repeated damage experiments. Based on the model, the influences of the geometry of the tape on the quench recovery time and the margin of impact energy that may damage the tape were discussed. Further study can be done to improve the model by including dynamic boundary conditions to describe the quench process.
SOLVABILITY OF A CLASS OF COMPLEX GINZBURG-LANDAU EQUATIONS IN PERIODIC SOBOLEV SPACES

This paper is concerned with the Cauchy problem for the complex Ginzburg-Landau type equation u_t = (δ_1 + iδ_2)Δu − iμ|u|^{2σ}u in (0, ∞) × R^d, where δ_1 > 0, δ_2, μ ∈ R and d ∈ N. Existence and uniqueness of spatially periodic solutions to the problem are established in a space which corresponds to the Sobolev space on the d-dimensional torus when 0 < σ < ∞ (d = 1, 2) and 0 < σ < 1/(d − 2) (d ≥ 3). The result improves the case p = 2 of the result in the space W^{1,p} given by Gao-Wang [2, Theorem 1], in which it is assumed that d < p and σ < p/d.

1. Introduction.

In this paper we consider the Cauchy problem for a class of complex Ginzburg-Landau equations

∂u/∂t = (δ_1 + iδ_2)Δu − iμ|u|^{2σ}u, (t, x) ∈ (0, ∞) × R^d,

Since spatially periodic functions can be regarded as functions on the d-dimensional torus T^d, the problem (CGL)_0 can also be translated to the problem on T^d. In this paper we shall use W^{m,p}_per(R^d) instead of W^{m,p}(T^d) (the Sobolev space on T^d), because our interest is the solvability of (CGL)_0 on R^d and our treatment is based on functions on R^d. Then there exists a unique local solution of (CGL)_0. (II) Assume further that Then there exists a unique global solution of (CGL)_0. We focus on the case p = 2. If p = 2, then d satisfying (1) is restricted to d = 1, and the combination of σ and d satisfying (2) exists only when σ = 1 and d = 1. In other words, Gao and Wang have not dealt with the case d ≥ 2 or σ = 1. The purpose of this paper is to relax the conditions (1) and (2) when p = 2 and to extend the restriction from σ ∈ N to σ ∈ R_+ := (0, ∞). Here we define local and global solutions of (CGL)_0 as follows. In particular, if T = ∞, then u is said to be a global solution of (CGL)_0. Now we state local and global existence of solutions to (CGL)_0 in the following two theorems, respectively. Theorem 1.2 (Local existence). Let u_0 ∈ W^{1,2}_per(R^d) and δ_1 > 0. Assume that σ satisfies Then there exists a unique local solution on [0, T) of (CGL)_0 for some T > 0. Also, let u and v be local solutions on Then where L and ω are positive constants depending only on δ_1, δ_2, μ, σ, d, T and M. This theorem can be regarded as a limiting case of the results for (CGL)_κ as κ ↓ 0. Indeed, in [3,7,9,10] global existence of solutions to (CGL)_κ was established under condition (a) or the following (b)_κ: To prove Theorems 1.2 and 1.3 we prepare fundamental estimates for (CGL)_0, which are given in Section 2. In Section 3 we first construct a mild solution of (CGL)_0 and then prove local existence of solutions (Theorem 1.2). Section 4 is devoted to the proof of global existence (Theorem 1.3). In Section 5 we give some remarks on the inviscid limit (as δ_1 ↓ 0) of solutions to (CGL)_0.

2. Preliminaries.

For δ_1 > 0 and δ_2 ∈ R we define G_t as follows: First we show that G_t plays a fundamental role in solving the problem. The following three lemmas can be proved by direct calculation. Lemma 2.2. Let G_t be as in (5) with δ_1 > 0. Then for every t > 0, Now we define * by the convolution with respect to spatial variables: In the next lemma we describe that (δ_1 + iδ_2)Δ generates an analytic semigroup on W^{0,2}_per(R^d) which can be represented by the convolution operator. Lemma 2.4. Let G_t be as in (5) with δ_1 > 0.
Define T(t) by T(0) := I and, for t > 0, by convolution with G_t. Then T(t) is a uniformly bounded C_0 semigroup on W^{0,2}_per(R^d) and its infinitesimal generator is given by (δ_1 + iδ_2)Δ with domain W^{2,2}_per(R^d). Moreover, T(t) can be extended to an analytic semigroup. Proof. First we show that T(t) is a uniformly bounded C_0 semigroup and that T(t) is differentiable for t > 0. From (7) and (10) it follows that T(t) is uniformly bounded; note that T(0) = I. Using the Fourier transform, we can verify the semigroup property. Next, in the same way as in the proof of [1, Theorem 7.1], we can show that for δ_1 > 0, (9) and (10) give the required estimates. Thus by Pazy [11, Theorem 2.5.2] we obtain the conclusion. We can also obtain the following two lemmas for W^{1,2}_per(R^d). Moreover, all the above injections are continuous. Here * denotes the convolution with respect to spatial variables and G_t is defined by (5). Lemma 3.2. Let u_0 ∈ W^{1,2}_per(R^d) and assume that the condition stated above holds. Then there exist T > 0 and a unique mild solution on [0, T] of (CGL)_0. On the other hand, we see from (7) and (10) that further estimates hold. Combining these inequalities and using Lemma 2.5, we obtain the desired bound. In view of (13) and (14), and arguing as in the proof of (14), we see from Lemma 2.7 that, taking the supremum on [0, T], we have (15). Therefore, if we take T sufficiently small, then the mapping S is a contraction on B_R. Consequently, the contraction mapping principle yields that there exists a unique solution of (11) in B_R. Finally we show uniqueness of mild solutions to (CGL)_0. Let u and v be two mild solutions on [0, T] of (CGL)_0. Then we can see from Lemma 2.7, (7) and (8) that an integral inequality holds. Combining (17) with (16), and applying Gronwall's inequality, we obtain that ‖u(t) − v(t)‖_{1,2} ≤ 0. Therefore we conclude that u = v on [0, T]. Local solutions. We show that the mild solution of (CGL)_0 is a local solution of (CGL)_0 in the sense of Definition 1.1. Proof of Theorem 1.2. Assume further that the assumption in Theorem 1.3 is satisfied. Then there exists a constant L_0 > 0, depending only on δ_1, δ_2, μ and σ, such that the stated bound holds. Proof. Multiplying the equation in (CGL)_0 by u(t), integrating it over (0, 1)^d, taking the real part and using integration by parts, we obtain

(1/2) d/dt ‖u(t)‖²_{0,2} + δ_1 ‖∇u(t)‖²_{0,2} = 0.

Thus we obtain (23). To prove (24) we set up the corresponding energy quantities. Multiplying the equation in (CGL)_0 by −Δu(t) and by |u(t)|^{2σ}u(t), in the same way as in the proof of (23), we obtain two further estimates, where C_σ := σ/√(2σ + 1) > 0. By virtue of the conditions on δ_1, δ_2, μ and σ in Theorem 1.3, we can choose k ≥ 0 satisfying (28). Combining (28) with these estimates, where C_2 is a positive constant satisfying ‖w‖_{0,2σ+2} ≤ C_2 ‖w‖_{1,2} for w ∈ W^{1,2}_per(R^d), we arrive at the desired inequality (24). We are now in a position to complete the proof of Theorem 1.3. We see from the standard argument that u can be extended to a global solution of (CGL)_0. This finishes the proof of Theorem 1.3. Letting δ_1 ↓ 0 in (CGL)_0, we obtain the Cauchy problem for nonlinear Schrödinger equations (NLS). Here we point out some remarks in order. Consequently, we conclude that u is a global weak solution of (NLS).
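For reference, the mild-solution formulation underlying Lemma 3.2 and Equation (11) can be written in the standard Duhamel form. This is a sketch consistent with the notation above (G_t the kernel from (5), * spatial convolution), not a verbatim reproduction of the paper's Equation (11):

```latex
\[
  u(t) \;=\; G_t * u_0 \;-\; i\mu \int_0^t G_{t-s} * \bigl(|u(s)|^{2\sigma} u(s)\bigr)\, ds,
  \qquad 0 \le t \le T,
\]
% equivalently u(t) = T(t)u_0 - i\mu \int_0^t T(t-s)\,|u(s)|^{2\sigma}u(s)\,ds;
% a fixed point of this map is sought in the ball B_R via the contraction
% mapping principle, as in the proof of Lemma 3.2.
```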
Scientific Software Development Is Not an Oxymoron
Susan M. Baxter*, Steven W. Day, Jacquelyn S. Fetrow, Stephanie J. Reisinger

"Many scientists and engineers spend much of their lives writing, debugging, and maintaining software, but only a handful have ever been taught how to do this effectively: after a couple of introductory courses, they are left to rediscover (or reinvent) the rest of programming on their own. The result? Most spend far too much time wrestling with software, instead of doing research, but have no idea how reliable or efficient their programs are." —Greg Wilson [1]

As Greg Wilson's American Scientist article [2] circulated on the "bio-IT" e-mail lists and blogosphere this past winter, many of us sighed, groaned, and smiled in recognition. The field of computational biology crosses the span between engineering and science, a surprisingly (to some) large gulf that typically is uncovered in the process of developing scientific software. Why opine on best practices for scientific software projects now? Computational biologists are taking on increasingly important roles in this Internet-enabled, information-rich, high-throughput era of biology [3]. Analytics and algorithms must operate on disparate and relatively large datasets. Curation and peer review are essential to critical analysis of computational conclusions. Software applications are needed to aggregate, integrate, and manage data, tools, results, and discoveries. Computational biologists are involved as advisors to technical teams developing and maintaining long-lived data resources, as product owners for software development, as coding and algorithm experts, and as reviewers of proposals and manuscripts. Whether code is developed for use in a single laboratory or as part of a larger, multi-institutional project, there are best practices worth knowing and following. We are starting with the premise that scientific software development brings together different cultures. A "certified technology stack" might mean a robust n-tiered architecture to some and an expensive waste of resources to others. We want to avoid fanning controversy over interdisciplinary science [4,5] and misunderstandings inherent at the interface between engineering and science [6,7]. We hope to provide a common understanding so that we all, specialists and generalists alike, can work effectively on scientific software projects, increasing project efficiency, software longevity, user community acceptance, and translational impact. We see important similarities between the way scientists and software engineers approach and attack problems, which may provide a general framework for successful scientific software development. Scientists are taught the scientific method from the time they perform their first experiments. Similarly, software engineers are taught about the software development life cycle before they write their first "if" statement. By understanding similarities between these approaches, we can layer some practical methods from the software development life cycle onto computational biology projects to build a solid foundation for success. Two of us are card-carrying software engineers; two of us are formally trained as scientists. We are all battle-scarred veterans of large scientific software development projects, having worked in business, nonprofit, government, and academic settings. Many of those projects were successful; some were not.
We think that the best practices learned and employed on large scientific software projects can also instruct smaller development projects carried out by single-investigator laboratories or small teams. (In addition to the references cited, see Box 1 for a suggested library and for resources to improve scientific software development processes.) We define success as delivering a code base that produces consistent, reproducible results, is usable and useful, can be easily maintained and updated, and has a reasonable shelf life. We will also add that successful scientific software projects are usually fun; realizing this might expose how truly geeky we are.

Suggested Best Practices

To achieve success in scientific software projects, we propose a minimal set of guidelines for pragmatic practitioners, peer reviewers, and project leaders of small- (single-lab) to medium- (collaborative, noncommercial) sized projects. We debated, solicited advice, reread some of our favorite books [8,9], and took guidance from our editors, to boil down our experiences and this enormous topic to five recommended, stripped-down practices for successful scientific software development: 1) design the project up-front; 2) document programs and key processes; 3) apply quality control; 4) use data standards where possible; and 5) incorporate project management. We can trace project failures back to breakdowns in any one or more of these practices. We will next explain what we mean by each practice. Design the project up-front. Good scientists do not perform experiments before developing a hypothesis, then describing materials and methods to test that hypothesis. Similarly, before the first line of code is written, software projects should be proactively and thoughtfully designed. This does not necessarily require a voluminous tome, but it should answer two key questions: "What will the program(s) do?" and "How will the results produced by the program be verified?" The simplest design documents describe inputs, how those inputs will be transformed by the program(s), and outputs. Based on the purpose of the software, identifying the appropriate technologies or programming languages is a vital decision during the design phase. While this choice is typically driven, often mistakenly, by the current in-house expertise of the software developers, there should be careful analysis in addressing the problem with the most practical selection of technologies. For example, if ease of distribution is considered important, the platform-independent nature of Java may make the most sense; if the software deals with a great deal of text manipulation, Perl may be best suited; if speed of execution is essential, C or C++ may be the way to go. In addition to considering the built-in strengths of a particular language, most offer a vast array of canned libraries (whether included in the distribution or preexisting as an open source project) developed to handle all but the most arcane technological issues. It is at this design juncture that much time can be saved by not building software components that could instead be acquired for almost nothing through relatively minimal research. Additionally, plugging in trusted, reusable code bases lends credibility to the overall quality of the software and streamlines the testing phase. The team should develop test plans and create data to test their code.
In the development of test plans, it is also good practice to consider independent variables, such as how long the program might take to run on a certain platform, how it will work with a real-world-sized input file, or how well it will interface or interoperate with other programs that are not a part of your project. The design phase should also address software usability requirements. If the software under development will be used only by the programmer, usability might not be a large concern. However, as funding agencies emphasize dissemination, as collaborative teams aim to share tools, and as usage statistics help justify renewal of funding, usability should be a higher priority in scientific software development. Designing facile user interfaces, interactive feedback cycles, maintenance and release plans, or easy reuse of code or tools requires careful thought, due diligence, and resourcing up-front. Typically, the proposal writing or management approval process provides a mechanism to force project design. Before coding begins, projects can discover existing tools and data standards and articulate the planned functionality and testing of the software. No matter the scale of the software project, it is important to incorporate feedback from key stakeholders (thesis advisor, external advisory committee, etc.) in this process to ensure that the design meets expectations. Document programs and key processes. One of the foundations of scientific research is the lab notebook, where materials, methods, and results are recorded so that experiments can be repeated. Similarly, all computer programs and code bases should be well-documented, modular, and easy to read and follow even by users who did not write the program. Modularity can be a complex issue, but at a basic level it refers to coding in such a way that the overall task being performed is divided into small, discrete units of work. This design paradigm promotes reusability and flexibility [10]. A modest level of documentation might provide help through a user guide, information on how to compile and execute a program, and in-line comments describing program functions and modules. Use a quality control process. One cornerstone of good science is reproducibility of results. Similarly, being able to consistently reproduce the results of a computer program is the yardstick used to measure the validity of that program. Reproducibility requires three things: ensuring a program works the way it should (testing), knowing exactly what was used to produce the results (version control), and recognizing and tracking program bugs. Programs should be thoroughly tested according to the test plans developed in the design phase. Well-designed unit tests may be used to address whether a particular module of code is working properly, allowing testing to proceed piecemeal and iteratively throughout the development process. This enables bugs to be identified and handled early so as to avoid major problems during integration and final testing. Undeniably, computational biology projects are fluid: there are always newer, better data files and standards available, requiring continual updates to the code base. Consequently, it is critical to track exactly which version of the software and which set of input files and parameters were used to produce a specific set of results. This is especially important six months later, when the original programmer has moved on to another project.
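As a minimal illustration of the documentation and unit-testing practices described above (a generic sketch of our own; the gc_content function and its tests are hypothetical examples, not from the article):

```python
"""seqstats.py -- a small, documented, testable unit of work."""

def gc_content(seq: str) -> float:
    """Return the fraction of G and C bases in a DNA sequence.

    Raises ValueError for empty input so that bad data fails loudly
    instead of silently producing a misleading result.
    """
    if not seq:
        raise ValueError("empty sequence")
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)


# Unit tests, shown inline here; in practice they would live in a
# separate test module and run with `python -m unittest`.
import unittest

class TestGCContent(unittest.TestCase):
    def test_known_value(self):
        self.assertAlmostEqual(gc_content("GGCCAATT"), 0.5)

    def test_case_insensitive(self):
        self.assertAlmostEqual(gc_content("gcgc"), 1.0)

    def test_empty_input_rejected(self):
        with self.assertRaises(ValueError):
            gc_content("")

if __name__ == "__main__":
    unittest.main()
```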
Developers should use version control for both data and source code, tying results to specific versions. Subversion [11] and CVS [12,13] are freely available, open-source version control systems. Finally, confessing to and tracking known bugs should be encouraged, since bugs are to be expected in software products. Jira [14] and Bugzilla [15] are widely used issue-tracking tools. Beyond functional testing, quality can be addressed further through performance analysis using memory and bounds checkers (e.g., Valgrind provides basic debugging capabilities plus detailed profiling of memory use). These issues are typically overlooked during software development, as problems with memory leaks and poor memory management hide behind software functionality and may long go unnoticed. Apply data standards where possible. Disseminating and sharing results with the broader research community is critical and often provides the basis for new scientific progress. The same is true for computer programs. The inputs, outputs, and "results" of computer programs are often data files. Whether included as supplementary materials for a manuscript or as subtables in an enterprise-level relational database, scientific data should be supplied in accepted, standard formats wherever possible. Admittedly, biology is a fast-moving target. However, the increasing need to share, compare, and integrate data and tools is driving community-wide initiatives to standardize biological data formats [16,17]. As one example, the MGED Society has defined minimal sets of parameters to describe gene expression array datasets (MIAME), along with a data standard (MAGE) [18]. As a result of their lead and other work, shared repositories for microarray results are now available and evolving [19,20], and journals increasingly require deposition of supplemental data in them [21]. Software developers should research the availability of community-accepted data standards as inputs and outputs for their programs. Even if suitable data standards are not available, it is important to include documentation (metadata) describing the data. Metadata should explain the format (syntax) of the data as well as definitions and assumptions that allow the data to be interpreted or used in the proper context (semantics). Data standards ensure the ability to scale and integrate code bases, enable accurate and efficient code development, and reduce user and peer-reviewer frustration. Incorporate project management. In scientific research, principal investigators ensure that experiments are performed according to defined procedures, while making progress in the context of a schedule and a budget. For software development projects, a project manager performs a similar function. Principal investigators who are not themselves software engineers may find themselves filling a project manager role because they supervise people in their labs who write software. Project management for a modest algorithm-development project involving one or two programmers might involve informal design and code reviews, regular meetings to track progress against an established timeline, and review (and sign-off) of testing results. Larger, collaborative projects, however, can become hopelessly chaotic without more disciplined project management. A commonly used approach to managing larger projects is to break them into manageable subprojects, with a series of release cycles interleaved with user or stakeholder feedback.
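As one hypothetical way of tying results to code versions, inputs, and parameters, and of attaching metadata to outputs, a provenance sidecar could be written next to every result file. The sketch below assumes the project uses git (Subversion users would call svnversion instead); all file names are illustrative.

```python
import json
import subprocess
import sys
from datetime import datetime, timezone

def write_provenance(output_path: str, input_files: list[str], parameters: dict) -> None:
    """Record what produced a result: code revision, inputs, parameters, environment."""
    try:
        # Ask the version control system for the current revision.
        revision = subprocess.check_output(
            ["git", "rev-parse", "HEAD"], text=True
        ).strip()
    except (OSError, subprocess.CalledProcessError):
        revision = "unknown"  # still write the sidecar; a partial record beats none

    sidecar = {
        "output": output_path,
        "code_revision": revision,
        "inputs": input_files,
        "parameters": parameters,
        "python": sys.version,
        "created": datetime.now(timezone.utc).isoformat(),
    }
    with open(output_path + ".provenance.json", "w") as fh:
        json.dump(sidecar, fh, indent=2)

# Example: record how a (hypothetical) results.tsv was produced.
write_provenance("results.tsv", ["raw_counts.tsv"], {"scale": 1e6})
```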
A simple project website, wiki, or more sophisticated solutions such as XPlanner [22] and Basecamp [23], can be used to facilitate team communication, share project plans and documentation, and transparently manage development projects. In our opinion, the Scrum software development methodology [24] offers a practical way to iteratively manage medium-sized software projects. Examples of Successful Projects That Employ Best Practices Outside our own anecdotal experiences, we think there is growing evidence that software best practices can effectively meet real-life, scientific needs. We can point to heavyweight projects, such as the cancer Biomedical Informatics Grid (caBIG) [25], and to more modest, lightweight activities such as the Bioconductor project [26]. Specifically, the Bioconductor project has adopted practical techniques that are instructive for small software projects [26]. The Bioconductor project develops statistical software packages ubiquitously employed in biomedical research. This group recognized that reproducing computational research reported in the literature is usually hampered by poorly documented software packages. While scientific manuscripts now typically point to supplementary materials (usually data and computer programs) on the Internet, access to them is not always enough to replicate the research reported. This open-source project adopted the concept of a "vignette," which is a detailed and interactive document providing a textual description of software functionality [27]. This form of documentation, long regarded as a software best practice, has engendered quite a cult following in the scientific community. In this case, the ultimate goal of reproducible research has exposed software best practices as an enabler and not as a burdensome side effect. Reading back over this article, we recognize that there are many "shoulds" in our guidelines. In our defense, we write from our collective, heartbreaking experiences watching wheels reinvented, finding dead or unusable programs, and, worse, inheriting rancid and labyrinthine code bases. We are of the opinion that community adherence to the guidelines described here will increase the impact and usability of computational biology work, without placing undue burden on the creators of rapidly evolving scientific code bases.
GNSS AND PHOTOGRAMMETRY BY THE SAME TOOL: A FIRST EVALUATION OF THE LEICA GS18I RECEIVER Leica Geosystems recently introduced a multi-constellation GNSS sensor named GS18i. It can perform tilt compensation and has an integrated photogrammetric camera, allowing users to measure inaccessible features: this is called visual positioning. The Laboratory of Geomatics, at the University of Pavia, Italy, performed a first evaluation of the rover. Five accessible points were measured repeatedly with the pole held at different tilt angles; the total number of measurements was 2077. After moderate blunder detection, RMSE values are 12, 10 and 18 mm for the East, North and height components. Measurement quality is substantially independent of the pole's tilt angle. Moreover, ten points belonging to a building's façade were repeatedly measured by photogrammetry, through the integrated camera, from distances in the range between 4 and 12 meters. In total, 1436 measurements were acquired. After blunder detection, RMSE values are 45, 25 and 66 mm for the x, y and z components of a local Cartesian system. Measurement quality depends mildly on the object-camera distance. Despite a good overall accuracy, results show some surprising aspects: the high ratio between the planimetric components x and y, and the counterintuitive behaviour of the y dispersion, which decreases when the distance increases. While the present paper aims simply at being a first evaluation of the rover, future activities will deal with rigorous and controlled photogrammetric processing of the images and will also include simulations, in order to ascertain the role played by the various error sources involved. INTRODUCTION Leica Geosystems recently introduced a multi-constellation GNSS sensor named GS18i ("Leica GS18I," 2020). Together with the usual functionalities of a modern GNSS receiver, it can perform tilt compensation thanks to the onboard IMU (Inertial Measurement Unit). Moreover, it has an integrated photogrammetric camera (Figure 1), allowing users to measure inaccessible features. It can be said that it implements the integration of GNSS and photogrammetry; as a confirmation, the company speaks of visual positioning. Very interestingly, the controller's onboard software can orient the images by what is called, in photogrammetric jargon, direct sensor orientation. To do so, it integrates GNSS measurements, measurements coming from the accelerometers, and information obtained by image matching. The Laboratory of Geomatics, at the University of Pavia, Italy, had the equipment on loan from Leica Geosystems Italy for a first evaluation and then acquired it. The Laboratory conducted a rather extensive validation, which is partly illustrated in the present paper. RELATED LITERATURE Integrated surveying has been one of the most important trend topics of the last two decades. Combinations of GNSS receivers, INS systems, LiDAR devices and cameras were developed for supporting MMVs (Mobile Mapping Vehicles), UAVs (Unmanned Aerial Vehicles) or autonomous driving, creating new surveying opportunities; computer vision and artificial intelligence further improved the quality of the results achievable by these systems. The Leica GS18i receiver fits perfectly into this trend by combining a GNSS/INS system with a camera inside a single device. Preliminary analyses of the integrated use of a GNSS receiver and a multi-camera system were proposed in (Baiocchi et al., 2018; Cera and Campi, 2017) in what was called an "imaging rover".
They used this approach in an extensive series of tests with different morphological and geometric characteristics (archaeology, cultural heritage, geology, etc.), finding good results, comparable to those obtained by traditional techniques but with cost-effective reductions in logistics and time. The photogrammetric use of acquired images requires an accurate estimation of the six exterior orientation parameters: the 3D position and the attitude of the camera at each shutter click. In the Leica GS18i this is done by combining GNSS and INS data, respectively. While the quality of the 3D position is quite well known, thanks to the extensive experience gained in RTK positioning (El-Mowafy, 2000; Feng and Wang, 2008; Luo et al., 2020), the analysis of attitude still represents an open research topic. Receivers usually measure tilt (connected with attitude angles) by means of accelerometers, to determine the inclination, and an electronic compass, to establish the direction. Nevertheless, this solution presents some issues: the magnetometer inside the compass is influenced by the inclination, an on-site calibration is required, and the measurements can be influenced by local magnetic disturbances (Luo et al., 2018). To avoid the drawbacks mentioned above, the tilt compensation solution of the Leica GS18i utilizes precise IMU measurements from industrial-grade micro-electro-mechanical sensors; published tests show that this IMU-based tilt compensation is applicable at large tilt angles of more than 30 degrees, where a 3D positioning accuracy of 2 cm is still achievable. In the Leica GS18i, global position and attitude information are then combined with images to measure points in the so-called "visual positioning" (Schaufler, 2020). As the receiver was released in mid-2020, little literature is available on its performance. One available study presents the operating principle and tests the receiver under different configurations, camera-to-object distances and trajectories varying in length and geometry. The reported assessment shows high-precision results, with 2D and 1D RMS errors of 2.9 cm and 2.5 cm, respectively. AIM AND ORGANIZATION OF THE PAPER The purpose of this note is to validate the precision and accuracy of the photogrammetric measurements performed by the GS18i. As a baseline, results are shown for pure GNSS measurements of accessible benchmarks. In summary, the present paper validates two kinds of measurements: • Scenario 1 - the GNSS-NRTK measurements of accessible benchmarks, which were performed with the pole kept tilted and the compensation on; as already mentioned, such measurements are not the main focus of the present paper, but are analysed as a baseline; • Scenario 2 - photogrammetric measurements of inaccessible points, which are located on a façade of a building. They are obtained from the images acquired by the GS18i and from the exterior orientation parameters automatically determined by the receiver. The paper is organized as follows. Section 4 describes the sensor studied and its operating principle. Sections 5 and 6 describe the test site used and the way the reference coordinates were determined. Section 7 illustrates the two scenarios considered in this paper: measurement of accessible points with the rover mounted on a pole, and with the pole kept tilted; and measurement of inaccessible points located on a façade by means of the camera. Sections 8 and 9 present results for the two considered scenarios.
Finally, Section 10 discusses the main findings and gives some hints on the next planned activities. BASICS ON THE OPERATING PRINCIPLE OF THE LEICA GS18I The Leica GS18i sensor is a modern, multi-constellation GNSS receiver, capable of acquiring signals from all the systems available in Europe, namely GPS, GLONASS, Galileo and BeiDou. It is usually operated mounted on top of a pole, though it is still possible to place it on tripods by means of suitable adapters. It can perform tilt compensation: even if the pole is not vertical, when a point is measured, the system is able to compensate for the related deviation. This feature is based on the use of an IMU (accelerometers and gyroscopes, plus software procedures for the integration of instantaneous measurements), rather than inclinometers, which are the most widespread solution in similar instrumentation. The adoption of an IMU, instead of inclinometers, has several consequences. First, the quality of tilt compensation is claimed to be better. Furthermore, the measurement is a sort of dynamic process and tilt compensation is obtained by Kalman filtering; indeed, the pole must be moved to the desired arrangement and the measurement must be performed quite soon, before significant drifts arise; therefore, the measurement duration is not set by the user but determined by the management software, in order to maximize quality. Finally, data coming from the IMU can be used, in conjunction with GNSS observations, to dynamically estimate the instantaneous position and orientation of the sensor. This is exploited when the photogrammetric mode is operated, as illustrated in the next paragraph. The Leica GS18i sensor is equipped with an integrated ArduCam AR0134 camera ("Arducam AR0134," 2020), which is visible in Figure 1. It is a Bayer-pattern, global-shutter RGB camera with a 1.2 MP image resolution (1280 × 960). The pixel pitch is 3.75 μm and the focal length is 3.1 mm for the lens equipping the GS18i; however, the camera manufacturer offers several other options for the lenses. The resolution on the object is 12 mm at a 10 m distance. The camera is capable of acquiring several tens of frames per second but, when coupled with the GNSS antenna, it is used at a 2 Hz framing rate. The camera is calibrated, so that full photogrammetric use can be made of it. The EXIF file reports fundamental parameters such as the calibrated focal length and the position of the camera's principal point; concerning lens distortion, we argue that images are undistorted by the controller during the processing that happens just after the acquisition; when the images are downloaded onto a computer, as we did, they are declared undistorted. The working principle is the following. When the user is in front of some feature needing visual positioning, such as the façade of a building or a trench in a road, he starts the suitable procedure and then walks in front of the feature, taking care that the camera frames what must be measured. The system automatically acquires an image sequence; by means of GNSS and IMU observations, the images' exterior orientation parameters (EOPs) are directly determined. When the user stops the acquisition, the system needs some time to store the data and to refine the EOPs by means of tie points, which are automatically extracted, and bundle adjustment. Several examples of how the antenna can be used to acquire buildings, roads, or trenches are shown at ("Visual Positioning and Leica GS18 I," 2020).
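As a quick plausibility check of the quoted object resolution (our own arithmetic, not a manufacturer figure), the ground sample distance follows from the pinhole relation between pixel pitch $p$, object distance $D$ and focal length $f$:

$$\mathrm{GSD} = \frac{p \, D}{f} = \frac{3.75\ \mu\mathrm{m} \times 10\ \mathrm{m}}{3.1\ \mathrm{mm}} \approx 12.1\ \mathrm{mm},$$

which is consistent with the 12 mm resolution at 10 m stated above.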
After image storage and pre-processing, it is possible to perform photogrammetric measurements directly in the field, by means of the controller: the user can click on a feature that is visible in one image of the sequence, and the system will automatically search for it in other images and output the coordinates. Alternatively, it is possible to perform the measurements later, in the office. The system typically uses four or five images to perform measurements, though we saw examples with fewer or more; indeed, the driving criterion seems to be: all the images where the selected feature can be located with good quality. Also, the system seems quite effective in performing outlier rejection and in discarding wrongly matched features. We did not perform a systematic study of this aspect, but the observation of several examples led us to the mentioned conclusion. The user can anyhow correct the homologous point selection and enlarge the set of images used to perform a certain measurement. The user can even identify features fully manually in the various images. As an example, the image sequences acquired for our test were typically constituted by 30-35 frames, with a storage size of 9.5 MB and needing around 60 seconds to be processed after the end of the acquisition. TEST SITE A test site was set up at the Engineering campus of the University of Pavia. It is composed of points belonging to three categories: • five accessible topographic markers (over which a tripod can be set), which are shown in red in Figure 2 and are named 100, 101, 102, 103, 104; • four corners belonging to the reception building, shown in Figure 2 with the names 201, 202, 203, 204; they will not be considered further in this paper; • ten points belonging to the North-East façade of the same building; they are displayed in Figure 3 and have names 401-410. As Figure 2 highlights, several points are partially or almost completely covered by trees; moreover, point 100 is close to quite a high wall, which certainly generates a high level of multi-path. All in all, the selected test site is not ideal for receiving the GNSS signal, but it is representative of what a surveyor can meet in daily activities. Points on the façade were surveyed with a redundant topographic network; measurements were processed by least squares adjustment, and the estimated standard deviation values range between 5 and 7 mm. REFERENCE COORDINATES Reference coordinates of the benchmarks were determined in a very reliable and precise way. The five accessible points 100-104 were surveyed with a redundant, static GNSS survey. The local network was connected to the Leica Smartnet® network of CORS (Continuously Operating Reference Stations), supporting all the available GNSS constellations. After the adjustment, coordinates of the benchmarks were available with an uncertainty (standard deviation) of around 1.2 mm in East and North and 2.3 mm in height. More details will be given in another, more detailed paper, to be published soon. REPEATED MEASUREMENT OF BENCHMARKS For Scenario 1, three surveyors repeatedly visited points 100-104. Each surveyor visited all five points in sequence, without switching the rover off. At each point, he performed 10 measurements, leaning the pole in different directions and by different angles.
At the end of each round, the antenna was disconnected from the network, so that a new initialization was performed; in total, 2077 measurements were acquired for this scenario. For Scenario 2, repeated photogrammetric measurements of 10 benchmarks were performed. They belong to the façade marked in red in Figure 2 and are shown in Figure 3. The operators walked along the façade at different distances, ranging from 4 to 12 meters from the building. The lines followed by the operators are shown in Figure 2, just in front of the façade marked in red. The three surveyors were asked to acquire the façade 10 times for each of the five planned distance steps. They then measured the benchmarks, in the office. In total, 1436 point measurements were performed; each surveyor measured between 445 and 496 points; each benchmark was measured between 138 and 145 times; each acquisition distance has between 283 and 290 measurements. All measurements were performed in NRTK mode, by connecting the antenna to the Leica Smartnet network. More precisely, the iMAX mode was selected, a custom Leica mode similar to VRS (Takac and Zelzer, 2008). Photogrammetric measurements were performed in the office with the Leica Infinity program. The user selects a sequence and can then see thumbnails of the acquired images, as illustrated by Figure 4. He chooses a sort of master image and clicks on one point to be measured. The program matches the selected template in the neighbouring images and autonomously decides whether or not to keep an observation and whether or not to include an image; the user is anyway enabled to correct or discard observations and to include new images. Incidentally, Figure 4 shows the plastic strips we used to guarantee that the operator stayed at the planned distance. BASELINE ACCURACY OF TRADITIONAL GNSS MEASUREMENT OF THE ACCESSIBLE BENCHMARKS Though this is not the main focus of the present paper, results for Scenario 1 are shown, as they constitute a valuable baseline for further considerations. Indeed, Scenario 1 allowed us to assess the accuracy attainable in the considered area, with the described instrumentation and using corrections coming from the mentioned GNSS network. The word traditional is italicized because tilt compensation was on and the measurements' duration was managed by the controller rather than by the user. The first bin of the tilt-angle histogram is more populated than the adjacent ones because the operators were requested, each time they visited a point, to acquire the first measurements keeping the pole vertical. Figure 6 shows the box plot of the 3D error for individual points and for the whole dataset. In order to save space, only the 3D variable is plotted. The box plot highlights that benchmarks 100 and 102 present more dispersed measurements, and the latter shows an even more severe behaviour; the other benchmarks show comparable results. We were not surprised by what Figure 6 highlights, as point 100 has a wall just beside it (see Figure 2) and point 102 has a tree very close to it, whose crown protrudes significantly over the point. The overall histogram suggests that almost all the measurements are within the [0, 0.06] m range for the 3D residual, thus showing a very good performance: indeed, the 95th percentile is 5.5 cm. Nevertheless, outliers are clearly detectable; we know their origin, which is independent of the inherent behaviour of the receiver used. Table 2 reports descriptive statistics for the single components and for the 3D error.
The whole cleaned dataset is considered here, including all the points. The usual parameters are shown (min, max, mean and std); we also report the RMSE, i.e., the square root of the sum of the squared mean and the squared std: $\mathrm{RMSE} = \sqrt{\mathrm{mean}^2 + \mathrm{std}^2}$. While the std measures the dispersion around the average value, the RMSE measures the dispersion around the true value, which is 0 for residuals. Indeed, residuals were calculated as the difference between the NRTK-determined coordinates and the reference ones, which were measured with high precision and accuracy. RMSE values are around 1 cm for the planimetric components and 1.8 cm for the height. For the 3D distance, some indicators are missing. We could have shown them, of course, but their interpretation would be different. Indeed, the single components are supposed to be normally distributed, while the 3D distance clearly has another distribution, as the histograms reported in Figure 7 confirm. For the single components, the RMSE can be interpreted as the half-width of the interval having 68.27% probability; in analogy with that, for the 3D error, we calculated the upper limit of the interval having 68.27% probability (the lower limit was set to 0); the corresponding value, which is comparable, to a certain extent, to the RMSE, is 2.4 cm and highlights that the analysed measurements are highly precise and accurate. Other percentiles can be extracted, of course: the 95th one has a value of 4.3 cm, meaning that 95% of the inlying measurements have a 3D error lower than or equal to the reported figure. The dependence of the uncertainty on the tilt angle was studied too. The set of pole angles, represented in Figure 5, was subdivided into 10 unequal intervals containing the same number of measurements. Measurements were partitioned accordingly and the 68.27th percentile of the 3D distance was extracted for each bin. Figure 8 illustrates the results: the abscissa reports the mean tilt angle of each bin and the ordinate the 68.27th percentile of the 3D error. The 95% confidence interval is also reported, which was obtained with the bootstrap statistical method. The reported curve is not easily interpretable and does not confirm what was expected, i.e., that the error increases with the tilt angle. On the contrary, it decreases for angles in the [5, 15] gon range. For angles beyond 25 gons, a certain increase is indeed visible, but the RMSE figure is around 2.8 cm, which is not too far from the overall average value of 2.4 cm. It can also be observed that, in the range [0, 22] gons, the empirical results are compatible with the hypothesis that the 3D measurement error (at the 68.27% probability level) is lower than or equal to the overall average, 2.4 cm. All in all, the rover seems very effective in compensating for the pole's tilt. ACCURACY OF PHOTOGRAMMETRIC MEASUREMENTS The photogrammetric measurements of the points shown in Figure 3 were also assessed. Preliminarily, a coordinate conversion was performed and a local Cartesian reference system was adopted. The new x axis is horizontal and parallel to the surveyed façade; it increases moving rightwards. The y axis is vertical and parallel to the façade, too; it increases upwards. The z axis is defined so as to form a right-handed coordinate system and increases when moving from the façade to the rover.
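For readers who want to replicate the error statistics described above, the following sketch shows one way to compute the 68.27th percentile of the 3D error and its bootstrap confidence interval with NumPy; the array names and the synthetic residuals are illustrative, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(42)

def percentile_with_bootstrap_ci(errors_3d, q=68.27, n_boot=10_000, ci=95):
    """Return the q-th percentile of the 3D errors and its bootstrap CI."""
    point = np.percentile(errors_3d, q)
    # Resample with replacement and recompute the percentile each time.
    boot = np.array([
        np.percentile(rng.choice(errors_3d, size=errors_3d.size, replace=True), q)
        for _ in range(n_boot)
    ])
    lo, hi = np.percentile(boot, [(100 - ci) / 2, 100 - (100 - ci) / 2])
    return point, (lo, hi)

# Illustrative use on synthetic residuals (metres); real data would come
# from the NRTK-minus-reference differences described in the text.
e = rng.normal(0, 0.012, 2000)
n = rng.normal(0, 0.010, 2000)
h = rng.normal(0, 0.018, 2000)
errors_3d = np.sqrt(e**2 + n**2 + h**2)
print(percentile_with_bootstrap_ci(errors_3d))
```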
Figure 9 shows the histograms of the residuals for the whole dataset, including all the users, all the points and all the distances (4 to 12 meters). In order to make low-populated bins visible, a logarithmic scale was set for the ordinate axis. Because of the properties of the logarithm function, the bin counts were incremented by 1; the lowest visible bins thus have a count of 1. As outliers are clearly present, they were filtered by applying the same methodology described in Section 8. Out of 1436 measurements, 137 were discarded, corresponding to 9.5% of the total. Figures and results shown from here on relate to the inliers only. An exploratory analysis was first performed on the residuals, which were obtained by subtracting the reference coordinates from the measured ones. Figure 10 reports the scatter plot for point 401: the left-hand figure shows the x-y components; the right-hand figure displays the x-z plane. Measurements acquired by all the operators and at all the distances are merged; nevertheless, dots are coloured according to the rover-façade distance (between 4 and 12 meters, with steps of 2), as the legend shows. We could ascertain that the performance of the three operators involved is comparable, while there is a moderate dependence on the distance, as will be shown later in the paper. Figure 10-left highlights that the dispersion in x is higher than in y; Figure 10-right shows that the dispersion in z is greater than in x. This is a general behaviour, as confirmed by Figure 11, which relates to point 410, and by Figure 12, which refers to the whole dataset. Boxplots were also generated for all the points; in the following, only a couple of examples are presented. Figure 13 is for point 404, showing a behaviour comparable to the general one in terms of measurement dispersion, which is reported by the box's height: x is more dispersed than y, and z more than x. There are exceptions, of course, and point 409 is such an example: as Figure 14 illustrates, the dispersion of the three components is approximately the same. Descriptive statistics were determined for all the inlying measurements, as reported by Table 3. One question could be raised: why have overall accuracy indices been extracted, when it is well known that the quality of photogrammetric measurements depends linearly on the camera-object distance? We will certainly investigate this aspect in further papers. At the same time, we do not think that, in practical scenarios, users will be able to guarantee that the distance is constant. In order to take advantage of the GS18i rover, they must be free to measure any point within the recommended range, from 4 to 10 metres. As a last investigation, the dependence of the RMSE on the camera-object distance was investigated. Measurements were grouped according to that parameter and the quality parameters were extracted as before, for each of the available values: 4, 6, 8, 10 and 12 metres. Figure 15 shows the determined curves and highlights some interesting, partly counterintuitive phenomena. RMSE(x) and RMSE(z) increase with the distance, even if the curves flatten or decrease between 10 and 12 metres. On the other hand, RMSE(y) decreases with the distance, and this is surprising. DISCUSSION, CONCLUSIONS AND FURTHER ACTIVITIES A first evaluation of the Leica GS18i rover has been performed. A test site has been created at the University of Pavia, Italy. It includes 5 accessible points and 10 points belonging to a façade.
Their reference coordinates have been determined with accurate and redundant surveying methodologies. The test site is highly demanding, due to the presence of multi-path sources and obstructions. The accessible points have been determined by ordinary NRTK measurements, in which the pole was variously tilted. In total, 2077 measurements were acquired and processed. The rover proved to be highly capable of compensating for the tilt; overall, average accuracies are 12, 10 and 18 mm for the East, North and height components, in terms of RMSE values. Points on the façade have been measured by photogrammetry. A whole set of camera-object distances has been considered, ranging from 4 to 12 metres. Average RMSE values are 45, 25 and 66 mm for x, y and z (with respect to a local reference system), including all the distances and all the points. In general, we think this is sufficient for the scope of visual positioning. Undoubtedly, some aspects are difficult to explain in terms of the general photogrammetric rules. Accuracy is significantly different for x and y. Moreover, RMSE(y) decreases when the distance increases, while the other two components behave as expected. Such tricky aspects will be investigated by performing a full photogrammetric processing of the acquired images. Indeed, the GS18i rover performs photogrammetry in a sort of black-box style. It is said that the camera is calibrated, but the parameters of the model are not accessible; downloaded images are said to be undistorted, and so they seem, but we could not check that so far. Also, it is said that the controller extracts tie points and performs bundle adjustment, but no information is given about this process. Finally, points were measured in the office in the standard way: the user clicks on one feature in one image and the program matches it in several others. For the present paper, the matches performed by the software supplied by the manufacturer were not checked, but simply accepted: we can only assure that the point identification performed by the user was careful. In the next activities, a full photogrammetric processing will be performed. Several other GCPs will be measured on the façade and the acquired images will be oriented by a usual, full bundle adjustment. Moreover, if necessary, detailed simulations will be performed, aiming at two main goals: • to ascertain the actual error budget, i.e., the weight of the various error sources: direct measurement of the exterior orientation, matching of homologous points, consistency and quality of tie points; • to check whether the obtained measurement quality is the highest attainable given the available instrumentation, or whether it can be improved via, e.g., refined camera calibration, improved tie-point extraction, or different handling of the on-the-fly bundle adjustment. ACKNOWLEDGEMENTS The paper has been prepared within the frame of the CE4WE (Circular Economy for Water and Energy) research project, ID-1139857, funded by the Lombardia Region, Italy. Leica Geosystems Italia is acknowledged here (namely Eng. Filippo Quadranti and Eng. Davide Parmigiani) for lending us the rover we used for the first part of the test. Further activities, which are also included in the present paper, were performed with our own sensor, after we bought it. Three students at the University of Pavia performed the measurements, within the activities related to the preparation of their bachelor dissertations. They are Marco Raniolo, Davide Lodigiani and Alessandro Filippi.
They proved to be enthusiastic and highly committed. The students' activity was carefully supervised by the authors and also by Paolo Marchese, a technician of the Laboratory of Geomatics at the University of Pavia. He is gratefully acknowledged too.
Squeeze and multi-context attention for polyp segmentation Artificial Intelligence-based Computer Aided Diagnostics (AI-CADx) have been proposed to help physicians reduce the misdetection of polyps in colonoscopy examinations. The heterogeneity of a polyp's appearance makes detection challenging for physicians and AI-CADx. Towards building better AI-CADx, we propose an attention module called Squeeze and Multi-Context Attention (SMCA) that re-calibrates a feature map by providing channel and spatial attention, taking into consideration highly activated features and the context of the features at multiple receptive fields simultaneously. We test the effectiveness of SMCA by incorporating it into the encoder of five popular segmentation models. We use five public datasets and construct intra-dataset and inter-dataset test sets to evaluate the generalizing capability of models with SMCA. Our intra-dataset evaluation shows that U-Net with SMCA and without SMCA has a precision of 0.86 ± 0.01 and 0.76 ± 0.02, respectively, on CVC-ClinicDB. Our inter-dataset evaluation reveals that U-Net with SMCA and without SMCA has a precision of 0.62 ± 0.01 and 0.55 ± 0.09, respectively, when trained on Kvasir-SEG and tested on CVC-ColonDB. Similar results are observed using other segmentation models and other public datasets. | INTRODUCTION Colon cancer can be fatal if not detected early and, as such, poses a huge risk to public health. It is the third most common cause of cancer in the US. 1 One of the earliest signs of colon cancer is the emergence of polyps in the colon and rectum. Early detection and removal of polyps can increase the survival rate to 90%. 2 To this end, colonoscopy is performed to detect the presence of colorectal polyps. The problem with manual inspection is that polyps can be misdetected because they have heterogeneous morphological characteristics. Hence, there is an ongoing effort to develop Computer Aided Diagnosis Systems (CADx) that limit the number of misdetections. 3 Artificial Intelligence (AI) based polyp segmentation is a paradigm of AI-CADx where an AI model is tasked with classifying the pixels that belong to polyps in images. Specifically, deep learning-based AI methods show promising results. 4 It is believed that AI-CADx will reduce the burden on physicians and lead to better patient care. It is also argued that CADx solutions could potentially be an alternative to manual screening. Therefore, it is of paramount importance that the accuracy and precision of deep learning-based AI-CADx are improved. As argued by Jha et al., 5 robustness and generalizability are two key aspects that need to be handled if we want CADx systems in clinical practice. Robustness is the ability of the CADx to perform reliably, within an accepted error margin, for all kinds of colonoscopic images. Generalization is the ability of the CADx to segment polyps reliably and accurately from images belonging to a wide range of image distributions. Addressing these two aspects is key to making reliable AI-CADx for polyp segmentation. Figure 1 shows the variations in appearance and morphological features of polyps across different datasets. Towards learning robust and generalizing features for polyp segmentation, we propose a module called "Squeeze and Multi-Context Attention" (SMCA), an attention module that re-calibrates feature maps based on attention weights computed from the aggregated polyp and context features at multiple receptive fields.
In doing so, we leverage the global context and the local context at multiple receptive fields to provide spatial and channel attention. In comparison, the Squeeze and Excite (SE) 10 module extracts only global context, through global average pooling, to provide channel attention. Attention gates (AG) 11 provide spatial attention by calculating attention weights from coarser signals for each feature in a feature map. Our module combines the channel attention mechanism from SE and the spatial attention mechanism from AG to compute attention weights that provide attention in both the channel and spatial dimensions. Additionally, we perform the channel and spatial attention at multiple receptive fields. A point to note is that SMCA is a self-attention module, whereas AG is an attention module. We evaluate the effectiveness of our module by incorporating it into multiple deep learning-based segmentation models, namely: U-Net, 12 Attention U-Net, 11 R2U-Net, 13 R2AU-Net 14 and ResUNet++. 15 In ResUNet++, we replace the SE module with the SMCA module. Towards robustness, we evaluate the five models with and without SMCA on four public datasets. Towards generalization, we construct inter-dataset test sets and evaluate the segmentation models with and without SMCA on them. Finally, we compare the attention maps of the convolution kernels of U-Net with and without our SMCA module using Grad-CAM++ 16 to qualitatively illustrate the differences in the feature representation. In summary, our contributions are as follows: • We propose an attention module called SMCA that takes global and local context at multiple receptive fields to re-calibrate the feature maps. • We check the performance changes due to SMCA by extensively evaluating five models with and without SMCA through five-fold cross validation on four public datasets, that is, Kvasir-SEG, CVC-ClinicDB, CVC-ColonDB and Kvasir-Sessile. • We check the generalizing ability through extensive inter-dataset evaluation, that is, we train our models with and without SMCA on Kvasir-SEG and CVC-ClinicDB. We then evaluate these models on four public datasets, which the models have not seen before. • We plot the attention maps of the convolution kernels of a U-Net with and without our SMCA. The comparison of the attention maps at multiple hierarchies highlights the differences in the feature representation of the two models. Figure 1. Sample images from Kvasir-SEG, 6 ETIS-Larib, 7 CVC-ColonDB 8 and CVC-ClinicDB 9 illustrating the variations in appearance and morphological features of polyps (shown with red circles). | RELATED WORK Previously, hand-crafted features were used to detect and segment polyps. In Reference 8, the authors proposed a three-stage process for polyp segmentation: region segmentation, followed by region description and, finally, region classification. The authors of Reference 17 used shape as a discriminatory feature instead of texture, reasoning that small polyps have predominantly elliptical shapes. In Reference 18, the authors proposed a dictionary learning approach, extracting hue histogram features and using a support vector machine to classify normal and polyp images. However, the limitation of hand-crafted features is that they do not generalize well to unseen images. Furthermore, the complexity of these proposed solutions greatly limits their applicability in real-world scenarios. The limitations posed by hand-crafted feature extraction have been circumvented by using Convolutional Neural Networks (CNNs).
CNNs have shown great success in the polyp segmentation task. In the MICCAI polyp segmentation challenge, most of the proposed models were based on CNNs, and the winning model was also a CNN. 19 Since U-Net 12 came into existence, it and its variants have been commonly used in medical image segmentation. 20 From the literature, it can be observed that the modifications proposed by authors have mostly concerned convolution operations, attention blocks and feature aggregation blocks. With respect to changes in convolution operations, Alam et al. 21 replaced the encoder of U-Net with ResNet-50 and Sun et al. 22 extracted better features by using dilated convolution. Towards the use of attention blocks, one of the earliest architectures was the Attention U-Net, 11 which incorporated attention gates to improve the segmentation of abdominal regions from CT images. Rundo et al. 23 introduced SE modules into U-Net to improve prostate zonal segmentation. Along similar lines, Jha et al. 15 created a variant of ResUNet 24 for polyp segmentation by introducing the SE module and attention gates. In Reference 25, the authors introduced a spatial attention layer into a U-Net for the task of polyp segmentation. In Reference 26, the authors introduced an attention module called "Focus Gate" that uses spatial and channel attention to calculate the attention weights; they demonstrated that their dual attention-gated U-Net, called "Focus Net", outperformed state-of-the-art models. With respect to feature aggregation blocks, Mahmud et al. proposed PolypSegNet, 27 where sequential depth dilated inception (DDI) blocks were used to aggregate features from different receptive fields. From the aforementioned works, we observed that using channel and spatial attention blocks and aggregating features from multiple receptive fields are beneficial for segmentation. The SMCA module was constructed with these two ideas in mind. Specifically, our SMCA module captures information at multiple receptive fields of a feature map by using average and max pooling with varying kernel sizes. The information extracted from the multiple receptive fields is passed through convolutional blocks to calculate spatial and channel attention weights per receptive field. The channel and spatial attention weights from multiple receptive fields are combined to calculate the final attention weights, which are used to re-calibrate the original feature map. The literature also revealed that the majority of existing works evaluated their models on test sets derived from the same datasets. [28][29][30][31] An exception to this trend is the recently published work of Jha et al. 5 They performed inter-dataset evaluation to prove the generalizing capability of their proposed model; however, they performed one-fold cross validation for all their experiments. We take this a step further and perform five-fold cross validation experiments to prove the advantages of incorporating SMCA into models to increase their generalizing ability. | Network architecture In this sub-section, we briefly describe the various models we considered for this study and illustrate through diagrams where we placed the SMCA module in the models. | U-Net architecture The architecture of the proposed U-Net with SMCA is shown in Figure 2. It consists of an encoder, a decoder and the SMCA module. The encoder extracts features through a series of encoding blocks. As information passes down the encoder blocks, low-level features are converted into high-level features.
An encoder block is a series of two convolution operations followed by SMCA and max pooling. Before information is passed to the next encoder block, SMCA enhances the extracted features. In the baseline U-Net, the SMCA module is not present. The number of kernels increases in subsequent encoder blocks as follows: 32, 64, 128, 256 and 512. The decoder block is similar to the encoder, with the additional operation that it concatenates features from the encoder with the upsampled features from the previous decoder block. The decoder kernels decrease in every subsequent decoder block as follows: 256, 128, 64 and 32. Figure 2. The five segmentation models used in our work. The original models do not have the SMCA module. In ResUNet++, we replaced the SE layer with SMCA. /1 and /2 represent stride 1 and stride 2. 2 × 2 and 3 × 3 denote kernel sizes. ×2 next to Upsample denotes the scale of upsampling. All upsampling operations are bilinear interpolation. | Attention U-Net architecture The architecture of the proposed Attention U-Net with SMCA is shown in Figure 2. It consists of an encoder, a decoder, the SMCA module and an additional attention gate. 32 Similar to U-Net, SMCA enhances the features extracted from an encoder block before the feature map is passed to the next block. The SMCA module is absent in the baseline Attention U-Net. The number of kernels increases after every encoder block as follows: 32, 64, 128, 256 and 512. On the other hand, the decoder kernels decrease as follows: 256, 128, 64 and 32. | R2U-Net architecture In this variation of U-Net, Recurrent Residual Convolutional Neural Networks (RRCNN) are introduced. The authors propose the inclusion of these two modules primarily for two reasons. First, the inclusion of residual units helps in training deep architectures, as it minimises the occurrence of vanishing and exploding gradients. Secondly, recurrent units ensure better feature representations arising from the accumulation of feature maps. This network achieved state-of-the-art results in skin lesion segmentation. 33 The encoder and decoder structures are shown in Figure 2. The number of kernels increases after every encoder block as follows: 64, 128, 256, 512 and 1024. On the decoder side, the kernels reduce with every decoder block as follows: 512, 256, 128 and 64. | R2AU-Net architecture In this variation of U-Net, the attention gates introduced in Attention U-Nets are used in R2U-Net. The inclusion of attention gates further strengthens the feature representation of the network. The encoder and decoder structures are shown in Figure 2. The number of kernels used in the encoder and decoder blocks is the same as in R2U-Net. | ResUNet++ architecture ResUNet++ is a segmentation model constructed to improve polyp segmentation performance. This model is built upon ResUNet. 24 The architecture has feature enhancement modules such as residual blocks, the SE module, attention gates and Atrous Spatial Pyramid Pooling (ASPP). 34 The SE layers are included after every residual block in the encoder. Additional skip connections are introduced to propagate information from the encoder blocks to the attention gates. The filters in the encoder section increase as follows: 32, 64, 128, 256 and 512. In the decoder section, the filters decrease with each decoder block as follows: 512, 256, 128, 64 and 32. Altogether, ResUNet++ has one stem block (see Figure 2), three encoder blocks and three decoder blocks. The final decoder block has an ASPP layer and a 1 × 1 convolution for channel reduction.
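To illustrate where SMCA sits in the encoder path, the following PyTorch-style sketch follows the description above (two convolutions, SMCA, then max pooling); it is our reading of Figure 2, not the authors' code, and it assumes an SMCA module with a compatible interface (an SE-style sketch is given in the next section).

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    """Two 3x3 convolutions, SMCA re-calibration, then 2x2 max pooling."""

    def __init__(self, in_ch: int, out_ch: int, smca: nn.Module):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.smca = smca          # re-calibrates the features before pooling
        self.pool = nn.MaxPool2d(kernel_size=2)

    def forward(self, x: torch.Tensor):
        features = self.smca(self.convs(x))
        return self.pool(features), features  # pooled output + skip connection
```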
| Feature enhancement modules We consider feature enhancement modules to encompass modules that manipulate feature maps through convolution operations or re-calibrate the feature maps by computing attention weights. In this section, we describe the various feature enhancement modules used in the five segmentation models. | Attention gates Attention gates were first proposed by Chen et al. 35 Since their introduction, several segmentation models have used them. In our work, three of the models (Attention U-Net, R2AU-Net, ResUNet++) use attention gates. The reason for using the attention mechanism is that it highlights the relevant information in a feature map while suppressing the irrelevant information. In doing so, the feature representation of the segmentation model is strengthened and, therefore, semantic information is preserved as information flows through the network. | ResNet block As more layers are added to a network, gradients may either vanish or explode. 36 This can result in the network not converging during training. To alleviate this problem, residual blocks have been introduced. Residual blocks create a short connection from the input that is added to the output. With this simple trick, the gradients flow properly during backpropagation, and vanishing and exploding gradients are prevented. Altogether, residual units are a combination of two convolution layers, Batch Normalization (BN), ReLU and a short connection. Residual blocks are used in ResUNet++. A diagram of the ResNet block is shown in the bottom right corner of Figure 2. | Residual recurrent block (RR block) While residual blocks typically short the input after two consecutive convolution layers, RR blocks create a short connection between input and output after every convolution layer. A diagram of the residual recurrent block is shown in the bottom right corner of Figure 2. In this diagram, two recurrent blocks are connected sequentially. | Squeeze and excite Formally, the SE layer is described as follows:

$$\hat{X} = w_{se} \otimes F(x), \tag{1}$$

where $F(\cdot)$ is a residual block parameterized by two convolutional layers $\phi_1$ and $\phi_2$, and $x \in \mathbb{R}^{C \times H \times W}$ and $\hat{X} \in \mathbb{R}^{C \times H' \times W'}$ are the input and output feature maps. $w_{se}$ are the excitation weights, computed as

$$w_{se} = \sigma\big(W_2\,\mathrm{ReLU}(W_1\,\mathrm{GAP}(F(x)))\big), \tag{2}$$

where ReLU is the rectified linear activation, GAP is the global average pooling operation and $\sigma$ denotes the sigmoid activation. The SE module re-weights the features across the channel dimension by applying GAP to the individual channels of the feature map. GAP reduces each channel to a scalar. The vector produced by GAP is fed to two consecutive linear layers parameterized by $W_1$ and $W_2$. The final sigmoid layer is used to compute the 'excitation' weights, which are used to re-weight the channels of the features as shown in Equation (1). The SE module provides channel attention by encoding the global context. Essentially, GAP reduces the features across the channel dimension to a vector of scalars, which represents the encoding of the global context. The vector of scalars is passed through a fully connected (FC) network with one hidden layer. The hidden layer, which is of lower dimension than the channel dimension, in conjunction with the sigmoid activation function, captures non-linear dependencies that exist across the channel dimension of the feature map. Through this process, features which are more important are scaled higher than features which contribute less to the segmentation task. The features are scaled along the channel dimension through the global context encoding.
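A minimal PyTorch sketch of the SE computation in Equations (1) and (2) may help make the mechanism concrete; this is an illustrative implementation, not the authors' code, and the compression ratio r = 2 follows the choice reported later in the paper.

```python
import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    """Channel attention from global context, as in Equations (1)-(2)."""

    def __init__(self, channels: int, r: int = 2):
        super().__init__()
        self.gap = nn.AdaptiveAvgPool2d(1)            # GAP: one scalar per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // r),       # W1: bottleneck of ratio r
            nn.ReLU(inplace=True),
            nn.Linear(channels // r, channels),       # W2: restore channel dim
            nn.Sigmoid(),                             # excitation weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.gap(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                   # re-weight the channels
```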
However, the SE module uses only one receptive field, dependent on the height and width of the feature map, to provide channel attention. It has been observed that combining different receptive fields boosts semantic segmentation performance, suggesting that both local and global context are beneficial for semantic segmentation. 37,38 Therefore, we argue that capturing only the global context to re-weight the features along the channel dimension is insufficient. We propose using global- and local-level context at multiple receptive fields to re-weight feature maps. To this end, we propose SMCA, which we discuss in the next section. | Squeeze and multi-context attention We propose a module that uses global and local context for re-weighting the feature maps. SMCA encodes the global context using the SE module and encodes the local context at multiple receptive fields using average and max pooling operations. Average and max pooling with various strides and kernel sizes capture the local context at various receptive fields. They are inexpensive, as they do not have any learnable parameters. In our experiments, we use strides of 2, 4 and 8 and kernels of size 2, 4 and 8, respectively, to capture the local context at increasing receptive fields. The average and max pooling operations are followed by a squeeze operation through 1 × 1 convolutions and by convolution operations through 3 × 3 kernels that capture the relevant channel and spatial information. The outputs of the 'Conv Squeeze Block' and the 'Conv Normal Block' (see Figure 3) are added. The channel interdependencies are captured by the 'Conv Squeeze Block' and the relevant spatial information is preserved by the 'Conv Normal Block', thus providing channel and spatial attention, respectively. Formally, we can define the SMCA module as

$$\hat{X} = w_{smca} \otimes F(x), \tag{3}$$

where $x$ is the input feature map, $\hat{X}$ is the output feature map and $w_{smca}$ are the multi-context attention weights used to re-calibrate the input map. $w_{smca}$ is composed of three spatial and channel attention weights computed at different receptive fields,

$$w_n = F_{sq}\big(AP(F(x), n), r\big) + F\big(MP(F(x), n)\big), \quad n \in \{2, 4, 8\}. \tag{4}$$

For the sake of brevity, we omit the parameterization notations for the residual block $F(\cdot)$ and the 'squeeze' block $F_{sq}(\cdot, r)$. $F_{sq}(\cdot, r)$ is a special convolutional block in which a bottleneck is introduced in the channel dimension by reducing it by a factor of $r$ using a 1 × 1 convolution. $AP(\cdot, n)$ and $MP(\cdot, n)$ represent the average and max pooling operations, where $n$ denotes the stride $n$ and kernel size $n \times n$. Finally, the multi-context weights are upsampled bilinearly by the corresponding factor to match the input feature map dimensions and combined:

$$w_{smca} = \sum_{n \in \{2, 4, 8\}} \varphi(w_n, n), \tag{5}$$

where $\varphi(\cdot, k)$ denotes the upsampling operation by a factor $k$. | Dataset details We have used the following datasets for training and evaluating our models (Table 1). • Kvasir-SEG contains 1000 images annotated by endoscopists from Oslo University. Each image contains at least one polyp. • CVC-ColonDB consists of 380 images from 15 colonoscopy videos. Each image shows at least one polyp. | Implementation details For our intra-dataset experiments, from each dataset, 10% of the images were randomly selected to construct the test set. The remaining images in the dataset were split into five portions of equal size. A leave-one-fold-out strategy was then used to construct the training and cross-validation sets.
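The split just described (a fixed 10% test set, then leave-one-fold-out over five equal folds) can be sketched in a few lines; the dataset size of 1000 matches Kvasir-SEG, while the random seeds and variable names are illustrative.

```python
import numpy as np
from sklearn.model_selection import KFold, train_test_split

image_ids = np.arange(1000)  # e.g., indices of Kvasir-SEG images

# Hold out 10% of the dataset as a fixed test set.
trainval_ids, test_ids = train_test_split(image_ids, test_size=0.10, random_state=0)

# Split the remainder into five equal folds; leave one fold out for validation.
kfold = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(kfold.split(trainval_ids)):
    train_ids = trainval_ids[train_idx]
    val_ids = trainval_ids[val_idx]
    print(f"fold {fold}: {len(train_ids)} train / {len(val_ids)} val / {len(test_ids)} test")
```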
We evaluated U-Net, Attention U-Net, R2U-Net, R2AU-Net and ResUNet++, each with and without SMCA, on these splits. | Loss function The choice of loss function is particularly crucial in polyp segmentation, as there is an imbalance between the number of positive class samples (polyp pixels) and negative class samples (background pixels). If the class imbalance is not considered in the loss function, the model may converge to a sub-optimal solution. Additionally, in medical applications, reducing false negative predictions typically takes precedence over reducing false positive predictions. Concretely, segmenting polyp pixels is more important than falsely segmenting non-polyp pixels as polyps. Therefore, several works have tackled the class imbalance problem. Yeung et al. 39 propose a unified asymmetric focal loss that prevents suppression of the gradients of classes that occur infrequently. Additionally, Ma et al. 40 perform a thorough analysis of the contribution of 20 loss functions on 4 segmentation tasks. The literature reveals that the Tversky loss 41 can weigh the influence of false negative class predictions over false positive class predictions when computing the gradients for model training. Therefore, we use the Tversky loss for our experiments. It is an asymmetric similarity measure between the predicted segmentation map and the ground truth map, and a generalization of the Dice similarity coefficient (DSC) and the Jaccard index. The Tversky loss is calculated from the mean of the Tversky index (TI). The Tversky index is calculated as

$$TI_i = \frac{TP_i}{TP_i + \alpha\,FP_i + \beta\,FN_i}, \tag{6}$$

where $i$ denotes the $i$th pair of predicted and ground truth segmentation maps, and $TP$, $FN$ and $FP$ are the true positive, false negative and false positive counts. $\alpha$ and $\beta$ are the weights associated with the false positive and false negative counts; $\beta > \alpha$ forces the model to improve the recall more than the precision, and vice versa. We set $\alpha = 0.4$ and $\beta = 0.6$ based on a grid search and use these values for all our experiments. Finally, the Tversky loss over a mini-batch of size $B$ can be defined as

$$\mathcal{L}_T = 1 - \frac{1}{B}\sum_{i=1}^{B} TI_i. \tag{7}$$

| Evaluation metrics The models are evaluated using DSC, mean intersection over union (mIoU), precision and recall. The metrics are computed as

$$\mathrm{Precision} = \frac{TP}{TP + FP}, \quad \mathrm{Recall} = \frac{TP}{TP + FN}, \quad \mathrm{DSC} = \frac{2\,TP}{2\,TP + FP + FN}, \quad \mathrm{mIoU} = \frac{TP}{TP + FP + FN},$$

where TP (true positives) is the total number of pixels correctly classified as polyp pixels, FP (false positives) is the total number of background pixels incorrectly classified as polyp pixels, FN (false negatives) is the total number of polyp pixels incorrectly classified as background pixels, and TN (true negatives) is the total number of background pixels correctly predicted as background pixels. | RESULTS In this section, we report the findings of our intra-dataset and inter-dataset experiments. First, we report the segmentation metrics of each model separately; to this end, we report each model with and without SMCA and the performance differences. Next, we report the results of our inter-dataset experiments, taking all the models together. The qualitative comparison of our intra-dataset experiments is shown in Figure 4. The qualitative comparisons of our inter-dataset experiments with training sets Kvasir-SEG and CVC-ClinicDB are shown in Figures 5 and 6, respectively. | Evaluation of U-Net The results of our intra-dataset experiments on U-Net are presented in Table 2. We observe that SMCA improves all the metrics for Kvasir-SEG, CVC-ClinicDB and Kvasir-Sessile. Notably, the DSC improves by 5.1%, mIoU by 8.8%, precision by 7.8% and recall by 2.3% for CVC-ClinicDB. SMCA also brings improvements on the Kvasir-Sessile dataset, which contains images of polyps (smaller than 10 mm) that are hard to segment.
On Kvasir-Sessile specifically, the DSC improves by 2.2%, mIoU by 3%, precision by 9.5% and recall by 9%.

| Evaluation of Attention U-Net

The results of our intra-dataset experiments on Attention U-Net are presented in Table 3. SMCA improves all metrics on all four datasets. The largest improvement is on CVC-ColonDB, with increases of 65% for DSC, 100% for mIoU, 86.4% for precision and 5% for recall. As with U-Net, we observe that SMCA improves performance on the Kvasir-Sessile dataset. Another observation is that Attention U-Net (with and without SMCA) performs worse than U-Net on Kvasir-SEG, CVC-ColonDB, CVC-ClinicDB and Kvasir-Sessile. Table 5 shows the results of our intra-dataset evaluation of R2AU-Net. SMCA yields a general improvement in segmentation metrics on all datasets. Similar to R2U-Net, the model trained on Kvasir-SEG and CVC-ColonDB shows notable improvements due to SMCA: the DSC, mIoU, precision and recall improve by 17.3%, 25.8%, 27.9% and 6.4% on Kvasir-SEG. We also observe that the recall of R2AU-Net with SMCA is almost on par with that of R2AU-Net without SMCA. Furthermore, the performance improvements on Kvasir-Sessile are negligible in comparison to the other three datasets.

| Inter-dataset evaluation

In this section, we report the results of our inter-dataset experiments, whose purposee is to further test the generalizability of models with our SMCA module when the test set is not derived from the same dataset. We use images of Kvasir-SEG and CVC-ClinicDB to construct our training sets, similar to Jha et al. 5 The images in these datasets are recorded with different imaging apparatus and contain imaging artifacts such as illumination changes, motion blurring and gastrointestinal artifacts; moreover, the shape and appearance of the polyps vary from dataset to dataset. A drop in segmentation performance is therefore expected. Table 8 shows the inter-dataset evaluation of the five models, with and without SMCA, trained on CVC-ClinicDB. Altogether, we report improvements for most of our models. For example, U-Net with SMCA shows 35%, 50%, 41% and 3% increases in DSC, mIoU, precision and recall when tested on CVC-ColonDB, compared to the baseline U-Net. One observation is that the inter-dataset performance of models trained on Kvasir-SEG is better than that of models trained on CVC-ClinicDB. We believe this is because the images of Kvasir-SEG have higher contrast than those of CVC-ClinicDB, and because the polyps in Kvasir-SEG are more diverse in size, shape, color and appearance; we conjecture that these attributes of the training set play a role.

| Choice of channel compression ratio

Choosing the correct channel compression ratio is important, as it is mainly responsible for re-weighting information across the channel dimension. We therefore performed experiments to find the ideal channel compression ratio r for our SMCA module, choosing U-Net as the baseline architecture and the Kvasir-SEG dataset for a five-fold cross-validation experiment. Based on the results in Table 9, we chose a channel compression ratio of 2 for all our intra-dataset and inter-dataset experiments.
| Summary of results

Looking at the quantitative results of the intra-dataset experiments (see Tables 2-6), we can draw the following observations: (i) SMCA improves performance when incorporated into five popular segmentation models; (ii) SMCA has a greater impact on larger models than on smaller models (see Tables 4 and 5 vs. Table 2); (iii) on average, all models perform best on Kvasir-SEG, followed by CVC-ClinicDB, CVC-ColonDB and Kvasir-Sessile; (iv) SMCA incorporated into ResUNet++ performs better than the baseline. Our results indicate that SMCA is a better attention module than the SE module. From the inter-dataset experiments, we can draw the following observations: (i) models with SMCA perform better than models without SMCA; (ii) models generalize better when trained on Kvasir-SEG than on CVC-ClinicDB; (iii) models with fewer trainable parameters perform better than models with more parameters.

| Discussion on intra-dataset evaluation

From the intra-dataset experiments, we conclude that models with SMCA show improvements in segmentation metrics, demonstrating that our module is versatile and can act as a plug-in module for various deep learning architectures. We see that lightweight models such as U-Net perform better on all datasets than models with more parameters (ResUNet++, Attention U-Net, R2U-Net and R2AU-Net). We believe this is because our training sets are small due to the five-fold cross-validation protocol; when training on small datasets, the chances of overfitting larger and deeper models are higher than for shallow models. 42 The authors of ResUNet++ 15 use augmentation schemes such as center crop, random crop, horizontal flip, vertical flip, scale augmentation, random rotation, cutout, brightness augmentation, and so forth, whereas we use only random vertical and horizontal flips. We therefore argue that using more augmentation methods would improve the performance of the larger models. Additionally, we observe that the boost in metrics that SMCA brings to larger models is greater than the boost on U-Net (see Table 4 vs. Table 2). We conjecture that SMCA counters the overfitting tendency by introducing a regularizing effect; this effect is more prominent in larger models, and therefore so is the improvement in segmentation performance due to SMCA.

| Discussion on inter-dataset evaluation

Inter-dataset evaluation is an important and necessary technique for testing the generalizing capabilities of models. Our work builds on the cross-dataset experiments of Jha et al. 5 We believe that inter-dataset evaluation is essential if we want to realize AI-CADx in clinical settings. Deep learning models perform poorly when the test set and training set have diverging image distributions, and we expect such divergence to be a common problem in the polyp segmentation domain, primarily because the images are recorded under different conditions (e.g., with different recording devices, different light sources, etc.). Furthermore, as mentioned previously, polyps appear in various sizes, shapes and appearances, and the experience of the physician performing the colonoscopy also affects the quality of the images. As such, performing inter-dataset evaluation should become a standard criterion to demonstrate
the generalizing capabilities of AI-CADx for polyp segmentation. Our work is a step forward in this direction: we perform inter-dataset evaluation to demonstrate the improvements in generalizing capability that the SMCA module brings to baseline models. From Tables 7 and 8 we see that SMCA improves the generalizing capabilities of all five segmentation models, indicating that the feature recalibration of SMCA is beneficial for learning robust features for polyp segmentation. We believe SMCA learns robust features for several reasons. First, the use of max and average pooling with different kernel sizes allows the model to extract highly activated features and the average activation of features simultaneously, over varying receptive fields. The highly activated features are primarily due to polyp pixels and the background information relevant for polyp segmentation: max pooling passes the most strongly activated polyp and background features, while average pooling passes an average of strongly and weakly activated features, which can be regarded as the context information of the polyp. Max and average pooling with large kernels pass large context regions around the polyp, while small kernels pass small context regions. An analogy for the working of SMCA is a physician inspecting the area around a suspected lesion to demarcate a polyp mass from the surrounding non-polyp tissue: the use of receptive fields of different sizes in SMCA is analogous to the physician inspecting both a large and a small area around the suspected lesion. A large area of inspection provides more context on the polyp's position relative to the colorectal surface, whereas a small area provides the detail necessary to distinguish a polyp lesion from the non-polyp colorectal surface. Second, the feature maps resulting from the multiple average and max pooling operations are passed through the 'Conv Squeeze Block' and 'Conv Normal Block', which provide channel and spatial attention, respectively. Third, the attention weights of the 'Conv Squeeze Block' and 'Conv Normal Block' computed at multiple receptive fields are added together to form the final attention weights. This allows both small and large polyp features, and their corresponding contexts, to contribute to the recalibration of the original feature map. We also observe that models trained on Kvasir-SEG show better inter-dataset performance than models trained on CVC-ClinicDB. We conjecture that the higher contrast and larger variation (in size, shape and appearance) of the polyp images in Kvasir-SEG, compared to CVC-ClinicDB, enabled the models to learn better generalizing features. Despite these differences in the training dataset, baseline models with SMCA show improvements in segmentation metrics, suggesting that SMCA induces robust representations.

| Visualizing the effectiveness of our SMCA module

The ultimate objective of AI-CADx for polyp segmentation is to improve clinical decision-making by using human intelligence in conjunction with AI-CADx. Further, with the latest advancements in human-in-the-loop annotation tools, generating annotated datasets has become easier. 43 The combined capabilities of AI and human intelligence may therefore lead to efficient AI and clinical workflows.
While generating annotated datasets has become more efficient, AI models are still considered black boxes. 44 As argued by Rundo et al., 45 one of the many challenges in deploying AI-CADx in clinical practice is the lack of interpretability and explainability. It is therefore of utmost importance to develop AI-CADx systems that are interpretable, so that they can earn the trust of physicians and patients alike. Interpretability is nevertheless ignored in many works on polyp segmentation. We believe the risks posed by AI-CADx deployed in the healthcare industry are far greater than in other industries: a false prediction can have life-threatening consequences. Understanding the decision-making process of machine learning models is therefore essential; it helps expose the pitfalls of deep learning models and find techniques to redress them, and it will enable research into the design and development of network architectures that are more reliable and generalize better. Our work is a step forward in this direction. Visualizing the feature representations of a model with and without SMCA offers better insight than simply reporting segmentation metrics. To this end, we use M3D-CAM's 46 implementation of Grad-CAM++ to visualize the gradient-weighted attention maps of the two convolution kernels at each 'Conv Block' (see Figure 2) in the encoder and decoder of U-Net. In Figure 7, we present a side-by-side comparison of the attention maps from the 'Conv Blocks' of the baseline U-Net and U-Net with SMCA; the qualitative comparison shows the difference in the learned representations of the two networks. We visualize the attention maps of the convolution kernels in the 'Conv Block' because each 'Conv Block' from the second layer onwards receives the recalibrated feature maps of the SMCA, and because the 'Conv Block' is present in both the baseline U-Net and U-Net with SMCA; it therefore serves as a fair point of comparison. One of the many challenges in segmentation is effectively retaining important semantic information, alongside high-level concepts, as information propagates through the network. The cascade of max pooling operations in CNNs results in high-level concepts being learned at the expense of granular information such as edges and color. However, preserving important low-level features alongside the high-level concepts can improve the precision and accuracy of segmentation maps. 47 Our qualitative analysis indicates that SMCA enables the CNN to preserve low-level features relevant for semantic segmentation in the deeper layers. The recalibration of the encoder features using max and average pooling operations of varying kernel sizes helps extract relevant polyp and context features at multiple scales, and computing spatial and channel attention weights from these features preserves important low-level semantic features while allowing the formation of high-level concepts. Our results indicate that this allows models with SMCA to produce more precise and accurate segmentation maps. Looking at the activation maps of the 'Conv Block' at Conv Layer 3 in Figure 7 for U-Net with SMCA, we observe that the convolution kernels receiving recalibrated feature maps activate on low-level semantic concepts occurring throughout the image.
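For readers who wish to reproduce such maps, below is a minimal hook-based sketch in the spirit of what M3D-CAM automates. For brevity it implements plain Grad-CAM rather than Grad-CAM++ (the two differ only in how the gradient-derived channel weights are computed), and the choice of target scalar is our assumption:

```python
import torch
import torch.nn.functional as F

def gradcam_map(model, layer, image, upsample_to=None):
    """Gradient-weighted attention map for one conv layer (plain Grad-CAM).

    model  : segmentation network returning a (B, 1, H, W) logit map
    layer  : the nn.Module ('Conv Block') whose maps we visualize
    image  : (1, 3, H, W) input tensor
    """
    acts, grads = {}, {}
    h1 = layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))
    try:
        logits = model(image)
        logits.sum().backward()                        # scalar target: total polyp evidence
        w = grads['g'].mean(dim=(2, 3), keepdim=True)  # channel weights from gradients
        cam = F.relu((w * acts['a']).sum(dim=1, keepdim=True))
        if upsample_to is not None:
            cam = F.interpolate(cam, size=upsample_to, mode='bilinear',
                                align_corners=False)
        cam = cam / (cam.max() + 1e-8)                 # normalize to [0, 1]
    finally:
        h1.remove(); h2.remove()
    return cam.squeeze().detach()
```

Calling this on each 'Conv Block' of the baseline and SMCA-equipped U-Nets yields maps comparable in spirit to those shown in Figure 7.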
The strong activations at Conv Layer 3 indicate that SMCA recalibrates the feature maps to preserve important low-level semantic concepts as information propagates through the network. In comparison, the attention maps of the baseline U-Net at Conv Layer 3 show limited activation, implying a loss of relevant semantic information in the deeper layers. Similarly, the activation maps at Conv Layer 4 of U-Net with SMCA show more activity than those of the baseline U-Net. At Conv Layer 5, both U-Net and U-Net with SMCA learn high-level concepts; however, U-Net with SMCA does so while preserving the low-level semantic information in the preceding layers. On the decoder side (see Conv Layers 6-9), the activation maps are more prominent for U-Net with SMCA and begin to resemble the final segmentation map from Conv Layer 7 onwards. We conjecture that this closer resemblance of the activation maps to the predicted segmentation map arises because the decoder can use the preserved low-level semantic features passed to it through skip connections, which in turn leads to more accurate segmentation maps.

| Limitations

The models do not generalize well to unseen image distributions: all models perform better when the test set and training set come from the same dataset. Although our module redresses this problem to an extent, more progress on generalization is needed. Self-supervision is an emerging area of research that makes models generalize better to unseen distributions, 48 and we think this learning paradigm has significant advantages that could expedite the implementation of AI-CADx in clinical settings. Furthermore, our work is a retrospective study, which is very different from a prospective clinical application: the images in the datasets were selected by expert gastroenterologists, whereas a prospective clinical use case would involve testing the models on colonoscopy videos. Finally, our training sets contain a polyp in every image, so our models are never trained on endoscopic images without any polyps.

| CONCLUSION

In this paper, we presented a novel module called SMCA. We incorporated SMCA into five segmentation models: U-Net, Attention U-Net, R2U-Net, R2AU-Net and ResUNet++, and extensively evaluated their performance with and without SMCA on four public polyp segmentation datasets (Kvasir-SEG, CVC-ClinicDB, CVC-ColonDB, Kvasir-Sessile). We report that models with SMCA perform better than the baseline models. To further test generalizing ability, we performed rigorous inter-dataset experiments: in the first, we trained all models, with and without SMCA, on Kvasir-SEG and tested them on CVC-ColonDB, CVC-ClinicDB and ETIS-Larib Polyp DB; in the second, we trained all models on CVC-ClinicDB and tested them on CVC-ColonDB, Kvasir-SEG and ETIS-Larib Polyp DB. Finally, to better understand the impact of SMCA on the features learned by the models, we rendered the attention maps from the convolution kernels of U-Net, with and without SMCA, using Grad-CAM++.

FIGURE 7: Visualization of the attention maps at each convolutional kernel in the bottlenecks. There are two attention maps at each layer because there are two convolution operations in each 'Conv Block' (see Figure 2). The attention maps with pink borders are from the U-Net with SMCA; the attention maps without borders are from the U-Net without SMCA.
The qualitative comparison further illustrates that models with SMCA learn features that preserve important semantic cues throughout the depth of the network, which partially explains why models with SMCA predict more accurate segmentation maps. In summary, SMCA recalibrates the feature maps through simultaneous spatial and channel attention, with the attention weights computed by extracting relevant edge and context features at multiple scales. Our results suggest that models with SMCA segment both large and small polyps better than their baseline counterparts, and that SMCA-based models generalize better, as demonstrated through our extensive intra-dataset and inter-dataset experiments. We expect SMCA to improve segmentation performance on other tasks where the objects to segment appear in different sizes, such as brain tumor segmentation. 49 In non-medical applications, SMCA may be beneficial for segmenting regions of interest in urban scenes, such as cars, traffic lights and pedestrians, 50 which also appear at different scales. As future work, we want to incorporate SMCA into the decoder network and analyze the changes in performance, and to analyze the performance changes from pre-training SMCA-based models through self-supervision.
RTFE: A Recursive Temporal Fact Embedding Framework for Temporal Knowledge Graph Completion

Static knowledge graph (SKG) embedding (SKGE) has been studied intensively in the past years. Recently, temporal knowledge graph (TKG) embedding (TKGE) has emerged. In this paper, we propose a Recursive Temporal Fact Embedding (RTFE) framework to transplant SKGE models to TKGs and to enhance the performance of existing TKGE models for TKG completion. Different from previous work, which ignores the continuity of TKG states in time evolution, we treat the sequence of graphs as a Markov chain transitioning from the previous state to the next. RTFE takes the SKGE to initialize the embeddings of the TKG, then recursively tracks the state transition of the TKG by passing updated parameters/features between timestamps. Specifically, at each timestamp, we approximate the state transition as the gradient update process. Since RTFE learns each timestamp recursively, it can naturally transit to future timestamps. Experiments on five TKG datasets show the effectiveness of RTFE.

Introduction

A temporal knowledge graph (TKG) is an extension of static knowledge graphs (SKGs) which introduces the time dimension. In SKGs, facts are considered to be time-invariant (Sil and Cucerzan, 2014). In reality, facts are not always true indefinitely: for example, the triple (Obama, President, United States) was true only from 2009 to 2016, while (Obama, married, Michelle) has been true since 1992. SKGs do not reflect such changes in facts over time. An example of a TKG is shown in Figure 1. Facts on social networks, e-commerce platforms and trading platforms also change over time. Therefore, TKGs have the potential to improve the performance of question answering, search, recommendation and prediction based on KGs (Huang et al., 2020; Garg et al., 2020). A TKG can be expressed as a set of quadruples (subject, relation, object, timestamp). Different from SKGs, which ignore the time attribute of facts, the facts of a TKG are distributed over timestamps and can reflect the dynamic change of entities and relations over time. Due to the limited coverage of KGs, TKGs are also incomplete; by completing a TKG, missing and potential knowledge under specific timestamps can be found. In recent years, much work (Bordes et al., 2013; Wang et al., 2014; Lin et al., 2015; Kazemi and Poole, 2018; Schlichtkrull et al., 2018; Sun et al., 2019; Zhang et al., 2020) has focused on KG completion via graph embedding. These efforts have yielded good results, but most of them focus on SKGs and require training on a large number of triples. However, the TKG under a certain timestamp is a sparse multi-relation graph (Esteban et al., 2016), so it is necessary to absorb information from other timestamps. Moreover, SKGE methods lack the modeling of the time attribute of relations and were proposed under the assumption that all facts occur at the same time, so they cannot reflect the temporal dependencies of facts. To handle these two problems, our RTFE passes parameters and features between timestamps in a recursive manner, which not only alleviates the sparsity problem of TKGs but also takes advantage of the continuity and relevance of facts.
Existing TKG completion methods (Dasgupta et al., 2018; Goel et al., 2019; Lacroix et al., 2020) follow the training pattern of SKGE: they shuffle facts (s, r, o, t) of different times randomly and learn all facts in a chaotic temporal order by a mini-batch gradient descent algorithm. In other words, they treat time merely as a parameter and ignore the correlations in time evolution. However, the early state may affect the later one, and later facts tend to depend on earlier ones. In particular, the state at t_i directly influences that at t_{i+1}; e.g., (Obama, Campaign, President, 2008) directly influences (Obama, Inaugurated as, President, 2009).

Figure 1: A toy example of a TKG, where solid edges represent observed edges and red edges represent new facts that occurred at that timestamp. Dotted edges represent missing or potential facts.

It has been verified that the chronological order of events can be used to improve the performance of link prediction (Jiang et al., 2016a; Jiang et al., 2016b). Based on this, we further find that the early training state can improve the later one if we train facts in their chronological order. In order to capture changes in the TKG's state transitions, we think of the TKG as a sequence of dynamic graphs, not as a whole graph labeled with time information. Besides, since new facts of future timestamps can be added, a TKG expands dynamically, and the graphs of new timestamps may still be incomplete. However, existing TKG completion methods provide no solution to complete unseen future graphs: in their training pattern, facts of all timestamps are trained jointly to complete the graphs that have already appeared, and models may need to be retrained on the facts of all timestamps when a new timestamp appears. In contrast, our RTFE embeds and completes each timestamp of the TKG in a recursive way. By using the information of previous timestamps, RTFE can be naturally extended to future timestamps during the state transition of parameters/features, and only needs to be trained on newly emerging facts, which is light and immediate. SKGE has been studied for many years while TKGE is still in its infancy. Problems encountered in SKGE (e.g., diverse relation patterns) can also occur in TKGE, so the advances of SKG research can be used to accelerate the development of TKGE if we bridge the gap between them. Our RTFE provides a way to migrate SKGE methods to TKGs while preserving their excellent performance. Further, existing TKG completion methods designed specifically for the characteristics of TKGs can also be enhanced using the training pattern of RTFE. To sum up, we make the following contributions: 1. We propose a training pattern to bridge the gap between SKGE and TKGE, so that state-of-the-art SKGE models can be used to accelerate the development of TKGE. 2. Existing TKGE models can be further enhanced with our framework RTFE after finishing their own regular training. 3. To the best of our knowledge, we are the first to deal with the TKG evolution problem (i.e., new future timestamps being added to TKGs) in the TKG completion task. 4. The experimental results on five TKG datasets show that RTFE preserves SKGE models' excellent performance, and that the predictive performance of state-of-the-art TKGE models is further enhanced using RTFE.

Problem definition

A temporal knowledge graph (TKG) can be represented as a sequence of graphs, i.e. G = {G_{t_1}, . . . , G_{t_n}}, where G_{t_i} is the set of quadruples that occurred at timestamp t_i, i.e.
G_{t_i} = {(s, r, o, t_i)}, where V is the set of G's entities with s, o ∈ V, and R is the set of G's relations with r ∈ R. We focus on the following task: given a training TKG G_train = {G_{t_1}, . . . , G_{t_n}}, infer the missing quadruples (s, r, o, t) in the test set G_test = {G_{t_1}, . . . , G_{t_n}} (i.e., assign high scores to true quadruples and low scores to false ones). As shown in Figure 1, missing facts with high probability are dotted.

Recursive Temporal Fact Embedding (RTFE) Framework

The state of a TKG changes as its entities and relations change over time. SKGE models fail to capture correlations during this state transition, and existing TKGE models for TKG completion capture them only implicitly. It can be observed that the TKG after the current change evolves from its closest former state, which is similar to a first-order Markov chain (given the state at the current moment, the state at the next moment is independent of the states at past moments). Inspired by Markov analysis, we use the time granularity of the TKG to discretely divide states. The basic model of RTFE can then be expressed as

S_{t_{i+1}} = S_{t_i} P_{t_i},    (1)

where S_{t_i} represents the state of G_{t_i} and P_{t_i} represents the probability transition matrix that transforms S_{t_i} into the state at t_{i+1}. A typical KG embedding learner uses its parameters θ and features X to represent the semantic information of a KG, so we approximate the state vectors as

S_{t_i} ≈ (θ_{t_i}, X_{t_i}).

The idea of RTFE is to dynamically adjust θ and X as the TKG changes while passing on the information of each timestamp graph. We simply assume the features and parameters satisfy the Markov property,

P(X_{t_{i+1}}, θ_{t_{i+1}} | X_{t_1}, θ_{t_1}, . . . , X_{t_i}, θ_{t_i}) = P(X_{t_{i+1}}, θ_{t_{i+1}} | X_{t_i}, θ_{t_i}),

where X_{t_i} and θ_{t_i} denote features and parameters at time t_i. RTFE does not specify a model, but rather a training method for TKG completion. Existing SKGE methods, as well as TKGE methods that follow the SKGE training pattern such as DE-SimplE (Goel et al., 2019) and TComplEx (Lacroix et al., 2020), can potentially be utilized as the embedding component. The RTFE framework is illustrated in Figure 2. In Section 3, we specify how RTFE uses SKGE models for TKG completion; in Section 4, we generalize RTFE to existing TKGE models to enhance their performance.

Preliminary training for static features

Instead of training from scratch, RTFE uses an SKGE as input to the first timestamp. To obtain the input features, the TKG is transformed into an SKG G_static, obtained by merging the facts of all timestamps:

G_static = {(s, r, o) | ∃ t_i : (s, r, o, t_i) ∈ G}.

Let the SKG embedding learner be parameterized by θ; it takes the knowledge graph G (the facts of G) and the features X (which can be predefined or randomly initialized) as inputs. We feed G_static and X to the learner and obtain the updated features X after training, which become the input to the first timestamp.

Learning each timestamp recursively

In TKGs, the parameters θ and features X should change with time (i.e., with the change of the TKG). Due to the continuity of facts, most facts are the same in adjacent timestamps, while only a small number change; discrete events influence the states of the surrounding entities, which may then produce new facts. Therefore, model parameters and features fitting a certain timestamp provide a good starting point for learning the next timestamp. Different from most neural network-based SKGE models (Schlichtkrull et al., 2018; Wu et al., 2019), which only update θ during training and leave the input features X unchanged, we let X be updated as well, to capture the temporal dynamics of entities and relations.
Therefore, in our framework RTFE, the model parameters θ and input features X are both updated, in a way similar to equation (1), during the state transition:

(θ_{t_{i+1}}, X_{t_{i+1}}) = (θ_{t_i}, X_{t_i}) P_{t_i},    (6)

where θ_{t_i} and X_{t_i} denote the state vectors of θ and X at time t_i, and P_{t_i} represents the probability transition matrix. To transform the state vectors at t_i into those at t_{i+1}, we approximate the state transition P_{t_i} as the gradient update process of learning G_{t_i} (i.e., updating according to the gradient of the loss function for several epochs):

θ_{t_{i+1}} = θ_{t_i} − α ∇_θ l(G_{t_i}; θ_{t_i}, X_{t_i}),   X_{t_{i+1}} = X_{t_i} − α ∇_X l(G_{t_i}; θ_{t_i}, X_{t_i}),    (7)

where α is the learning rate, l is the loss function defined by the specified embedding learner, and ∇_θ is the gradient of l with respect to θ. It must be pointed out that the state transition matrix in classical Markov analysis is fixed, so that analysis is generally applicable only to short-term prediction; here, the state vectors differ between states, and so does the gradient between them. Since the state vector is fixed within a specific state, a model can be established for each discrete state given by the time interval of the TKG, and the gradient can then be updated between states according to the difference of each state vector, continuing our framework. RTFE recursively trains each timestamp according to equations (6) and (7) and uses θ_{t_i} and X_{t_i} to test G_{t_i}. Since RTFE is trained and tested timestamp by timestamp, only the latest parameters and features need to be stored, which gives good scalability for large TKGs. The framework is illustrated in Figure 2 and the overall training and testing algorithm is shown in Algorithm 1. For graph neural network-based methods like RGCN (Schlichtkrull et al., 2018), we let the input features undergo gradient updates as well, so that they encode the information of each timestamp and enhance the information transfer between timestamps; in addition, a residual connection is added between the network inputs and outputs of each timestamp. For RDGCN (Wu et al., 2019), which was designed for entity alignment, in order to measure the plausibility of a triple (s, r, o) for SKG completion we design a distance function consisting of a type distance and a semantic distance, where X_E ∈ R^{|V|×d} and X_R ∈ R^{|V|×2d} denote the output entity and relation representations.

Enhancing TKGE models

Since existing TKGE models for TKG completion, such as DE-SimplE (Goel et al., 2019) and TComplEx (Lacroix et al., 2020), follow the training pattern of SKGE models (i.e., treating the TKG as a whole graph rather than a sequence of graphs), we can use them as the embedding learner of RTFE. Specifically, we treat their own training process as the preliminary training of RTFE: after a TKGE model finishes its own training, we use the obtained features and parameters as input to the learning of the first timestamp, and RTFE then trains the TKGE model recursively by equations (6) and (7).

Extensibility for future timestamps

Since RTFE embeds each timestamp recursively, transforming from the current state to the next, it provides a way to complete upcoming future timestamps. Specifically, given a sequence of observed graphs of a TKG, G_obs = {G_{t_1}, . . . , G_{t_n}}, and a sequence of upcoming future graphs, G_fut = {G_{t_{n+1}}, . . . , G_{t_{n+j}}}, we pre-train RTFE on G_obs and embed each timestamp recursively to obtain the latest features X_{t_n} and parameters θ_{t_n}. To complete G_{t_{n+1}}, we use equations (6) and (7) to obtain X_{t_{n+1}} and θ_{t_{n+1}} similarly.
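A minimal sketch of this recursive pattern, assuming a generic embedding learner whose loss l(G_{t_i}; θ, X) is defined by the chosen SKGE/TKGE model; `learner.features` is assumed to be an `nn.Parameter` holding X that is not already registered among the model parameters, and the per-timestamp epoch count is a placeholder:

```python
import copy
import torch

def rtfe_train(learner, graphs, lr=1e-3, epochs_per_ts=50):
    """Recursive timestamp-by-timestamp training, Eqs. (6)-(7).

    learner : pre-trained on G_static (preliminary training); exposes
              .parameters() (theta), .features (X) and .loss(facts).
    graphs  : [G_t1, ..., G_tn], each a tensor of (s, r, o) index
              triples for that timestamp.
    Returns one state snapshot per timestamp, used to test each G_ti.
    """
    opt = torch.optim.SGD(list(learner.parameters()) + [learner.features], lr=lr)
    snapshots = []
    for g_t in graphs:                      # chronological order
        for _ in range(epochs_per_ts):      # approximate P_ti by gradient steps
            opt.zero_grad()
            learner.loss(g_t).backward()    # l(G_ti; theta, X)
            opt.step()                      # theta, X transit to the next state
        snapshots.append(copy.deepcopy(learner.state_dict()))
    return snapshots
```

Extending to a future timestamp G_{t_{n+1}} amounts to running the inner loop once more on the new facts, without revisiting G_obs; only the latest state needs to be kept in memory.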
Then the graphs of G_fut can also be completed in this recursive way, without retraining on G_obs.

Datasets: We evaluate on five TKG datasets (cf. Goel et al., 2019); their details are given in the appendix.

Evaluation settings and metrics: For entity prediction, we used mean reciprocal rank (MRR) and Hits@1, Hits@3, Hits@10 as metrics, where

Hits@n = |{fact ∈ test set : rank(fact) ≤ n}| / |test set|.

For relation prediction, we used mean rank (MR) and Hits@1 as metrics, since the number of relations is small. The rank of a test triple is obtained by replacing its head/tail/relation with the remaining negative samples and then evaluating the score rank of the original triple among all replacement samples. Mean rank (MR) is the average rank of all test triples, and mean reciprocal rank (MRR) is the average of the reciprocal ranks. For our RTFE framework, a timestamp-by-timestamp train-test mode was adopted; the total test result is a weighted average of the per-timestamp test results, e.g. the final MRR is

MRR = Σ_i |G^test_{t_i}| · MRR_{t_i} / Σ_i |G^test_{t_i}|.

Baselines: We compared our framework RTFE to state-of-the-art TKGE models, including t-TransE. We also use SKGE models such as TransE, RotatE and HAKE (Zhang et al., 2020) as the embedding learner of RTFE to perform TKG completion. Finally, we use TKGE models themselves as the embedding learner of RTFE to show the gain in performance.

Entity prediction

In entity prediction, given a quadruple (s, r, o, t), we perform head entity prediction (i.e., predict the plausibility of (?, r, o, t)) and tail entity prediction (i.e., predict the plausibility of (s, r, ?, t)). The plausibility of (s, r, o, t) is ranked among all corrupted quadruples, with all true quadruples excluded according to TransE's filtering protocol. The experimental results are shown in Tables 1 and 2, where results marked (*) are taken from the reported results of Hyte and ATiSE.

Table 1: Entity prediction on continuous fact datasets: YAGO11k and Wikidata12k. Since the relations of these two datasets have a typical "one-to-many" nature, the performance of tail prediction is better than that of head prediction.

Table 1 shows that on the continuous fact datasets, both translation-based and graph neural network-based methods can be transplanted into RTFE, and the results are better than Hyte, indicating the generality and superiority of RTFE. Of the two TransE-based approaches, RTFE-TransE outperforms Hyte on all metrics (e.g., with an improvement of 15.0% in tail MRR on Wikidata12k), because RTFE takes advantage of the continuity of facts directly. RotatE and HAKE are the most advanced translation-based approaches, and RTFE-RotatE and RTFE-HAKE outperform the other methods, which demonstrates that our framework preserves the excellent results of these methods on SKGs. Besides, the performance of the state-of-the-art TKGE model TComplEx is enhanced by RTFE, which shows the gain from our recursive training pattern. Table 2 shows that on discrete event datasets, RTFE also significantly improves the performance of TKGE models. Moreover, on a dense dataset (i.e., with a small number of entities and a large number of facts) like GDELT, RTFE can take advantage of SKGE models such as HAKE.

Relation prediction

In relation prediction, given a quadruple (s, r, o, t), we evaluate the plausibility of (s, ?, o, t). The experimental results are shown in Table 3. RTFE-RDGCN_type outperforms Hyte on YAGO11k, which has only 10 relations (e.g., with an improvement of 12.5% in Hits@1), which implies that type information plays an important role in this task.
Since the number of relations in these two datasets is relatively small (10 and 24), the performance improvement after adding semantic information is not obvious (e.g., an improvement of 1.3% in Hits@1). Hyte performed well on Wikidata12k. This may be attributed to its SKGE training pattern, which helps capture applicable relation types between two entities from all the facts; in contrast, RTFE is trained timestamp by timestamp, so only the facts of the current timestamp and the information of the last timestamp are directly utilized. To provide RTFE-RDGCN with more training data about relations, we added an additional 30% of negative samples, obtained by replacing the relations of quadruples, into the negative sample set:

{(s, r′, o) | (s, r, o) ∈ G_t, (s, r′, o) ∉ G}.

We call this variant RTFE-RDGCN_rel, which improves the performance of relation prediction on Wikidata12k compared with RTFE-RDGCN (an improvement of 9.1% in Hits@1).

Extensibility validation

In order to verify the influence of the pre-trained static features on RTFE's completion of the entire TKG, we divide the timestamps into four time intervals and perform the pre-training of RTFE on each of them separately. The pre-trained static features of these time intervals are then used as inputs to RTFE to test entity prediction performance at all timestamps. The experimental results are presented in Figure 3. Although a complete SKG is not provided for pre-training, RTFE still retains a similar performance, which verifies the framework's extensibility for future timestamps. RTFE can thus be extended to future timestamps to some extent, without retraining former timestamps, which shows good lightness and immediacy.

Ablation study

In this subsection, we explore the effects of preliminary training and recursive training. "w/o pretrain" refers to RTFE without preliminary training for static features (i.e., only recursive training). The results are shown in Table 4.

Related work

In recent years, some work (Jiang et al., 2016a; Esteban et al., 2016; Tresp et al., 2017; Trivedi et al., 2017; García-Durán et al., 2018; Jain et al., 2020; Ma et al., 2019; Xu et al., 2019; Jin et al., 2019; Wang and Li, 2019; Tang et al., 2020; Goel et al., 2019; Xu et al., 2020; Lacroix et al., 2020) has begun to use time information to improve KG completion or to directly complete TKGs. Based on the facts or events they deal with, we summarize representative TKGE methods as follows. (1) Event completion: DE (Goel et al., 2019) turned the entity embedding into a function, DEEMB, that takes the time point as a variable. While DE transplanted SKG embedding methods to TKGs, it did not involve recent GNN-based SKG embedding methods. TComplEx (Lacroix et al., 2020) presented an extension of ComplEx (Trouillon et al., 2016) by adding timestamp embeddings to the decomposition of order-4 tensors. ATiSE (Xu et al., 2019) incorporated time information into entity/relation representations using additive time series decomposition. (2) Event prediction: Esteban et al. (2016) trained an event prediction model using background information provided by the KG and recent events. RE-NET (Jin et al., 2019) models the event sequence as a temporal joint probability distribution; the method is trained on historical data and then, by sampling from the probability distribution, predicts the events of the future timestamp graph. GHNN (Han et al., 2020) used a Hawkes process to capture the dynamics of evolving graph sequences.
Glean (Deng et al., 2020) incorporated both relational and world contexts to capture historical information. (3) Continuous fact completion: Jiang et al. (2016a; 2016b) used the order of relations and temporal consistency constraints to improve completion, but did not make the embedding space directly contain time information. García-Durán et al. (2018) used an RNN to learn the representation of temporal relations, but did not consider that the embeddings of entities should also change over time. Hyte (Dasgupta et al., 2018) represented timestamps as hyperplanes and projected the entities and relations onto these hyperplanes; the facts of all timestamps are then learned jointly using a translation-based score function.

Conclusion

We propose a framework, RTFE, for TKG completion. We have transplanted SKGE models to TKGs and enhanced the performance of existing TKGE models. Experiments on five TKG datasets show that RTFE outperforms baselines and is extensible to future timestamps to some extent. In the future, we will further deal with discrete events: since events at adjacent timestamps are correlated, we plan to modify RTFE so that it can learn correlations (especially causality) between events. By modeling the spatio-temporal dependencies of TKGs, events at future timestamps can be forecast. Besides, we plan to deal with the task of predicting the time validity of facts (Leblay and Chekol, 2018).
Solving the electron and muon $g-2$ anomalies in $Z'$ models

We consider simultaneous explanations of the electron and muon $g-2$ anomalies through a single $Z'$ of a $U(1)'$ extension to the Standard Model (SM). We first perform a model-independent analysis of the viable flavour-dependent $Z'$ couplings to leptons, which are subject to various strict experimental constraints. We show that only a narrow region of parameter space with an MeV-scale $Z'$ can account for the two anomalies. Following the conclusions of this analysis, we then explore the ability of different classes of $Z'$ models to realise these couplings, including the SM$+U(1)'$, the $N$-Higgs Doublet Model$+U(1)'$, and a Froggatt-Nielsen style scenario. In each case, the necessary combination of couplings cannot be obtained, owing to additional relations between the $Z'$ couplings to charged leptons and neutrinos induced by the gauge structure, and to the stringency of neutrino scattering bounds. Hence, we conclude that no $U(1)'$ extension can resolve both anomalies unless other new fields are also introduced. While most of our study assumes the Caesium $(g-2)_e$ measurement, our findings in fact also hold in the case of the Rubidium measurement, despite the tension between the two.

I. INTRODUCTION

The excellent agreement between the Standard Model (SM) and experimental observations makes the persisting anomalies all the more interesting. One long-standing discrepancy between theory and experiment is the anomalous magnetic dipole moment of the muon, a_µ ≡ (g − 2)_µ/2, which has recently been updated to a 4.2σ tension with the SM [1-3]¹,

Δa_µ ≡ a_µ^exp − a_µ^SM = (2.51 ± 0.59) × 10⁻⁹.  (1)

Further data from the ongoing Muon g−2 experiment at Fermilab is expected to reduce the uncertainty by a factor of four [5], and the future J-PARC experiment forecasts similar precision [6], both of which should clarify the status of this disagreement. To add to the puzzle, an anomaly emerged in the electron sector due to (a) an improved measurement of the fine-structure constant, α_em, using Caesium atoms [7], from which the value of (g − 2)_e may be extracted, and (b) an updated theoretical calculation [8]. This yielded a discrepancy in the electron anomalous magnetic moment of

Δa_e^Cs ≡ a_e^exp(Cs) − a_e^SM = (−8.8 ± 3.6) × 10⁻¹³,  (2)

which constitutes a 2.4σ tension with the SM [9]. Notably, this has the opposite sign to the muon anomaly, Eq. (1). Recently, however, a new measurement of the fine-structure constant using Rubidium atoms gave [10]

Δa_e^Rb ≡ a_e^exp(Rb) − a_e^SM = (4.8 ± 3.0) × 10⁻¹³.  (3)

This is a milder anomaly, the discrepancy between experiment and SM being only 1.6σ, and it is in the same direction as the muon anomaly. Remarkably, the Caesium and Rubidium measurements of α_em disagree by more than 5σ, so it is difficult to obtain a consistent picture of a_e^exp. Given this uncertain status quo, in this paper we choose to focus predominantly on the earlier Caesium result, Eq. (2), and only discuss the Rubidium result in Section V (which, however, is to the best of our knowledge the first Z′ analysis of this new experimental situation). The presence of dual anomalies in the electron and muon sectors motivates an exploration of new physics models that could simultaneously explain both. Moreover, the relative size and sign of these anomalies pose an interesting theoretical challenge. Let us consider these issues.
Firstly, the opposite signs of Δa_µ and Δa_e^Cs (from now on we drop the superscript) immediately exclude all new physics models whose contribution to the magnetic dipole moment of charged leptons has a fixed sign. The dark photon [11], for instance, generates Δa_{e,µ} > 0, and therefore cannot account for the dual anomalies. Secondly, the contribution of flavour-universal new physics to (g − 2) is generally expected to be proportional to the mass or mass squared of the lepton (see e.g. [12,13]), whereas from Eqs. (1) and (2) the measured ratio Δa_e/Δa_µ ≈ −3.5 × 10⁻⁴ is opposite in sign to, and roughly fifteen times larger in magnitude than, the naive quadratic scaling m_e²/m_µ² ≈ 2.3 × 10⁻⁵. These considerations, along with the numerous low-scale constraints discussed below, lead to significant model-building obstacles. So far, various attempts have been made to explain the anomalies, with different solutions relying on the introduction of new scalars, SUSY, leptoquarks, vector-like fermions, or other BSM mechanisms, see e.g. [9]. In this paper, we study a rather unexplored possibility: that a (light) Z′ boson with flavour-dependent lepton couplings accounts for both anomalies. A new gauge boson of a U(1)′ symmetry is a well-motivated candidate in many BSM models. It has long been considered a possible explanation of the (g − 2)_µ anomaly [47] (see also e.g. [48-52]), so it seems important to investigate whether a U(1)′ extension of the SM can at the same time also resolve the (g − 2)_e anomaly. One immediate advantage of Z′ models is that it is possible to generate positive or negative contributions to the magnetic moment simply by adjusting the relative size of the vector and axial couplings to fermions, as will be shown below. We focus on a Z′ in the mass range m_e < m_{Z′} < m_µ, which is a natural consequence of various experimental bounds (more on this in Sections II B and III). A Z′ in the MeV mass range has been of interest (see e.g. [53-58]) due to hints of a new 17 MeV boson explaining anomalies in nuclear transitions observed by the Atomki collaboration, both in Beryllium [59] and, more recently, Helium [60]. Models with an MeV-scale Z′ also have the capacity to generate ΔN_eff ≃ 0.2 in the early Universe [61], thereby somewhat ameliorating the Hubble tension [62]. The question then is whether the scenario survives the wealth of sensitive experiments, in particular for m_{Z′} ∼ O(MeV). To answer this, we first perform a model-independent analysis to identify regions in the parameter space of Z′ models that can successfully explain both (g − 2) anomalies. To our knowledge, this is the first study of this scenario in such a general and model-independent way, although a specific Z′ model was previously studied in the context of the dual (g − 2) anomalies and found not to work [63]. Note that we focus on the minimal scenario where the additional contribution to the anomalous magnetic moments comes solely from the Z′, which is different from some other models studied in the literature that include a Z′ plus other new fields (e.g. [39,44,63]). The conclusions of our model-independent analysis serve as a powerful tool for checking the viability of various specific Z′ models, and we hope they will be useful for more complex model-building.

¹ We note that the significance of this anomaly has been questioned by a lattice QCD calculation of the leading-order hadronic vacuum polarisation contribution to a_µ^SM [4].

The layout is as follows: Section II introduces our conventions for the effective Z′ couplings and potential origins of these couplings.
We study experimental constraints on these couplings in Section III, summarising our findings in Figs. 2 and 3. In light of the array of experiments probing light vector bosons in the near future, we discuss the discovery potential of such a Z′ in Section III C. Equipped with the model-independent analysis, in Section IV we consider several models and the challenges they face, demonstrating that some of the simplest and most common classes of U(1)′ extensions of the SM cannot explain the two anomalies simultaneously. Finally, in Section V we address the Rubidium (g − 2)_e anomaly and study the capacity of a Z′ model to explain it in conjunction with the (g − 2)_µ anomaly.

A. Effective Z′ couplings

In the most general framework, a new Z′ with family-dependent charged lepton couplings leads to flavour violation. However, in this paper we assume that the charged lepton Yukawa matrix and the matrix of charged lepton Z′ couplings are simultaneously diagonalisable, so that the Z′ has only lepton-flavour conserving couplings. Various flavour models predict such scenarios (see, for instance, [64]), and in this way we avoid stringent limits on flavour violation, such as from µ → eγ [65]. Flavour-conserving couplings of fermions to the Z′ can be described through L = −Z′_µ J^µ_{Z′}; writing the charged lepton and neutrino interactions in terms of vector and axial couplings, the relevant part of the gauge current is

J^µ_{Z′} ⊃ Σ_α [ ℓ̄_α γ^µ (C_{Vα} + C_{Aα} γ⁵) ℓ_α + C_{να} ν̄_{Lα} γ^µ ν_{Lα} ],  (6)

where α runs over the lepton flavours. It is typically a simple exercise to derive these effective couplings for a given model; for now we assume that the different effective couplings are unrelated. In models with no extra fermions, there are three different contributions to the couplings of SM fermions to the Z′ arising from a U(1)′ gauge group:
• the charge assignment of the fermion under the U(1)′ (flavour dependent);
• gauge-kinetic mixing between the U(1)′ and hypercharge (flavour universal);
• Z−Z′ mass mixing, which is generated if the SM Higgs sector is charged under the U(1)′ (flavour universal).
The combination of these three contributions can generate a variety of vector and axial couplings. As explained in the introduction, in this work we are concerned with exploring the possibility that a single Z′ accounts for the (g − 2)_{e,µ} discrepancies. We first survey the parameter space in a model-independent way in terms of the effective lepton-Z′ couplings defined in Eq. (6); the conclusions from this analysis are then used in Sections IV and V to study whether these couplings can be realised in a few specific classes of Z′ models.

B. Contribution to the charged lepton anomalous magnetic moment

The Z′ modifies the magnetic moment of a charged lepton via the one-loop diagram in Fig. 1. In the notation of Eq. (6), the contribution for a charged lepton of flavour α is [66]

Δa_α = (m_α² / 8π²) ∫₀¹ dx [ 2 C_{Vα}² x²(1−x) + C_{Aα}² ( 2x(1−x)(x−4) − 4 m_α² x³ / m_{Z′}² ) ] / [ m_α² x² + m_{Z′}² (1−x) ].  (7)

In the limits m_α ≪ m_{Z′} and m_α ≫ m_{Z′}, this simplifies to

Δa_α ≃ (m_α² / 12π² m_{Z′}²) ( C_{Vα}² − 5 C_{Aα}² )  for m_α ≪ m_{Z′},
Δa_α ≃ C_{Vα}² / 8π² − C_{Aα}² m_α² / 4π² m_{Z′}²  for m_α ≫ m_{Z′}.  (8)

We see that the way to achieve the correct signs for the contributions to the muon and electron anomalies (Δa_e < 0 and Δa_µ > 0) is with a non-zero axial coupling for the electron (C_Ae) and vector coupling for the muon (C_Vµ). We remark that it is impossible to satisfy both anomalies simultaneously if we demand flavour universality, i.e. C_Ve = C_Vµ and C_Ae = C_Aµ; this is straightforward to see from Eqs. (7) and (8). We may now make some broad arguments about preferred m_{Z′} values.
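Before doing so, a quick numerical cross-check of the reconstructed Eqs. (7) and (8) above may be useful (a sketch under the assumption that the loop function is the standard single-vector-boson result of Ref. [66]; the example couplings are ballpark values for illustration, not fit results):

```python
import numpy as np
from scipy.integrate import quad

ME, MMU = 0.000511, 0.10566  # lepton masses in GeV

def delta_a(cv, ca, m_l, m_zp):
    """One-loop Z' contribution to (g-2)/2 of a lepton, Eq. (7)."""
    def integrand(x):
        den = m_l**2 * x**2 + m_zp**2 * (1 - x)
        vec = cv**2 * 2 * x**2 * (1 - x)
        axi = ca**2 * (2 * x * (1 - x) * (x - 4) - 4 * m_l**2 * x**3 / m_zp**2)
        return (vec + axi) / den
    val, _ = quad(integrand, 0.0, 1.0)
    return m_l**2 / (8 * np.pi**2) * val

m_zp = 0.020  # a 20 MeV Z', inside the window m_e < m_Z' < m_mu

# Electron: a pure axial coupling gives a negative shift (cf. Eq. (8));
# |C_Ae| ~ 9e-6 (m_Z'/MeV) reproduces Delta a_e ~ -9e-13.
print(delta_a(0.0, 9e-6 * 20, ME, m_zp))   # ~ -9e-13

# Muon: a pure vector coupling, Delta a_mu ~ C_Vmu^2 / (8 pi^2) for m_Z' << m_mu.
print(delta_a(4.5e-4, 0.0, MMU, m_zp))     # ~ +2.5e-9
```

The electron call returns a negative shift of order 10⁻¹³ driven by C_Ae, and the muon call a positive shift of order 10⁻⁹ driven by C_Vµ, reproducing the sign structure discussed above.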
In the case of a light Z′ with m_{Z′} ≪ m_e, even the smallest effective couplings required to explain the anomalies, obtained by setting C_Ve = C_Aµ = 0, lead to orders of magnitude between C_Vµ and C_Ae, which could only be accounted for by either an orders-of-magnitude difference in their charges under the U(1)′ or a very fine-tuned cancellation of the flavour-dependent part of C_Ae against the flavour-universal contribution. We will see in Section III that such a light Z′ with couplings large enough to satisfy the anomalies is in any case excluded by cosmological constraints. Therefore, we will focus on m_{Z′} > m_e.² It is interesting to note that if the anomalies had the opposite signs, i.e. had the experimental data required Δa_e > 0 and Δa_µ < 0, then C_Ve = C_Vµ and C_Ae = C_Aµ could have given a viable solution. Thus, neither the different signs nor the unusual ratio of the anomalies necessarily implies that flavour non-universal physics must be present. For the heavy regime, i.e. m_{Z′} ≫ m_µ, two arguments follow. Firstly, considering the muon sector, in the region 2m_µ < m_{Z′} ≤ 10 GeV the new vector boson is excluded by BaBar through its decay into two muons [67], while for 5 GeV ≤ m_{Z′} ≤ 70 GeV it is similarly excluded by CMS [68]. Turning to the electron sector, for m_{Z′} ≳ 10 GeV the axial coupling to electrons required to satisfy the anomaly in the electron sector becomes large and strongly constrained. The intermediate mass range m_e < m_{Z′} < m_µ admits larger, less fine-tuned solutions, and so we will focus on this regime in the remainder of this paper.

III. CONSTRAINTS ON THE EFFECTIVE COUPLINGS

The effective couplings introduced in Eq. (6) are subject to a wide variety of constraints, which we now discuss. In general, the Z′ could couple to all SM fermions, and indeed there are rather stringent bounds on Z′ couplings to quarks; however, we focus on Z′ interactions with electrons and muons, these being the critical ones for the explanation of the (g − 2)_{e,µ} anomalies. Since lepton doublets contain both charged leptons and neutrinos, non-zero effective couplings to charged leptons generally imply effective couplings to neutrinos, which have their own experimental constraints. This will be borne out in the example models considered in Section IV.³ For a given explicit model there may be many additional constraints, which can arise in several different ways. Firstly, as mentioned just above, the Z′ may also couple to the tau or to quarks; bounds on Z′ couplings to light quarks are discussed, for instance, in [53,54,56-58]. Secondly, Z−Z′ mixing leads to a shift in the Z boson couplings, which have been very precisely measured at LEP [69], as well as in other electroweak-scale parameters. While there may be such model-dependent bounds, the goal of this section is to study the viability or otherwise of a Z′ solution to the two anomalies based on the leptonic Z′ couplings alone. The plethora of experimental constraints is described below, with our results summarised in Figs. 2 and 3.

³ The dark photon is a notable counter-example, with interactions solely generated through gauge-kinetic mixing, where C_Vα ≠ 0 while C_Aα = C_να = 0. However, the dark photon does not successfully explain the (g − 2)_{e,µ} anomalies because, as is easily seen from Eq. (7), C_Ae = 0 implies Δa_e ≥ 0.

A. Couplings to electrons

We first outline the most important limits on the effective couplings of the Z′ to electrons (C_Ve, C_Ae) and electron neutrinos (C_νe).
Cosmological and astrophysical bounds

MeV-scale states with even very small interactions with electrons or neutrinos (effective couplings as tiny as |C| ∼ 10⁻⁹) can remain in thermal contact with the SM plasma during Big Bang Nucleosynthesis (BBN) and thereby significantly alter early-universe cosmology. Bounds on the masses of electrophilic and neutrinophilic vector bosons from various cosmological probes were calculated in [70]. Combining BBN and Planck data, they found at 95.4% C.L. that an electrophilic Z′, i.e. one with √(C_Ve² + C_Ae²) ≫ |C_νe|, is constrained to have a mass of at least 9.8 MeV. From Eqs. (2) and (8), we see that for m_{Z′} ∼ MeV the effective electron-Z′ coupling should satisfy |C_Ae| > 10⁻⁶, so the BBN bounds do apply here. The limit is slightly weakened for larger |C_νe|, therefore we take m_{Z′} ≥ 9.8 MeV as a conservative lower bound on our Z′ mass.⁴ The Z′ also affects various aspects of stellar evolution, the most critical of which for an MeV-scale Z′ is white dwarf cooling [71]. The Z′ mediates an additional source of cooling via e⁺e⁻ → Z′ → νν̄. Since the Z′ mass under consideration is much larger than white dwarf temperatures, T_WD ∼ 5 keV, this can be treated as an effective four-fermion interaction at the scale T_WD with the Z′ integrated out. Motivated by the good agreement between predictions and observations of white dwarf cooling, the benchmark set by [71] is that new sources of cooling should not exceed SM ones, and we impose this as an approximate bound. When plotting this constraint in Fig. 2, we assume that only C_Ve, C_Ae and C_νe are non-zero. Finally, we note that a Z′ which couples to neutrinos can also be an additional source of energy loss for supernovae, if it is able to escape the supernova core. We followed the formalism in Appendix B of [61] and enforced that the additional energy loss due to the Z′ is no greater than the energy loss in the SM during the first ten seconds of the supernova explosion. However, for a roughly MeV-scale Z′ this observation only constrains a band of effective couplings 10⁻¹² ≲ C_να ≲ 10⁻⁷, which is much too small to be relevant for the anomalies.

Collider and beam dump bounds

A stringent limit on Z′ interactions with electrons comes from the BaBar experiment, which searched for a dark photon, A′, via e⁺e⁻ → γA′ with A′ → e⁺e⁻. The results are reported in [72] and probe masses from 20 MeV up to 10.2 GeV. The bound on ε, the kinetic mixing parameter of the dark photon model arising from the gauge-kinetic term, can be recast as a bound on √(C_Ve² + C_Ae²). We neglect the statistical fluctuations in the BaBar bound (cf. Fig. 4 of [72]), opting conservatively to extrapolate from the most constraining points of the 90% confidence exclusion region and interpolating between these. The constraint becomes mildly stronger with increasing Z′ mass. For m_{Z′} < 2m_µ, the Z′ is sufficiently light that it decays only to electrons and neutrinos; we will see that the couplings to electrons should generically be much larger than the couplings to neutrinos, so BR(Z′ → e⁺e⁻) ≈ 1. The BaBar result alone rules out a vast region of parameter space. The smallest axial Z′−e coupling required to satisfy the (g−2)_e discrepancy is |C_Ae| ≃ 9 × 10⁻⁶ (m_{Z′}/MeV), as can be seen from Eq. (8) by setting C_Ve = 0. The BaBar bound of roughly |C_Ae| ≲ 3 × 10⁻⁴ then excludes a Z′ heavier than about 40 MeV, up to the largest mass probed by the experiment, 10.2 GeV.
This limit only strengthens for C_Ve ≠ 0, since a larger C_Ae is then required to explain (g − 2)_e, while at the same time C_Ae is more constrained because BaBar bounds the combination √(C_Ve² + C_Ae²). The KLOE experiment also constrains the Z′ coupling to electrons [73]. Although generally weaker than BaBar's limit, its exclusion region covers additional parameter space since the experiment probes masses as small as 5 MeV.

Beam dump experiments also probe the Z′ couplings to electrons, since the Z′ may be produced by the electron beam in the dump, see e.g. [74]. The produced Z′s should therefore decay in the dump before they reach the detector. The best bound comes from NA64 [75], which sets limits on a Z′ with masses between 1 MeV and 24 MeV.

A further stringent bound on the parameter space comes from the precise measurement of parity-violating Møller scattering at SLAC [76]. For Z′ masses below around 100 MeV, the bound is independent of m_Z′ and yields [77] |C_Ve C_Ae| ≲ 10⁻⁸. As indicated above, a tiny C_Ve is ideal for explaining the (g − 2)_e anomaly while avoiding collider constraints with as small a value of √(C_Ve² + C_Ae²) as possible. Taking C_Ve close to zero is clearly also an efficient way to evade this Møller scattering limit.

Neutrino scattering bounds

Very strong restrictions on the effective couplings come from measurements of neutrino-charged lepton scattering [55]. There have been many experiments testing neutrino interactions; here we study the most relevant ones: TEXONO [78], Borexino [79], and CHARM-II [80], which are known to be among the most constraining in general (see e.g. [55]). We compute the Z′-induced shift in the antineutrino-electron scattering cross-section following Ref. [55]; comparing this with the TEXONO measurement [78] puts extremely stringent bounds on the Z′ effective couplings.

Borexino measures the scattering of solar neutrinos. The electron neutrino survival probability is measured as (51 ± 7)%, while the experiment cannot distinguish muon and tau neutrinos. For simplicity, we therefore assume that 50% of the scattered neutrinos are electron neutrinos, with 25% each of muon and tau neutrinos. The scattering rate induced by the Z′ then follows by weighting the flavour contributions accordingly, cf. Eq. (14). The cross-section including new physics should not deviate from the SM cross-section by more than about 10% [79,81], and this restriction sets a strong limit on the parameter space. Note from Eqs. (13) and (14) that C_Ve and C_Ae can both be large as long as C_νe,µ,τ are sufficiently small.

Analysis of constraints in the electron sector

We now combine all the constraints discussed above to analyse the viable parameter space for the explanation of Δa_e. Our results are summarised in Fig. 2; in the plots, the effective couplings to muons and taus are set to zero, which is relevant for the bounds from white dwarfs and Borexino, cf. Eqs. (9) and (14) respectively. The four panels are: (a) C_νe = 0, |C_Ve| fixed such that a_e is 1σ below the experimental value; (b) C_νe = 0, |C_Ve| fixed such that a_e is 1σ above the experimental value; (c) C_Ve = 0, C_Ae > 0 fixed such that a_e is 1σ below the experimental value; (d) C_Ve = 0, C_Ae > 0 fixed such that a_e is 1σ above the experimental value.

As the vector coupling of the Z′ to the electron is required to be smaller than the axial coupling, Eq. (8) for the electron is well approximated by Δa_e ≈ −5 C_Ae² m_e²/(12π² m_Z′²). Following this conclusion, we set C_Ve = 0 in Figs. 2 (c,d) to explore the maximum allowed parameter space for the neutrino coupling, C_νe, against the mass m_Z′. Similar to before, in the left plot, Fig. 2 (c), we set C_Ae such that a_e is 1σ below its experimental value, while in the right plot, Fig. 2 (d), it is set 1σ above.
B. Couplings to muons

Now we turn to the bounds on the effective couplings of the Z′ to muons and the muon neutrino, namely C_Vµ, C_Aµ and C_νµ. There are fewer bounds on these than on the couplings to electrons, for a few reasons. One is that electrons, being stable, are far easier to handle experimentally. Another is that we are led to probe Z′ masses sufficiently light that the Z′ cannot decay into muons: as we have seen, various experiments constrain C_Ve,Ae through the absence of Z′ → e⁺e⁻ decays, but they cannot similarly constrain C_Vµ,Aµ through the absence of Z′ → µ⁺µ⁻ decays, as these are already kinematically forbidden. Despite this, there remain various strict limits on Z′ interactions with muons and muon neutrinos.

Cosmological and astrophysical bounds

When |C_νµ| ≳ 10⁻⁹, the BBN and Planck bounds studied in [70] require the Z′ mass to lie at least in the MeV range to avoid constraints from measurements of N_eff and primordial element abundances. Additionally, we note that a study of energy loss in supernovae due to Z′-µ interactions [82] rules out a Z′ with coupling |C_Vµ| ≳ 4 × 10⁻⁴ for masses less than O(100) eV.⁶ Recall, however, that for m_Z′ ≲ 100 eV the effective coupling required to explain the (g − 2)_e anomaly must be greater than 10⁻⁹. With an interaction of this size, the BBN bound on a new electrophilic species dictates that m_Z′ must be at least in the MeV range. We can therefore rule out the possibility of an extremely light Z′ (i.e. m_Z′ ≪ MeV) being able to explain the two g − 2 anomalies. Its mass must consequently be at least 16 MeV, as we showed from the analysis of constraints on Z′ couplings to the electron sector in the previous section.

⁶ In the models studied in [82], the Z′ has interactions with both muons and muon neutrinos. However, at low masses (≪ MeV) it is the Z′-µ interactions which dominate the bounds, while the Z′-ν_µ interaction plays a negligible role.

Neutrino scattering bounds

Several neutrino scattering experiments bound the couplings to muons and muon neutrinos. The most stringent of these are Borexino and CHARM-II, introduced above. The Borexino result was given in Eq. (14). The mean (anti)neutrino energy in the CHARM-II experiment is much larger than the Z′ masses we consider, with E_ν = 23.7 GeV and E_ν̄ = 19.1 GeV [80], so the approximation m_Z′ ≫ √(m_e T) which we used to obtain Eqs. (13) and (14) cannot be applied. We instead apply the formalism in [55,81] to obtain numerical results, which enter Fig. 3 by enforcing that the shift in the neutrino scattering cross-section induced by the Z′ is no greater than 6% [55]. We mention that some doubts about the CHARM-II analysis were raised in [56]; however, we do not enter into this discussion.

A Z′ with couplings to muons and muon neutrinos also modifies the neutrino trident process, ν_µ N → ν_µ µ⁺µ⁻ N [83]. Neglecting the coupling C_Aµ, since |C_Aµ| ≪ |C_Vµ| is necessary to explain the (g − 2)_µ anomaly when m_Z′ ≲ m_µ (see Eq. (8)), the trident cross-section including the Z′ contribution, σ_Trident/σ_SM, is given in [83]. This can be compared with the CCFR measurement, σ_CCFR/σ_SM = 0.82 ± 0.28 [84], to give a constraint.

FIG. 3: Constraints on the mass and effective couplings of the Z′ in the muon sector. (a) C_νµ = 10⁻⁵, C_Aµ fixed such that the a_µ anomaly is exactly satisfied; (b) contours of the corresponding |C_Aµ|; (c) C_Vµ = C_Aµ = 0, C_Ae > 0 fixed such that a_e is 1σ below the experimental value; (d) as (c), but with a_e 1σ above the experimental value.
In (a) we have set C_νµ = 10⁻⁵ and fixed C_Aµ so that the a_µ anomaly is exactly satisfied, while in (b) we show contours of the values of |C_Aµ| this corresponds to, as a function of |C_Vµ| and m_Z′. In the bottom two plots we focus on the neutrino couplings, setting C_Ae > 0 and C_Ve = 0 such that the contribution of the Z′ to a_e is (c) 1σ below and (d) 1σ above the experimental value. See the text for more details.

Analysis of constraints in the muon sector

We combine the results of the above constraints in Fig. 3, with panel (a) showing the situation for the couplings to muons. The bounds on the Z′ interaction with muon neutrinos are significantly stronger: Figs. 3 (c) and (d) show the bounds on the neutrino couplings from the various experiments (we take C_νe = C_ντ = 0). We must invoke couplings to electrons, since modifications to both neutrino scattering on electrons and white dwarf cooling necessarily depend on the Z′ coupling to electrons, as does Z′ detection at beam dumps. To be as minimal as possible, we take only a non-zero C_Ae, assuming C_Ve = 0. In (c) C_Ae > 0 is set (as a function of m_Z′) such that a_e is 1σ below its experimental value, while in (d) it is set such that a_e is instead 1σ above. This allows us to see the full range of allowed C_νµ. Clearly, its absolute value cannot be much larger than ∼ 2 × 10⁻⁵, which justifies the choice of C_νµ in plot (a). Taking C_Ae < 0 instead would only flip (c) and (d) about the x-axis, since the neutrino scattering and white dwarf constraints are invariant under C_Ae → −C_Ae and C_νµ → −C_νµ when those are the only non-zero couplings.

C. Future discovery potential

Having surveyed the current limits, in this section we discuss future experiments which could discover (or preclude) the low-scale Z′ explanation of (g − 2)_{e,µ} by closing the allowed parameter space given in Figs. 2 and 3 (keeping in mind that these were generated assuming the Caesium a_e result). The place to start is with the magnetic dipole moment anomalies themselves. The two highly inconsistent measurements of α_em (from which the value of (g − 2)_e is derived) made in Caesium [7] and Rubidium [10] atoms demand a third independent experiment to resolve the situation; it is indeed not even clear whether an anomaly exists. On top of this, the Muon g-2 and J-PARC experiments [5,6] are expected to provide improved measurements of a_µ, which is particularly important given the recent debate about the SM prediction [4].

Beyond this, there are several future experiments which are expected to test the allowed Z′ couplings to charged leptons. We note first of all that an improved measurement of parity-violating Møller scattering can never close the parameter space, as this bounds the combination |C_Ve C_Ae|, which can always be satisfied by taking one of C_Ve or C_Ae to zero while the other (depending on the sign of the a_e anomaly) explains the discrepancy. Thus, we will not discuss future experiments in this area. To fully probe the available space, we require other bounds to be strengthened. Currently, the lower bound on the Z′ mass, m_Z′ ≳ 16 MeV, is fixed by NA64's visible decay limits, while Belle-II is expected to significantly extend BaBar's sensitivity to e⁺e⁻ → γZ′ with Z′ → e⁺e⁻. An alternative experiment with sensitivity similar to NA64's is MAGIX at MESA [88], which is currently under construction and expecting results in the next few years. The combination of NA64 and Belle-II (or MAGIX) could entirely rule out or discover low-scale Z′ explanations of the current Caesium (g − 2)_e result. Beam dumps (e.g. FASER [89] and SHiP [90]) are also expected to play a role.
This provides hope that a firm conclusion could be reached within the next few years. The MUonE experiment [91] will probe the product of the couplings to electrons and to muons. In this way it is a unique test of a Z′ which explains both anomalies, because such a Z′ is required to have significant couplings to both leptons. The experiment is expected to cover a significant portion of the parameter space which remains open, see [92,93].

Finally, we point out that while there are many dark photon experiments beyond those listed above, many do not directly test our framework, for two reasons. Firstly, we are concerned with the lepton-Z′ couplings only, so experiments which involve production of the Z′ through quarks are not applicable; this includes electron-proton scattering (such as DarkLight [94]), proton-proton scattering and pion decays (e.g. NA62 [95]). Secondly, we require visible (Z′ → e⁺e⁻) decays of the Z′, which excludes the invisible-only experiments such as PADME [96], VEPP-3 [97], BDX [98] and LDMX [99]. Consequently, the available parameter space in Figs. 2 and 3, and hence the discovery potential, may only be fully covered by the small number of experiments which focus on vector bosons produced by leptons and decaying to e⁺e⁻.

IV. VIABILITY OF SPECIFIC Z′ MODELS

Having completed our model-independent analysis in Section III, we now turn to specific realisations of Z′ models. The ingredients for the simultaneous explanation of the (g − 2) anomalies with a single Z′ are:

1. A mass in the range 16 MeV ≲ m_Z′ < 2m_µ.
2. A sizeable axial coupling to electrons, |C_Ae| ∼ O(10⁻⁴), together with a much smaller vector coupling, |C_Ve| ≪ |C_Ae|.
3. A large vector coupling to muons, 5 × 10⁻⁴ < |C_Vµ| ≲ 0.05, and an axial coupling C_Aµ that is smaller by at least a factor of a few.

We now attempt to realise this hierarchy of couplings in various classes of Z′ models, each of which inevitably introduces additional relations between the effective couplings. We begin with the simplest case of just the SM extended by a U(1)′. We then move on to a scenario with additional Higgs doublets, and finally discuss the viability of a Froggatt-Nielsen style model, in which the gauge invariance of the charged lepton Yukawa interactions is relaxed. Note that in each case the dominant contribution to the shift in (g − 2)_{e,µ} comes solely from the Z′.

Before commencing, we also remark that the cancellation of gauge anomalies is crucial for constructing a consistent theory. The [U(1)′]³ and [U(1)′]-gravitational anomalies can always be cancelled by introducing additional chiral fermions which are charged under the U(1)′ but sterile with respect to the SM (in fact one needs at most five [100]). The anomaly cancellation conditions involving SM groups are typically more challenging to satisfy. However, this section addresses the primary question of whether it is possible to generate the desired effective couplings, without delving into how to do so in an anomaly-free way.

A. SM + U(1)′

First consider a minimal Z′ model, in which the SM is extended by a gauged U(1)′, adding also a scalar, S, charged under the U(1)′, whose non-zero VEV, ⟨S⟩ = v_S/√2, spontaneously breaks the U(1)′ symmetry. We note here that this unspecified U(1)′ covers in particular the case of gauging combinations of electron, muon and tau number, i.e. U(1)_{xe+yµ+zτ} for some x, y, z. Let us establish the formalism, which will also be useful for the subsequent models. In general there is kinetic mixing between U(1)_Y and U(1)′, and the kinetic terms for the pair of U(1)s can be written with a mixing term between the two field strengths, where X_µ is the gauge field associated with U(1)′ and X_µν is the corresponding field strength tensor.
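The kinetic Lagrangian itself did not survive extraction; the standard form, which we assume matches the one intended here up to sign and normalisation conventions, is

\mathcal{L}_{\rm kin} \;=\; -\tfrac{1}{4}\, B_{\mu\nu} B^{\mu\nu} \;-\; \tfrac{1}{4}\, X_{\mu\nu} X^{\mu\nu} \;+\; \tfrac{\varepsilon}{2}\, B_{\mu\nu} X^{\mu\nu},

where B_µν is the hypercharge field strength. For small ε the mixing term is removed by a non-unitary field redefinition, which is what generates the g̃ ≈ −g₁ε coupling of X_µ to hypercharge quoted below.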
An appropriate rotation and rescaling of the fields removes the mixing (see e.g. [101]) and leaves the couplings in the covariant derivative in the form D_µ ⊃ −i [ g₁ Y B_µ + (g′ z + g̃ Y) X_µ ], where g′ and g₁ are the respective U(1)′ and U(1)_Y gauge couplings, and Y and z are the respective charges of the field under U(1)_Y and U(1)′. Here we have kept only terms of leading order in the kinetic mixing parameter ε, which is taken to be small; this gives g̃ ≈ −g₁ ε. Breaking the EW and U(1)′ symmetries and diagonalising the gauge boson mass matrix, we move into the basis of mass eigenstates, A_µ, Z_µ and Z′_µ, via a rotation parametrised by the weak-mixing angle θ_w and the Z-Z′ mixing angle φ, where s (c) denotes sine (cosine). This gauge boson mixing is given in terms of z̃_H ≡ g̃/g′ + 2z_H, where z_H (z_S) is the U(1)′ charge of the Higgs (of S).

Finally, after outlining this procedure, we can write the effective couplings of the SM fermions to the gauge boson mass eigenstates. The effective couplings for charged leptons at leading order in g′ and g̃ are given in Eqs. (23) and (24), where z_Lα (z_Rα) is the U(1)′ charge of the lepton doublet (singlet), l_Lα (e_Rα). Here we see from the U(1)′ invariance of the SM charged lepton Yukawa couplings, which forces z_Rα = z_Lα − z_H, that the axial couplings C_Aα are suppressed, so this minimal model cannot accommodate the electron anomaly. Sizeable axial couplings require either (a) new fermions which mix with the SM leptons, (b) an enlarged Higgs sector, or (c) relaxing the gauge invariance of the Yukawa interactions; in case (a) the new fermions could also contribute to (g − 2)_{e,µ} directly.

B. 2HDM + U(1)′

Here we consider option (b), which was previously explored e.g. in the context of the Atomki anomaly [103]; in Section IV C we will consider option (c). Let us take the type-I 2HDM, wherein all SM fermions couple to the same Higgs doublet, H₂. This choice will not be important for the following discussion: since we are concerned only with the lepton couplings, our discussion is general. We can also generalise to the case of many Higgs doublets, see for instance Appendix A of [55]. The key point is that this set-up modifies Eq. (24) and therefore permits non-negligible axial couplings.

The kinetic mixing between U(1)_Y and U(1)′ and the subsequent modification of the covariant derivatives is as described in Eqs. (18)-(20). The neutral gauge boson mass mixing is modified by the presence of two Higgs fields, H₁,₂, with U(1)′ charges z₁,₂ and VEVs v₁,₂. The mixing angle tan 2φ is then given in terms of z̃_j ≡ g̃/g′ + 2z_j for j = 1, 2. Note that in the limit β → 0 (π/2), i.e. when only v₁ (v₂) is non-zero, we recover the result of Eq. (22) up to z_H → z₁ (z₂).

Accounting for the kinetic and mass mixing, the effective couplings for charged leptons and neutrinos at leading order in g′ and g̃ follow (Eqs. (28) and (29)), using that the U(1)′ invariance of the charged lepton Yukawa couplings now demands z_Rα = z_Lα − z₂. We see that C_Aα can be non-zero when z₁ ≠ z₂, and that it is flavour-universal. C_Vα and C_να, on the other hand, are flavour-dependent. However, both depend linearly on z_Lα, so that their flavour-dependent parts are locked together, as expressed by Eq. (30). Consequently, there are not six independent effective couplings C_Vα, C_Aα, C_να for α = e, µ; only four are independent.

Given this, it is in fact simple to argue that this class of models cannot simultaneously explain the (g − 2)_{e,µ} anomalies. Our model-independent analysis in Section III established that, due to the stringency of the bounds from neutrino scattering experiments, the effective neutrino couplings must be tiny: |C_νe|, |C_νµ| ≲ 10⁻⁵, cf. Figs. 2 and 3. From Eq. (30), this implies that we need |C_Ve − C_Vµ| ≲ 10⁻⁵. However, it is apparent from points 2 and 3 of the summary list at the beginning of this section that |C_Ve − C_Vµ| ≈ |C_Vµ| > 5 × 10⁻⁴. Clearly, this framework is not successful.
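Eq. (30) is not reproduced in this excerpt, but the linear dependence on z_Lα described above pins down its content; under that assumption it takes the form

C_{Ve} - C_{V\mu} \;=\; C_{\nu e} - C_{\nu\mu} \;\propto\; g'\,(z_{Le} - z_{L\mu}),

i.e. the flavour non-universality of the vector couplings to charged leptons is exactly that of the neutrino couplings. Neutrino scattering then forces the right-hand side below ∼10⁻⁵, while the two anomalies force the left-hand side above ∼5 × 10⁻⁴, which is the contradiction used in the text.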
In the simplest U(1)′ extension of the SM, only the (g − 2)_µ anomaly could be resolved, as it was impossible to generate significant axial couplings of the Z′. Introducing additional Higgs fields enables large axial couplings, so that either the (g − 2)_e or the (g − 2)_µ anomaly may be explained. However, the correlations between the different effective couplings and the strength of the bounds on the neutrino couplings conspire to preclude an explanation of both anomalies at the same time.

C. Froggatt-Nielsen model

A second way to generate sizeable axial couplings, as is necessary to explain the Caesium (g − 2)_e anomaly, is to consider a Froggatt-Nielsen type model [102]. In this set-up, we replace the charged lepton Yukawa interactions by effective interactions of the form λ_αβ (φ/Λ)^{n_αβ} l̄_Lα H e_Rβ + h.c. Here λ_αβ = λ_α δ_αβ is a diagonal matrix of couplings (in the charged lepton mass basis), φ is a flavon, n_αβ = n_α δ_αβ is a diagonal matrix whose entries are determined by the U(1)′ charges of the flavon and the SM leptons, and Λ is the scale of some unspecified UV physics. The SM charged lepton Yukawa couplings are then recovered at the non-zero VEV of the flavon, i.e. y_α = λ_α (⟨φ⟩/Λ)^{n_α}. More complicated set-ups can also be written down (e.g. the clockwork model of [58]), and there may be more than one flavon. The effective couplings in this model follow at leading order in g′ and g̃ (Eq. (32)). This model was previously studied in [57] to explain the Atomki Beryllium anomaly [59], another instance in which an unsuppressed C_Ae is required.

Combining Eqs. (23), (24) and (32) gives Eq. (33), a generalisation of Eq. (30) to the case of non-universal C_A. However, we see from Fig. 2 (a) that viable solutions to the electron anomaly require |C_Ve| ≪ |C_Ae|, while Eq. (7) demands |C_Aµ| ≪ |C_Vµ| for the muon anomaly, to a good approximation (see Eqs. (7) and (8)). Thus, in the mass range of interest, keeping the neutrino couplings small in Eq. (33) forces |C_Ae| ≈ |C_Vµ| at values already excluded by the constraints above, and hence there is no combination of effective couplings fulfilling Eq. (33) such that both anomalies are satisfied to within 1σ and all experimental constraints are respected. It is notable that even in such a general theoretical setting, the Z′ explanation is unsuccessful.

V. Z′ SOLUTIONS CONSIDERING THE RUBIDIUM MEASUREMENT

We have thus far considered only the (g − 2)_e anomaly from the Caesium measurement, Eq. (2). Significantly, this has the opposite sign to the muon anomaly. In Section IV it was shown that the combination of the different signs and sizes of the anomalies, along with the copious experimental constraints, makes it impossible to construct a model which can satisfy both at the same time. One might suppose that it is easier to explain two anomalies which have the same sign, which is exactly the situation if one considers instead the recent Rubidium result for a_e.

Let us immediately turn to the most general class of models considered in the previous section, the Froggatt-Nielsen scenario; the SM+U(1)′ and NHDM+U(1)′ models are specific cases of this set-up. The key feature of this model is the relation between electron and muon couplings given in Eq. (33), which is itself a consequence of gauge invariance. We note that the magnitude of the Rubidium anomaly is similar to that of the Caesium anomaly, with |Δa_e^Rb/Δa_e^Cs| = 0.55, and therefore the former demands C_Ve ∼ O(10⁻⁴), just as the latter had required C_Ae ∼ O(10⁻⁴). Moreover, the electron neutrino couplings are still constrained to be O(10⁻⁵), with the bounds of Figs. 2 (c,d) modified only by an order-one factor, because the relevant bounds are similar or identical under C_Ae → C_Ve, see Eqs. (9), (13) and (14). The most minimal case is non-zero C_Ve and C_Vµ only.
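Eq. (33) is again not shown in this excerpt; assuming the same linear charge dependence as before, now with non-universal axial pieces, a form consistent with every use made of it below (in particular with C_Vµ = C_Ve + 10⁻⁸/C_Ve when C_Aµ = C_νe = C_νµ = 0 and C_Ae = −10⁻⁸/C_Ve) is

C_{Ve} - C_{Ae} - C_{\nu e} \;=\; C_{V\mu} - C_{A\mu} - C_{\nu\mu}.

With all axial and neutrino couplings switched off, this reduces to C_Ve = C_Vµ, which is the minimal case considered next.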
In this case, Eq. (33) dictates C_Ve = C_Vµ ≈ 5 × 10⁻⁴, which is however excluded by BaBar.⁹ For smaller m_Z′, keeping the same effective coupling C_V causes either too small a shift in a_µ or too great a shift in a_e. Generalising to include C_Ae and C_Aµ, the former is restricted by the Møller scattering bound, |C_Ve C_Ae| ≲ 10⁻⁸. In Fig. 4 we plot the 1σ regions which explain the two anomalies individually, along with the various constraints, setting C_Ae = −10⁻⁸/C_Ve to saturate the Møller scattering limit, and C_Aµ = C_νe = C_νµ = 0. Eq. (33) then dictates that C_Vµ = C_Ve + 10⁻⁸/C_Ve. As can be seen, while either anomaly can be satisfied by itself, the pair cannot be explained simultaneously. Various alternatives do not ameliorate the problem. A smaller |C_Ae| would in turn require a smaller |C_Ve| to satisfy Δa_e^Rb, thereby lowering the blue band in Fig. 4. Making C_Ae > 0 would decrease C_Vµ as a function of C_Ve, thus raising the purple a_µ band. Finally, a larger |C_Aµ| would mean a larger C_Vµ is needed to explain Δa_µ, which also raises the purple band. For these reasons, the general Froggatt-Nielsen scenario cannot solve the anomalies; and since this set-up covers the SM+U(1)′ and NHDM+U(1)′ models, those scenarios are similarly unsuccessful.

⁹ The BaBar and NA64 bounds on |C_Ae| in Fig. 2 (a,b) for C_Ve ≈ 0 (i.e. along the diagonal) can be reinterpreted here as bounds on |C_Ve|, since the experiments bound the combination √(C_Ve² + C_Ae²).

We see that the three main challenges in explaining both Δa_e^Cs and Δa_µ, namely i) the relative magnitudes of the anomalies, ii) the stringent experimental limits on the different effective couplings, particularly C_νe and C_νµ, and iii) the relations between the effective couplings due to gauge invariance, are also present in the attempt to explain Δa_e^Rb and Δa_µ simultaneously. Thus, although the different signs of the muon and Caesium electron anomalies are an interesting feature, they do not seem to be the main obstacle for Z′ model-building. Since the sizes of the anomalies are fixed by experiment and the limits on the effective couplings will only get stronger with time (see the summary in Section III C), in order to solve both anomalies one must in particular find ways around Eq. (33). Possible ways to do this, such as introducing extra fermions, are beyond the scope of this paper.

VI. CONCLUSION

There is a mixed experimental picture for the anomalous magnetic moment of the charged leptons. While the status of (g − 2)_µ has been solidified by the recent Fermilab measurement, there is considerably more uncertainty surrounding (g − 2)_e. We have explored in detail the possibility of simultaneously explaining both the (Caesium) (g − 2)_e and (g − 2)_µ anomalies with a single low-scale Z′. After introducing the formalism in Section II, in Section III we found the experimentally allowed region which can explain the anomalies to within 1σ, and in Section IV we showed that explicit models cannot realise the required pattern of couplings. We then demonstrated in Section V that such models also cannot simultaneously satisfy the (g − 2)_µ and Rubidium (g − 2)_e anomalies. This was notable since those two anomalies have the same sign: factors such as the strong individual limits on Z′ couplings (studied in Section III) and the relative size of the two anomalies are thus more challenging to overcome in Z′ models than their relative sign. To our knowledge, this is the first study of a Z′ explanation for the muon anomaly together with the newest (g − 2)_e result.
The conclusion of our analysis is that Z′-only explanations of the dual (g − 2)_e and (g − 2)_µ anomalies are ruled out; additional new fields must be introduced in order to explain the two discrepancies. This is true both for the Caesium and the Rubidium value of a_e. If the (g − 2)_µ anomaly, measured both at Brookhaven and Fermilab, is borne out by the future J-PARC experiment, and (either) (g − 2)_e discrepancy persists, the SM will be faced with two disagreements between theory and experiment of a similar nature but of different magnitude and possibly sign. In principle, a MeV-scale vector boson can have couplings to leptons which resolve both while satisfying the plethora of existing experimental constraints; it appears, however, that additional fields contributing to the leptonic magnetic moment(s) are also required. Given the promising experimental outlook over the next decade, we should know soon whether or not there exists such a Z′, and an associated dark sector, with the ability to resolve the (g − 2)_{e,µ} anomalies.
Notes on xenophytes detected in Catalonia, Spain

These notes include six species, among them three grasses, Axonopus compressus, Dactyloctenium aegyptium and Megathyrsus maximus, the crucifer Lepidium densiflorum, the dwarf annual composite Soliva sessilis, and the climbing Aristolochia sempervirens (Aristolochiaceae), all present in or around Barcelona, Catalonia (northeastern Spain). All are recent additions to the increasing alien flora of the region. Some have been recorded previously from the Iberian Peninsula but are new to Catalonia; others appear to be new records for the peninsula.

Aristolochia sempervirens L.

Climber found growing on Montjuïc, the coastal hill within the city limits of Barcelona, where it seems to prefer the shade of pine (Pinus halepensis Mill.) and other trees. In early 2013 about a dozen plants were located in an area occupying approximately 1 km² between 130 and 180 m above sea level. It is a Central and Eastern Mediterranean species, with its western limit in Algeria, though naturalized further north in Italy and the south of France. The only Peninsula record is from Portugal (Beira Litoral; Almeida, 1999), where it has become naturalized. We are not aware of earlier records from Spain, and can only suppose it to be a recent introduction in this locality.

Lepidium densiflorum Schrad.

Spain, Barcelona: Barcelona, Zona Franca, Passeig de la Zona Franca, UTM 31T DF2879, 0-10 m, 9.05.2012, S. Pyke (BC878060).

Within this section of Lepidium L. (sect. Dileptium DC.) there are several closely related species amply distributed worldwide, particularly in warm-temperate regions. The large population of several thousand plants observed in 2012 in a re-sown road verge of the most southerly district of the city of Barcelona corresponds to L. densiflorum, a North American species which, over some parts of its range, comes into contact with L. virginicum L., resulting in populations with intermediate characteristics which prove difficult to classify. Both these species are naturalised, or appear as sporadic casuals, in Europe. Also recorded from Europe, though less frequently, are L. africanum (Burm. f.) DC., from climatically suited regions of the African continent, and L. bonariense L. from the Cono Sur of South America. The native European L. ruderale L. is present in the northern half of the Iberian Peninsula, and the alien L. virginicum and L. bonariense have also been recorded from various localities. However, I have been unable to find records or samples of L. densiflorum, though the possibility of misidentifications among these critical taxa cannot be ruled out. Although regional and national floras may help, it is advisable for the collector of plants belonging to this section to consult the second edition of Flora Europaea (Akeroyd & Rich, 1993), as this treatment provides a key to both native and alien species, along with brief but accurate descriptions.

Soliva sessilis Ruiz & Pav.

This low-growing, more or less prostrate annual of the composite tribe Anthemideae, originating in the Cono Sur of South America, is today naturalized in North America, Europe, Oceania and elsewhere, and is apparently best considered taxonomically sensu lato at present. The populations observed show cypselas with a wing outline conforming to the original description of S. pterosperma, but studies (for instance Lovell et al., 1986) have shown this character to be not entirely reliable.
This could well be the first record of this plant from Catalonia. It is present in various other coastal localities in the peninsula, including the Basque Country (Aizpuru et al., 2007), Huelva (Sánchez Gullón & Verloove, 2009), NW Spain and Portugal (Tutin, 1976).

Axonopus compressus (Sw.) P. Beauv.

This grass was found growing on a slope planted with ivy (Hedera L. cultivars). Although a tropical species, native to the New World tropical and subtropical regions, it seems to tolerate fairly cold, damp conditions, although it must be stressed that most winters have been mild in the last decade, thus favouring the survival of warmer-climate plants. This stated, the very cold spell in early 2012, which could have eliminated the population, failed to do so, and at the time of writing the plants are visible again and appear to be well established. Some members of the genus are similar to Digitaria Haller, though apparently more closely related to Paspalum L. The species in question is closely related to A. fissifolius (Raddi) Kuhlm. (A. affinis Chase), a grass naturalized in the Minho region (river Cávado) in the NW Iberian Peninsula, and recently detected in the province of Huelva in SW Spain (Valdés et al., 2011). In fact, the two taxa are hard to separate until well studied. The most useful distinction appears to be the ratio of fertile floret length to total spikelet length, best observed when the grain is maturing (details in Zuloaga, 2003, and Giraldo-Cañas, 2008). The leaves are generally broader, with margins normally ciliate, in A. compressus, and this species has slightly longer, more pointed spikelets than those found in A. fissifolius. Both species develop long stolons which root at the nodes, thus increasing their chances of survival. These stolons are characterized by their short leaves and often arching internodes. Growth is fast once temperatures over 21 °C are reached.

Dactyloctenium aegyptium (L.) Willd.

This Old World tropical and warm-temperate grass has been recorded from various parts of Southern Europe, including the Iberian Peninsula (Aragoneses et al., 2011). This population, likely to persist in its observed locality if the present climate trend continues, constitutes what is believed to be the second record from Catalonia, the first being from the nearby locality of Gavà (Verloove & Sánchez Gullón, 2008). It can be recognised by its digitate inflorescence of (2)4-8 racemes, each ending in a short bare section of the rachis. The seed is rugose-tuberculate, with more or less horizontal furrows, a character which, together with Eleusine Gaertn., separates this genus from other tropical and subtropical grasses. Apparently this grass normally behaves as an annual, and the plants observed are caespitose annuals; however, most of the inflorescences on the collected material are made up of only two arms. In fact, the species is reportedly very variable, and stoloniferous plants are also known to occur. Other species in the genus include perennials like D. australe Steud., which has a strongly vegetative behaviour and produces long stolons, and supposedly differs also in its inflorescence having fewer racemes. This more southerly African species is used as a lawn grass in many countries and, as a consequence, is now naturalized in Australia and other parts.

Megathyrsus maximus (Jacq.) B. K. Simon & S. W. L. Jacobs

A grass earlier detected in Cambrils (Tarragona), growing in an area later affected by urban development (Verloove, 2005), as Urochloa maxima (Jacq.) R. D. Webster, and later found growing in great quantity along the route of the motorway AP7 in the provinces of Castelló and Valencia (Verloove, 2006).
The present record of this African grass is from the province of Barcelona, close to the city of Barcelona in the locality of Sant Feliu de Llobregat, where a variety responding closely to var. coloratus (C. T. White) B. K. Simon & S. W. L. Jacobs grows on the banks of the A2 shortly after this road separates from the motorway on leaving the lower Llobregat area. The genus name, taken from the earlier subgeneric rank (Simon & Jacobs, 2003), attempts to resolve the difficulties involved in transferring this grass, along with M. infestus (Peters) B. K. Simon & S. W. L. Jacobs, to the genus Urochloa P. Beauv. The rugose lemma and palea of the upper (fertile) floret are the chief characters these two species present which help distinguish Megathyrsus (Pilg.) B. K. Simon & S. W. L. Jacobs from Panicum L. as presently defined. According to White (1938), var. coloratus is distinguished by its hairiness, especially on the leaf sheaths and in the ligular zone, as well as by its dark purple mature spikelets, and is supposed to be a robust grass (although the literature states that the habit is exceedingly variable in M. maximus s. l.). The Sant Feliu population possesses these characteristics, although the plants observed are not especially robust. Verloove (2005, 2006) mentioned the Cambrils population without indicating the variety, and included those further south in var. pubiglumis (K. Schum.) B. K. Simon & S. W. L. Jacobs. There may not be much support for these varieties in a taxon as variable and widely dispersed as M. maximus.

Conclusions

In the present age the migration of plants, in many cases as a consequence of human activity, is a reality that cannot be effectively impeded. However, the more environmentally aggressive species need to be identified and, where possible, appropriate control methods employed. Of the plants mentioned above, perhaps only Megathyrsus maximus and Soliva sessilis could be considered a potential nuisance. The exotic grasses are at present at their climatic limit, but if the north Mediterranean coast becomes gradually more subtropical, this type of plant will become more established in the region. As regards the degree of naturalization, the following can be considered more or less naturalized: Aristolochia sempervirens, Soliva sessilis and Megathyrsus maximus (the latter more so further south). Dactyloctenium aegyptium self-sows and reappears every year, though in very small quantity, and in the cited locality seems to be in direct competition with Eleusine indica. The other records need to be monitored; they can at present be considered casuals, though they could become more firmly established given time.
WMT 2016 Multimodal Translation System Description based on Bidirectional Recurrent Neural Networks with Double-Embeddings

Bidirectional Recurrent Neural Networks (BiRNNs) have shown outstanding results on sequence-to-sequence learning tasks. This architecture becomes especially interesting for the multimodal machine translation task, since BiRNNs can deal with images and text. In most translation systems, the same word embedding is fed to both BiRNN units. In this paper, we present several experiments to enhance a baseline sequence-to-sequence system (Elliott et al., 2015), for example by using double embeddings. These embeddings are trained on the forward and backward directions of the input sequence. Our system is trained, validated and tested on the Multi30K dataset (Elliott et al., 2016) in the context of the WMT 2016 Multimodal Translation Task. The obtained results show that the double-embedding approach performs significantly better than the traditional single-embedding one.

Introduction

Sequence-to-sequence learning is now a common approach to translation problems (Sutskever et al., 2014). The basic idea consists in mapping the input sentence into a vector of fixed dimensionality with a Recurrent Neural Network (RNN) and then doing the reverse step to map the vector to the target sequence. From this new perspective, multimodal translation (Elliott et al., 2015) has become a feasible task. In particular, we are referring to the WMT 2016 multimodal task, which consists in translating English sentences into German, given the English sentence itself and the image that it describes. This paper describes our participation in this task using a translation scheme based on Bidirectional RNNs (BiRNNs), which allows information from image and text to be combined. We take as baseline the system from (Elliott et al., 2015) and focus on experimenting with the word embedding scheme and encoding techniques. The rest of the paper is organised as follows. Section 2 briefly describes related work on image captioning and machine translation. Section 3 gives details about the architecture of the multimodal translation system. Section 4 reports details on the experimental framework, including the parameters of our model and the results obtained. Finally, Section 5 concludes and comments on further work.

Related work

Image captioning has gained interest in the community, and deep learning has been applied in this area. The two most common caption-related problems are caption generation and caption translation (Elliott et al., 2015). Similarly, machine translation approaches based on neural networks (Sutskever et al., 2014) are competing with standard phrase-based systems (Koehn et al., 2003). Neural machine translation uses an encoder-decoder structure. The implementation of an attention-based mechanism (Bahdanau et al., 2015) has allowed state-of-the-art results to be achieved. The community is actively investigating this approach, and there have been enhancements related to addressing unknown words (Luong et al., 2015), integrating language modeling (Gülçehre et al., 2015), using character information in addition to words (Costa-jussà and Fonollosa, 2016), or even combining different languages (Firat et al., 2016), among others.

System description

This section describes the main architectures that have been tested to build the final system.

Baseline approach

The baseline system is an RNN model over word sequences (Elliott et al., 2015), which can use visual and linguistic modalities.
The core model is an RNN over word sequences, trained to predict the next word in the sequence given the sequence so far. The input sequence is encoded as 1-of-K vectors, which are embedded into a high-dimensional space. Then a unidirectional RNN is used. Finally, in the output layer, the softmax function is used to predict the next word. This model is extended to a multimodal language model, where sequence generation is conditioned on image features in addition to the previously seen words. The translation model simply adds features from the source language model, following work from (Sutskever et al., 2014), calling the source language model the encoder and the target language model the decoder.

Sequence-to-sequence approach and enhancements

Inspired by the architecture presented in (Sutskever et al., 2014), we train a system based on the many-to-many encoder-decoder architecture. It accepts a sequence x_1, ..., x_N as input and returns a sequence y_1, ..., y_N, where N is the maximum sequence length allowed. The architectures that we have tested start with a unidirectional encoder-decoder; we then use a bidirectional encoder-decoder, a bidirectional encoder-decoder with double embeddings, and a final architecture that accepts a combination of input text and image. See Figure 1.

Architecture (A): The model receives as input the 1-of-K encodings of the source sequence x_1...x_n; the word embedding is then computed, giving a new representation E(x_1)...E(x_n). This new sequence is processed by an RNN L, producing the vectors L_1...L_n. These vectors are processed by another RNN D, producing the sequence D_1...D_n, which is processed by a conventional neural network to obtain the target vectors, normalised using softmax.

Architecture (B): The main difference is that we use BiRNNs, processing the input sentence forward and backward. The BiRNN is implemented with LSTMs (Long Short-Term Memories) for better handling of long-term dependencies (Hochreiter and Schmidhuber, 1997; Chung et al., 2014). The BiRNN is represented by the unit L, in this case one in each direction, generating two vectors Lf_i and Lb_i corresponding to each input x_i.

Architecture (C): In addition to using BiRNNs, each input encoding is processed by two different feed-forward neural networks, E_f and E_b, generating two sequences E_f(x_1)...E_f(x_n) and E_b(x_1)...E_b(x_n). At each timestep, the corresponding pair of vectors is fed to the BiRNN units Lf and Lb.

Architecture (D): Finally, the last architecture introduces an image (see Figure 3.2). This is the main advantage of using a machine translation system based on neural networks: we can use multimodal inputs, in this case image and text. The model here has two inputs: the input text sequence x_1...x_n and the image vector, which is the output of intermediate layers of a pretrained convolutional neural network (Simonyan and Zisserman, 2014).

Data

The system is developed, trained and tested with the Multi30K dataset provided by the WMT organization. In our experiments, all characters are converted to lower case. The chosen vocabulary consists of all the training source words and all the training target words that appear more than once. This choice is made to minimise the number of unknown tokens in the source sentences and to avoid an excessive model size and training time.

Model training

Each source sentence is encoded as an N × V matrix M, where each row represents a 1-of-K encoding of a word over a source vocabulary with V words.
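The authors' own implementation is linked from the paper's GitHub footnote; purely as an illustration of the double-embedding idea of Architecture (C), here is a minimal PyTorch sketch with two independent embedding tables feeding the forward and backward LSTMs. The framework choice, class name, and dimensions are all our assumptions, not the paper's:

```python
import torch
import torch.nn as nn

class DoubleEmbeddingBiEncoder(nn.Module):
    """Sketch of Architecture (C): separate embeddings E_f and E_b for the
    forward and backward LSTMs, rather than one shared embedding table."""

    def __init__(self, vocab_size, emb_dim=256, hidden_dim=256):
        super().__init__()
        self.emb_f = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.emb_b = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm_f = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.lstm_b = nn.LSTM(emb_dim, hidden_dim, batch_first=True)

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer word ids
        fwd_in = self.emb_f(tokens)                        # E_f(x_1)..E_f(x_n)
        bwd_in = self.emb_b(torch.flip(tokens, dims=[1]))  # E_b on reversed input
        hf, _ = self.lstm_f(fwd_in)                        # Lf_1..Lf_n
        hb, _ = self.lstm_b(bwd_in)
        hb = torch.flip(hb, dims=[1])                      # re-align Lb_i with x_i
        # Concatenated per-timestep states, to be consumed by the decoder D.
        return torch.cat([hf, hb], dim=-1)

enc = DoubleEmbeddingBiEncoder(vocab_size=10000)
out = enc(torch.randint(1, 10000, (2, 12)))
print(out.shape)  # torch.Size([2, 12, 512])
```

The design point is simply that the forward and backward directions no longer share parameters at the embedding level, so each direction can specialise its word representations.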
An unknown word is replaced by a special <U> token, and an <E> token is appended at the end of the sequence. If the sequence length (including <E>) is less than N, the remaining rows are zeros; if the sequence is too long, it is truncated to fit the input size restriction. During the training phase, target sentences also have a <B> token before the first word. For a given example, the generated prediction is considered to be all the words generated between the <B> and <E> tokens. Unknown tokens are replaced by the word with the second-highest probability. Training is performed on batches of size 10000 and on mini-batches of size 128. The training objective is categorical cross-entropy and the optimiser used is Adam (Kingma and Ba, 2014). Results are validated at each epoch on the validation split of the dataset using the BLEU metric (Papineni et al., 2002), along with model perplexity. BLEU scores during validation are also used as an early-stopping criterion, triggered when the maximum score so far is not surpassed in the following 10 epochs. In order to evaluate the performance of our system, the obtained results are compared against a single-embedding system trained under the same conditions and parameters. The BLEU score monitoring can be observed in Figure 3, and the chosen parameter set is summarised in Table 1.

Results

Table 2 shows the BLEU and METEOR (Lavie and Denkowski, 2009) results for the main architectures described in Section 3 on the official test set of the WMT 2016 Multimodal Translation Task. We see that using BiRNNs improves over RNNs, and double embeddings improve over single embeddings. Finally, adding the image information does not improve results. Therefore, the best architecture, (C), is the one that participated in the WMT 2016 Multimodal Translation Task. Official results ranked our system 14th out of 16. We prioritised participating with a purely multimodal, extensible architecture; however, we know the ranking would have improved with a simple technique such as rescoring our system with a standard Moses system (Koehn et al., 2007).

The best architecture, (C), compared to using one embedding, is capable of solving problems like unknown words or choosing the appropriate word. Table 3 shows an example of the word fixation problem:

Source: a man sleeping in a green room on a couch
Generated: ein mann schläft in einem grünen grünen auf einem sofa
Reference: ein mann schläft in einem grünen raum auf einem sofa

Table 3: An example that shows the word fixation problem.

However, our generated translations often have many repeated words or end prematurely, mainly due to the differences in length and alignment between source and target sentences and the lack of feedback from previous timesteps. In any case, our system is still capable of generating readable translations and of replacing unknown words with similar ones. Also, our system's performance drastically decreases on long sentences, or on sentences where the lengths of the source and target differ too much.

Conclusions

Our system is not competitive with standard phrase-based systems (Koehn et al., 2003) or the attention-based neural machine translation system (Bahdanau et al., 2015), as shown by our ranking in the official evaluation (14th position out of 16). However, the architecture of our system makes it feasible to introduce image information, and with a larger corpus we might obtain competitive results. All software is freely available on GitHub¹.
The main contribution of this paper is to show that double embeddings (trained on the forward and backward input sequences) provide a significant improvement over single embeddings. As further work, we are considering replacing the word-based encoder with a character-based embedding (Costa-jussà and Fonollosa, 2016), or introducing attention-based decoders (Bahdanau et al., 2014). Due to the system's modularity, it is also possible to reuse intermediate outputs to train additional models. For example, the BiRNN intermediate outputs can be extracted and fed to another decoder model, thus reducing training time.
Gene module regulation in dilated cardiomyopathy and the role of Na/K-ATPase

Dilated cardiomyopathy (DCM) is a major cause of cardiac death and heart transplantation. It has long been known that black people have a higher incidence of heart failure and related diseases compared to white people. To identify the relationship between gene expression and cardiac function in DCM patients, we performed pathway analysis and weighted gene co-expression network analysis (WGCNA) using RNA-sequencing data (GSE141910) from the NCBI Gene Expression Omnibus (GEO) database and identified several gene modules that were significantly associated with the left ventricle ejection fraction (LVEF) and the DCM phenotype. Genes included in these modules are enriched in three major categories of signaling pathways: fibrosis-related, small-molecule-transport-related, and immune-response-related. Through consensus analysis, we found that the gene modules associated with LVEF in African Americans are almost identical to those in Caucasians, suggesting that the two groups may have more common than disparate genetic regulation in the etiology of DCM. In addition to the identified modules, we found that the gene expression level of Na/K-ATPase, an important membrane ion transporter, has a strong correlation with the LVEF. These clinical results are consistent with our previous findings and suggest the clinical significance of Na/K-ATPase regulation in DCM.

Introduction

Heart failure affects more than 40 million people globally, with high rates of morbidity and mortality [1,2]. Cardiac hypertrophy is common in the early stage of heart failure progression, whereas dilated cardiomyopathy (DCM) is the leading cause of heart transplantation [3]. Clinical data suggest that about 50% of patients with cardiac hypertrophy eventually become decompensated and develop heart failure, while the other 50% develop diastolic dysfunction [4-6]. Initial cardiac hypertrophy is a mechanism that compensates for reduced cardiac output, but subsequent cardiac myocyte death, reduced contractility, and massive tissue fibrosis compromise cardiac function and lead to heart failure [4,5,7,8]. It is also known that black people have the highest risk of heart failure-related death [9]. The risk of developing DCM in black people is about 3-fold that of whites, and the death rate is also higher in black patients, which cannot be explained by socioeconomic status [10]. The overall hospitalization rate for heart failure has improved, but the disparity between black and white people has not decreased [11]. Population-based gene sequencing has identified some variants in black patient cohorts that may be associated with racial differences between blacks and non-Hispanic whites [12,13]. However, it is not well understood how these variants increase the risk of heart failure in blacks. Weighted gene co-expression network analysis (WGCNA) is a widely used high-throughput data analysis tool for studying biological networks using large cohorts of patient data [14,15]. The WGCNA R package allows researchers to define gene modules, study the relationships between co-expressed gene modules and clinical phenotypes, and compare gene module changes between different groups of patients. Clinical data and animal studies have shown that a decrease in Na/K-ATPase is an important risk factor for cardiac decompensation and dysfunction [16-23].
A decrease in Na/K-ATPase has also been a significant phenomenon in aging [24-26], diabetes with hypertension [27-29], and neurological disorders [30,31]. Na/K-ATPase is an important membrane protein enriched in muscle and kidney tissues. A functional Na/K-ATPase is composed of alpha and beta subunits. There are four alpha isoforms (α1, α2, α3, and α4) and at least three beta isoforms (β1, β2, and β3), expressed in a tissue-specific pattern. Human hearts express three alpha isoforms (α1, α2, α3) of Na/K-ATPase [32-34], but the specific role of each isoform is not fully understood. Na/K-ATPase has been extensively studied for its ion-transporting function since its discovery in the 1950s; it was not until the early 2000s that the signaling function of Na/K-ATPase started to be appreciated [35-38]. More recently, we demonstrated that reduction of the α1 isoform of Na/K-ATPase causes cardiac cell apoptosis in response to ligand treatment in animal models of uremic cardiomyopathy [19,20]. In the current work, we used WGCNA to analyze RNA-sequencing data from DCM patients and compared the DCM-related gene modules and signaling pathways between African Americans and Caucasian Americans.

Access to patient data

The gene expression FPKM (fragments per kilobase of exon per million reads) data of RNA-sequencing were downloaded from the publicly available NCBI GEO dataset GSE141910. The clinical phenotype data and gene expression count data of this cohort were obtained from an online source (GitHub: mpmorley/MAGNet), kindly provided by Dr. Michael Morley from the University of Pennsylvania.

DESeq2 normalization and pathway analysis

The count values of RNA-sequencing data from heart left ventricle tissue were analyzed for gene expression changes using the DESeq2 R package [39] with RStudio [40]. The study cohort included patients with dilated cardiomyopathy (DCM), hypertrophic cardiomyopathy (HCM), peripartum cardiomyopathy (PPCM), and heart transplant donors. Since the numbers of HCM and PPCM patients were relatively small and only DCM patients received heart transplants in this cohort, we used data only from the DCM patients (n = 166) and their donors (n = 166) for this analysis. The differentially expressed genes (DEGs) were defined as log2FoldChange >= 1 or <= -1, and p-value <= 0.01. Gene names of DEGs were then uploaded to the Enrichr website (https://maayanlab.cloud/Enrichr/) for gene ontology (GO) analysis. The KEGG pathway analysis was performed on the WebGestalt website (www.webgestalt.org) by uploading the whole gene set and the log2FoldChange values derived from the DESeq2 analysis.

Weighted gene co-expression network analysis (WGCNA)

The FPKM values of RNA-sequencing data from the GSE141910 dataset were used for gene co-expression network analysis with the WGCNA R package [15]. Gene expression data were checked for extremely low-expressing genes and missing values using the goodSamplesGenes function in the WGCNA package. The network construction, module detection, topological analysis, and visualization were performed following the online WGCNA tutorial (https://horvath.genetics.ucla.edu/html/CoexpressionNetwork/Rpackages/WGCNA/Tutorials/). Specifically, a weighted gene network was created based on the adjacency matrix, which was calculated from the gene co-expression similarity as described in the original publication of the WGCNA R package [15].
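WGCNA itself is an R package; purely to make the computation just described concrete, here is a minimal numpy sketch (in Python rather than R, and on toy random data) of the unsigned, soft-thresholded adjacency matrix with the same power β = 10 used in the paper:

```python
import numpy as np

def unsigned_adjacency(expr, beta=10):
    """expr: (samples, genes) expression matrix (e.g. log-transformed FPKM).
    Returns the unsigned WGCNA-style adjacency a_ij = |cor(x_i, x_j)|**beta."""
    corr = np.corrcoef(expr, rowvar=False)  # gene-gene Pearson correlation
    np.fill_diagonal(corr, 0.0)             # ignore self-connections
    return np.abs(corr) ** beta

def connectivity(adj):
    """Per-gene connectivity k_i = sum_j a_ij; the distribution of k is what
    the soft-threshold (scale-free topology) criterion is checked against."""
    return adj.sum(axis=1)

rng = np.random.default_rng(0)
expr = rng.normal(size=(100, 500))          # toy data: 100 samples x 500 genes
adj = unsigned_adjacency(expr, beta=10)
print(adj.shape, connectivity(adj).mean())
```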
The one-step automatic network construction and module detection method was followed from the online tutorial. The soft threshold power was set at 10, the minimum gene module size at 30, and the merge cut height at 0.25. Detected gene modules were assigned different colors, and the gene names from each module were extracted to a spreadsheet for pathway enrichment analysis. To obtain the relationship of each gene module to the clinical phenotypes, the correlation of each module with each phenotype, as well as the corresponding p-values, were calculated using the moduleTraitCor and moduleTraitPvalue functions in the WGCNA R package. The relationships are presented as a heatmap of the correlation coefficients. The continuous and categorical variables from the available clinical phenotypes were used for this relationship analysis. In addition, the Na/K-ATPase gene expression data were also used as a phenotype in this analysis.

Consensus analysis between African Americans and Caucasians

The FPKM data were sub-grouped into African Americans (AA) and Caucasians (CA). Topological characteristics in the two groups were compared using the aforementioned WGCNA package [14], following its step-by-step instructions. The one-step automatic network construction method of the WGCNA R package was used for network construction, gene module detection, and consensus analysis. A soft threshold power of 10 was applied, with a minimum gene module size of 30 and a merge cut height of 0.25. The consensus gene dendrogram and the gene module correlation and preservation were calculated and presented as indicators of similarity between the AA and CA groups.

Transcription factor (TF) enrichment analysis using the ChEA3 online platform

ChEA3 TF enrichment analysis provides a platform (http://maayanlab.cloud/chea3/) that allows users to input a gene list of interest and identify potential TFs that may coordinate the regulation of that list [41]. TFs are prioritized based on the overlap between user-input gene sets and annotated sets of TF targets stored within the ChEA3 database. To perform the TF enrichment analysis, we extracted the gene list from the magenta gene module and input it to the ChEA3 platform. The TFs that regulate Na/K-ATPase gene expression were then compared between healthy donors and patients.

Network presentation using the Cytoscape program

The top module corresponding to each phenotype was identified, and the genes included in these modules were uploaded to the Cytoscape program (version 3.8.1) and queried against the STRING database, as described in our previous publication [42]. A network indicating potential protein-protein interactions was created and presented as a network map. For the Na/K-ATPase-related pathway analysis, the log2FoldChange data derived from the DESeq2 analysis were uploaded to the Cytoscape program and mapped onto the Na/K-ATPase/Src WikiPathways entry (WP5051).

Statistics

The RNA-sequencing analysis was performed using the statistical methods we previously reported [39,42]. Gene expression changes are presented as volcano plots, with -log10(p-value) on the y-axis and log2FoldChange on the x-axis. A p-value < 0.01 was used as the threshold to select the differentially expressed genes (DEGs), which were then used for gene ontology (GO) analysis. For the KEGG pathway analysis, the entire gene dataset was used so that the GSEA algorithm could apply the log2FoldChange data to determine whether gene sets were coordinately over- or under-expressed [43].
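As a minimal illustration of the DEG threshold applied above (|log2FoldChange| >= 1 and p < 0.01), here is a pandas sketch; the table values and gene symbols are invented for the example, not taken from the GSE141910 analysis:

```python
import pandas as pd

# Hypothetical DESeq2 results; column names follow DESeq2 output conventions.
res = pd.DataFrame({
    "gene": ["NPPA", "ATP1A1", "COL1A1", "MYH6"],
    "log2FoldChange": [2.3, -1.4, 1.8, -0.4],
    "pvalue": [1e-8, 4e-3, 2e-6, 0.2],
})

# The thresholds used in the paper: |log2FC| >= 1 and p < 0.01.
degs = res[(res["log2FoldChange"].abs() >= 1) & (res["pvalue"] < 0.01)]
print(degs["gene"].tolist())  # ['NPPA', 'ATP1A1', 'COL1A1']
```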
Characteristics of patients and donors

The published RNA-sequencing dataset (GSE141910) is a cohort of heart failure patients who received heart transplants and their donors. RNA sequencing was obtained from 366 samples of human heart left ventricles. Among these samples, 166 patients were diagnosed with dilated cardiomyopathy (DCM) and received heart transplants, while another 166 samples were from non-failing donors. The dataset also contained RNA-sequencing data from 28 hypertrophic cardiomyopathy (HCM) and 6 peripartum cardiomyopathy (PPCM) patients. In the cohort, 172 were females, 124 were African Americans, and 242 were Caucasian Americans. Among African Americans, 77 had DCM, 44 were donors, 1 had HCM, and 2 had PPCM. Among Caucasians, 89 had DCM, 122 were donors, 27 had HCM, and 4 had PPCM. The basic characteristics of the cohort are summarized in Table 1.

Gene expression and pathway changes in DCM patients receiving heart transplants compared to non-failing donors

We performed the gene expression analysis using DESeq2 based on the published RNA-sequencing data obtained from the GSE141910 cohort. The log2FoldChange and statistical significance between DCM patients (n = 166) and their donors (n = 166) derived from DESeq2 are presented as a volcano plot in Fig 1, with the log2FoldChange on the x-axis and the -log10 p-value on the y-axis. To identify the significantly changed genes (DEGs), we applied the thresholds described in the Methods section (log2FoldChange >= 1 or <= -1 and p-value < 0.01). A total of 1469 genes were significantly changed in DCM patients versus the donors. These DEGs were then uploaded to the Enrichr webpage for gene ontology (GO) analysis. The top 10 overrepresented pathways, ranked by p-value, are shown in Fig 2. Both the GO Biological Process and Cellular Component analyses showed that extracellular matrix-related genes were overrepresented in the DCM patients, while the GO Molecular Function analysis showed that receptor-ligand activity-related and immune activity-related pathways were overrepresented. We also performed a gene set enrichment analysis (GSEA) using the whole set of log2FoldChange data derived from the DESeq2 analysis. As shown in Fig 3, pathways related to type I diabetes mellitus, the immune system, and cell adhesion molecules were significantly upregulated, while several metabolic pathways were downregulated in DCM patients compared to the donors. However, the downregulated pathways did not reach statistical significance (FDR > 0.5).

Fig 1. RNA sequencing data from left ventricle heart tissue of GEO dataset GSE141910 were used for fold-change analysis with the DESeq2 R package. The volcano plot was created using EnhancedVolcano. The x-axis shows the log2FoldChange and the y-axis shows the -log10 of the p-value. Red indicates genes that were significantly changed (log2FoldChange >= 1 or <= -1, and p < 0.01). Blue indicates genes with at least a two-fold change but p >= 0.01. Black indicates genes with less than a two-fold change and p >= 0.01.
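As a minimal sketch of the volcano plot described in the legend above (res is assumed to be the DESeq2 results object from the earlier step):

library(EnhancedVolcano)

# Same cutoffs as in Fig 1: p < 0.01 and |log2FoldChange| >= 1 (two-fold).
EnhancedVolcano(as.data.frame(res),
                lab      = rownames(res),
                x        = "log2FoldChange",
                y        = "pvalue",
                pCutoff  = 0.01,
                FCcutoff = 1)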
Gene module regulation in dilated cardiomyopathy

To study the co-expressed gene networks that may be associated with the DCM phenotype, we performed a weighted gene co-expression network analysis (WGCNA) using the RNA-sequencing data of heart samples from the GSE141910 dataset. We first performed sample clustering and removed 3 outliers using a cutoff of 220 in the clustering tree. A soft threshold power of 10 and a minimum module size of 30 were applied for gene module detection. The module-trait relationship was analyzed with the WGCNA package. We selected only continuous variables in the phenotype data for this analysis. The LV mass data were excluded because of a substantial number of missing values (207 out of 332). In addition to the clinical phenotypes, we also included the gene expression levels of the Na/K-ATPase alpha 1, alpha 2, and alpha 3 isoforms, and the Na/K-ATPase alpha 1 antisense gene (ATP1A1-AS1), as phenotypes for this relationship analysis. As shown in Fig 4, several gene modules were significantly associated with DCM and left ventricle ejection fraction (LVEF). Since DCM patients had lower LVEF, the correlation coefficients for DCM and LVEF were in opposite directions. The associations with HCM, age, weight, height, and heart weight were generally moderate. Interestingly, we found that several gene modules were also significantly associated with Na/K-ATPase gene expression. Specifically, gene modules such as the greenyellow and black modules were significantly related to both LVEF and Na/K-ATPase alpha 1 gene expression. To analyze the relationships among the genes included in these gene modules, we selected the top gene module corresponding to each phenotype and uploaded the genes from these top modules to the Cytoscape program. Using the STRING database as a targeted network, we created a protein-protein interaction network map, shown in the left panel of Fig 4. The different colors in the network indicate the genes included in the modules for each phenotype. The network analysis suggested that these genes are highly interactive with each other and may be intrinsically related in the regulation of their expression. To further understand how these gene modules relate to the DCM phenotype, we uploaded the top 5 gene modules (black, greenyellow, darkred, grey60, and grey) with strong associations to DCM and LVEF and performed pathway enrichment analysis using the web-based Enrichr platform. Gene names from each module were used as input, with the BioPlanet 2019 pathway database used for the analysis. The top 10 enriched pathways (ranked by p-value) for each gene module are summarized in Table 2. These results showed pathways similar to those from the GO and KEGG analyses, such as extracellular matrix (ECM)-related and immune response-related pathways, suggesting that fibrosis and inflammation may be common in heart tissue from DCM patients. In addition, they revealed that gene modules such as the greenyellow module, which contains many small-molecule transporters, were also strongly associated with the DCM phenotype and LVEF.

Comparison between African Americans and Caucasian Americans

To evaluate race-related effects on gene expression in this patient cohort, we separated the dataset into two groups: African Americans (AA) and Caucasian Americans (CA). As shown in Fig 5A, gene dendrograms were obtained by average-linkage hierarchical clustering for the AA and CA groups. Gene expression similarity was determined by a pair-wise weighted correlation metric and clustered using a topological overlap metric. Gene modules are colored at the bottom. The comparison of eigengene and group module characteristics showed an overall similarity of 0.94 (Fig 5B).
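As a minimal sketch of the AA/CA consensus analysis described above (datExprAA and datExprCA, samples-by-genes expression matrices for the two groups, are assumed placeholders):

library(WGCNA)

# Consensus module detection across the two groups, with the same parameters
# as the single-group analysis (power 10, minModuleSize 30, mergeCutHeight 0.25).
multiExpr <- list(AA = list(data = datExprAA),
                  CA = list(data = datExprCA))
checkSets(multiExpr)   # sanity check on set dimensions

consNet <- blockwiseConsensusModules(multiExpr,
                                     power          = 10,
                                     minModuleSize  = 30,
                                     mergeCutHeight = 0.25)

# Consensus dendrogram with module colors, as in Fig 5A
plotDendroAndColors(consNet$dendrograms[[1]], consNet$colors,
                    "Consensus modules", dendroLabels = FALSE)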
To further study signaling pathway changes in DCM patients from the AA and CA groups, we compared the overall gene expression changes and performed a GSEA analysis using the web-based gene set analysis toolkit WebGestalt (www.webgestalt.org). The log2FoldChange values in DCM patients compared to donor controls were derived from the count data of each group using the DESeq2 R package as we previously described [44]. The log2FoldChange values of genes in the AA and CA groups were used for the KEGG pathway analysis. As shown in Fig 6, the top 10 upregulated signaling pathways in the AA and CA groups overlapped substantially, though the p-values differed modestly. The 10 downregulated pathways in both groups were mostly not significant (FDR > 0.05), except for the pathogenic E. coli infection and the complement and coagulation cascades pathways in the AA group. These data suggest that there were no significant differences between the AA and CA groups in terms of gene expression and pathway changes during DCM.

Na/K-ATPase reduction in patients with dilated cardiomyopathy

Our previous studies in animals showed that Na/K-ATPase α1 reduction caused cardiac cell apoptosis and tissue fibrosis and was closely related to cardiac dysfunction [19,20]. To evaluate the role of Na/K-ATPase in human cardiac dysfunction, we analyzed Na/K-ATPase gene expression and LVEF data in the above dataset of heart failure patients and their donors. As shown in Fig 7, Na/K-ATPase α1 expression was significantly correlated with LVEF in this cohort, while α2 showed a relatively weaker correlation and α3 was not significantly correlated with LVEF. We then compared the Na/K-ATPase expression level in DCM patients (n = 166) to that in their non-failing donors (n = 166); DCM patients had significantly lower expression of Na/K-ATPase (16.53 ± 0.04 in donors versus 15.82 ± 0.04 in DCM patients, p < 0.01). We also compared the Na/K-ATPase expression levels between African Americans and Caucasians and found no significant differences in either the donor or the patient group. To understand how Na/K-ATPase is regulated in heart failure patients, we analyzed potential regulators of the genes in the magenta module using the ChEA3 transcription factor enrichment analysis as described in the Methods section. From this analysis, we found that three transcription factors (TFs) (BHLHE-40, RFX1, and EOMES) can directly regulate Na/K-ATPase α1 gene (ATP1A1) expression. To verify that these TFs indeed contribute to the regulation of the Na/K-ATPase gene and other genes in this module, we compared the expression levels of the three TFs between the heart-transplant DCM patients and their donors in the RNA-sequencing data and found that the EOMES level was increased more than two-fold in DCM patients compared to non-heart-failure donors, whereas BHLHE-40 and RFX1 were not significantly changed. To study how the Na/K-ATPase-related signaling pathway was changed in DCM patients, we uploaded the log2FoldChange data derived from the DESeq2 analysis to the Cytoscape program and searched against the Na/K-ATPase/Src wikipathway (WP5051).
As shown in Fig 8, blue indicates decreased expression of the corresponding gene, while red indicates increased expression.

Discussion

WGCNA is a useful tool that can define gene modules, intramodular hubs, and network nodes, and allows the study of relationships between co-expression modules. A gene module generally refers to a group of genes with similar expression patterns that tend to be functionally related and co-regulated [15,45,46]. Our current work used a large dataset of heart failure patients and identified several gene modules and signaling pathways that are closely related to DCM and reduced LVEF. The genes included in these modules are enriched in small-molecule transport, fibrosis, and metabolic pathways. A notable finding is that no gene module was significantly associated with ethnic group. We performed a gene module consensus analysis, compared the gene expression and pathway changes in African American and Caucasian patients, and found more similarity than disparity between the two groups. The complexity of the genetic factors underlying the persistence of health disparities between blacks and whites has affected treatment recommendations and presents a major challenge for the medical community. Moreover, African Americans have been underrepresented in many clinical trials for heart failure treatment, so the clinical outcomes were less representative of blacks, which in turn has affected the usage of certain drugs in this population [47]. However, our results suggest that the development of reduced LVEF or DCM is associated with similar gene and pathway changes in African Americans and in Caucasians, despite differences in genetic background. This should be considered in strategies for DCM treatment in different ethnic groups, and there may be no reason to limit the use of available drugs in African Americans even though the clinical data were less representative of this population. In this regard, including more African Americans in future clinical trials is critical, given that heart failure prevalence in African Americans is significantly higher than in Caucasians [9]. Among the genes that were changed in heart failure patients compared to non-failing donors, we specifically examined the expression level of Na/K-ATPase, a membrane protein that plays an important role in energy metabolism and membrane potential in muscle cells. It has previously been reported that Na/K-ATPase concentration and activity are reduced in patients with heart failure and that cardiac ejection fraction correlates with the amount of Na/K-ATPase [16,21-23]. Since these studies had smaller patient numbers, we took advantage of the published datasets with a large number of patients in GEO Profiles from NCBI and analyzed Na/K-ATPase gene expression and its relationship with heart failure. Of note, the ~18% reduction in the mRNA level of ATP1A1 was smaller than the change in protein amount (~30%) reported in earlier studies [16,23]. This is plausible, however, considering cumulative effects in the translational process.
The Na/K-ATPase reduction was also observed in other datasets of DCM patients as well as in ischemic heart failure patients (GEO datasets GSE1145 and GSE26887; see https://www.ncbi.nlm.nih.gov/geoprofiles/5123633 and https://www.ncbi.nlm.nih.gov/geoprofiles/87379539). Our previous work has demonstrated that reduction of Na/K-ATPase causes tissue fibrosis and cell apoptosis in different cell types and in mouse models of uremic cardiomyopathy [19,20,48]. Here, we showed that in DCM patients Na/K-ATPase reduction is associated with decreased LVEF, which is consistent with our previous studies on Na/K-ATPase reduction-induced cardiac dysfunction [19,20]. Regulation of Na/K-ATPase involves transcriptional, translational, and post-translational mechanisms [49,50]. We have previously reported that cardiotonic steroids such as ouabain can regulate Na/K-ATPase expression at the translational level [48]. The current study suggests that TFs may also be involved in Na/K-ATPase regulation in DCM patients. TFs are important regulators of gene expression. In human tissue, gene expression is controlled by about 1600 TFs [51]. A single TF can regulate multiple genes, and a single gene can be simultaneously regulated by different TFs, thus forming a regulatory network of TFs and their target genes. Since TFs are usually upstream regulators of gene expression, manipulation of the identified transcription factors could provide therapeutic targets for gene dysregulation in diseases. As noted in the Methods, the ChEA3 platform identifies TFs that may coordinate the regulation of an input gene list [41]. EOMES is a T-box transcription factor that is essential for embryonic cardiac development at the mesoderm stage [52-54]. The increase of EOMES and the decrease of Na/K-ATPase α1 in the DCM patients may suggest a role for this transcription factor in the regulation of DCM-related gene expression.

Limitations

The current study was based on a single published database, and the RNA-sequencing data may have batch effects. Since we used a published dataset for this analysis, some information about the original experimental design, such as inclusion and exclusion criteria and the clinical presentation of these patients before heart transplant, was not available. Future analyses using additional datasets of DCM patients are needed to confirm the current findings. In addition, we noticed that the donor group had a high hypertension rate (102 out of 166). Since hypertension can cause gene expression changes related to cardiac remodeling, the comparison of donors and patients may not fully reflect the difference between heart failure patients and normal healthy individuals. Nevertheless, we believe that our results still reveal important gene expression and signaling pathway changes between dysfunctional and functional hearts.
Large-scale vivid metasurface color printing using advanced 12-in. immersion photolithography

Nanostructures exhibiting optical resonances (so-called nanoantennas) have strong potential for applications in color printing and filtering with sub-wavelength resolution. While small-scale demonstrations of these systems are interesting as a proof-of-concept, their large-scale and volume fabrication requires deeper analysis and further development for industrial adoption. Here, we evaluate the color quality produced by large nanoantenna arrays fabricated on a 12-in. wafer using deep-UV immersion photolithography and dry etching processes. The color reproduction and quality are analyzed in the context of the CIE color diagram, showing that a vivid and vibrant color palette, almost fully covering the sRGB color space, can be obtained with this mass-manufacturing-ready fabrication process. The obtained results thus provide a solid foundation for the potential industrial adoption of this emerging technology and expose the limits and challenges of the process.

High brightness color palette image

The optical microscope image of the color palette in the main text (Figure 2b) was taken with the brightness level set to the maximum possible before the background color starts to deviate from black. While it provides a general overview of color quality, some areas with low reflection remain dark, and it is difficult to estimate their color. Figure S1 is a color palette image taken at higher light source brightness (same magnification ×10 and NA = 0.2), revealing the colors in the bottom left corner. Although the colors become brighter, reflection from the Si3N4 layer becomes observable: the background deviates from the target black and influences the color perception.

Figure S1. Optical microscope image of the color palette taken at high light source brightness.

Oblique incidence and large NA objectives

We performed optical microscope imaging of the color palette to trace the color changes as a function of the objective numerical aperture (NA). Results are shown in Figure S2. The NA = 0.13 (panel a) and NA = 0.2 (panel b) color palettes were taken as single-shot images, while the NA = 0.4 palette (panel c) was stitched from two images to include all colors. Besides the obvious difference in the resolution and sharpness of the images, the colors do not deviate significantly, though they become slightly less vivid and saturated at higher NA. A higher numerical aperture focuses/collects a wider range of incidence angles, so we examined the behavior of the resonances at oblique incidence to explain the difference. Figure S3 shows numerical simulations of reflection spectra at various angles of incidence (β) for p-polarized light (electric field vector parallel to the plane of incidence). Angles of incidence β from 0° to 30° correspond to the NA of the objectives used in this work (NA = 0.13 to 0.5). Panels (a) to (d) in Figure S3 correspond to reflection spectra of the selected designs in the letters "N", "S", "L", and "M" (Figure 3 of the main text).

Figure S2. Optical microscope images of the color palette taken with objectives of different NA: (a) 0.13, magnification ×5; (b) 0.2, magnification ×10; (c) 0.4, magnification ×20.

Figure S3. Numerical simulations of reflection spectra as a function of p-polarized light incident angle for the "N", "S", "L", and "M" nanostructure designs (a, b, c, d, respectively). Insets show structure schematics.
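The correspondence between objective NA and maximum incidence angle quoted above follows from NA = sin(β); a minimal sketch (R used here purely as a calculator):

# Maximum half-angle of incidence collected by an objective in air:
# NA = sin(beta)  =>  beta = asin(NA).
na <- c(0.13, 0.2, 0.4, 0.5)
beta_deg <- asin(na) * 180 / pi
round(beta_deg, 1)   # 7.5, 11.5, 23.6, 30.0 degrees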
Figure S3 provides evidence of a noticeable red shift with increasing incidence angle for the larger Si disks ("S", "L", "M" designs), with the exception of the "N" design, where the behavior is more complex. The other polarization (s-polarization, electric field vector perpendicular to the plane of incidence) does not exhibit the same behavior as the angle of incidence increases: the spectra stay almost unchanged, with a negligible blue shift, and are therefore not shown here. With a higher numerical aperture objective, light is incident over a range of angles, and the measured reflection spectrum is the sum over those angles. The red shift of p-polarization at oblique incidence can be one reason for the red shift of the experimental results relative to the numerically simulated spectra in Figure 3 of the main text, where the simulations were performed only for normal incidence. Oblique incidence of s-polarization may contribute only a slight broadening of the spectra.

Nanostructure shape analysis and tapering

Figure S4 shows the fabrication flow (panel a) together with SEM images of the evolution of the nanostructure shape (panel b) after the following fabrication steps: photolithography mask, etch of the SoC mask and Si, and etch mask removal (final structure); all fabrication details are given in Methods. SEM images are shown from the top and from a side view at a 30° angle relative to the sample surface. For the evaluation we selected the "M" letter design (D = 170 nm and G = 120 nm). SEM images of the photoresist mask expose small irregularities in the target circular mask shape; disk sizes were taken as the average of two orthogonal measurements. After the SoC and Si etch, a straight sidewall is observed, showing the quality of the photoresist pattern transfer into the Si nanostructures. The final image after SoC removal exposes a slight tapering from the Si etch, estimated to be 4.6°, calculated from the average base diameters and the height of the truncated cone. Slight deviations from the circular shape and the tapering contributed to the overall broadening of the experimental spectra relative to the numerical simulations (Figure 3 of the main text).

Numerical simulations of the nanostructure tapering effect on the reflection spectrum are shown in Figure S5. Simulations were performed at normal incidence with tapering angles α up to 10°, almost double the angle estimated from the experimental SEM images. In the simulations, the median diameter and height were fixed to the design values, with the varying angle determining the base diameters. The results demonstrate a slight spectral broadening and a blue shift of the tapered nanostructure resonances relative to the cylindrical shape. We can therefore conclude that tapering and deviations from the circular shape contributed to the overall broadening of the experimental spectra relative to the numerical simulations (Figure 3 of the main text).

Figure S4. (a) Fabrication process flow schematics; (b) SEM images of the nanostructure shape after different fabrication steps: after photolithography development, after SoC and Si etch, and after mask removal (final structure); scale bar is 100 nm.

Figure S5. Numerical simulations of reflection spectra as a function of tapering angle (α) for the "N", "S", "L", and "M" nanostructure designs (a, b, c, d, respectively). Arrows show the direction of the plot shift as α increases. Insets show structure schematics.
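The tapering angle quoted above follows from the truncated-cone geometry; a minimal sketch in which the base and top diameters are illustrative placeholders chosen only to reproduce an angle close to the reported 4.6° (they are not values from the text):

# Tapering angle of a truncated cone: alpha = atan((D_base - D_top) / (2 * h)).
taper_angle_deg <- function(d_base, d_top, height) {
  atan((d_base - d_top) / (2 * height)) * 180 / pi
}
taper_angle_deg(d_base = 180, d_top = 160, height = 124)   # ~4.6 degrees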
Evaluation of antioxidant and antiproliferative activity of Flueggea leucopyrus Willd (katupila)

Background: Flueggea leucopyrus Willd is a shrub grown in many parts of the dry zones of Sri Lanka. The leaves of F. leucopyrus have been used for treating cancer in the traditional system of medicine in Sri Lanka. Hence, this study was performed to analyze the antioxidant and antiproliferative properties of the aqueous extract of the leaves of F. leucopyrus on HEp-2 cells.

Method: The aqueous extract of F. leucopyrus leaves (AEFLL) was freeze-dried. Total phenolic content was assayed using Folin-Ciocalteu reagent. Antioxidant activities of the extract were evaluated using in vitro assays: inhibition of DPPH (1,1-diphenyl-2-picrylhydrazyl) radical scavenging and the 2-deoxy-D-ribose degradation assay. Nitric oxide radical scavenging activity was determined using Griess reagent. The MTT and LDH assays and protein synthesis measurements were used to study antiproliferative and cytotoxic activities against HEp-2 cells after 24-hour exposure. DNA fragmentation and microscopic examination of cells stained with a mixture of ethidium bromide/acridine orange were used to visualize apoptosis in HEp-2 cells treated with the AEFLL.

Results: The total phenolic content of the extract was 22.15 ± 1.65 (w/w)% gallic acid equivalents. The EC50 values were 11.16 ± 0.37, 4.82 ± 1.82, and 23.77 ± 3.16 μg/mL for DPPH radical scavenging, nitric oxide radical scavenging, and the 2-deoxy-D-ribose degradation assay, respectively. The EC50 values with the MTT and LDH assays were 506.8 ± 63.16 and 254.52 ± 42.92 μg/mL, respectively. A dose-dependent decrease in protein synthesis in HEp-2 cells was shown, with an EC50 value of 305.84 ± 12.40 μg/mL. The DNA fragmentation and ethidium bromide/acridine orange assays showed that the AEFLL induces apoptosis in HEp-2 cells. These results were consistent with the morphological changes observed in the cells treated with the AEFLL. The brine shrimp bioassay showed that the AEFLL had no lethality over the concentration range of 50-500 μg/mL.

Conclusions: The aqueous extract of the leaves of F. leucopyrus demonstrated antioxidant activity in vitro. Furthermore, it showed antiproliferative properties and induced apoptosis in HEp-2 cells.

Background

Reactive oxygen species (ROS) and reactive nitrogen species (RNS) are constantly produced in the body and are neutralized and eliminated by many endogenous mechanisms. However, an imbalance in the neutralization of such products leads to oxidative damage of DNA, lipids, and proteins [1]. Thus, overproduction of ROS and RNS contributes to cancer, cardiovascular disease, atherosclerosis, hypertension, ischemia/reperfusion injury, diabetes mellitus, neurodegenerative diseases, autoimmune diseases, rheumatoid arthritis, and ageing [1,2]. Evidence from epidemiological and laboratory studies has demonstrated that some edible plants as a whole, or their nutraceutical ingredients with antioxidant properties such as polyphenols, have substantial protective effects against human carcinogenesis [3,4]. F. leucopyrus Willd (katupila) belongs to the family Phyllanthaceae. The plant is found in many parts of Sri Lanka, particularly in the dry zones, where it grows as a shrub [5]. The leaves of F. leucopyrus have been used in the treatment of cancer, boils, external ulcers, and sores in traditional medicine in Sri Lanka.
Various species of the genus Flueggea are used to treat many diseases, including epilepsy, malaria, jaundice, intestinal worms, edema, heavy menstruation, sterility, poliomyelitis, and aplastic anemia, in many African and Asian countries [6]. More than ten secondary metabolites have been identified in F. leucopyrus [7,8]. One of the major active constituents found in the methanol-water (80:20) extract of F. leucopyrus leaves is bergenin, which has shown antioxidant and immunomodulatory activities in vitro [8]. Furthermore, bergenin isolated from F. microcarpa leaves has shown lipid-lowering activity in hyperlipidaemic rats as well as antifungal activity against plant-pathogenic fungi [9,10]. Methanol extract of F. virosa leaves has antiplasmodial activity against Plasmodium falciparum, and bergenin isolated from the aerial parts of this species has shown anti-arrhythmic effects in rats [11,12]. Monkodkaew et al. [13] reported that betulinic acid has a higher antiproliferative activity against K562 cells than friedelin, epifriedelanol, and stigmasterol, all present in the leaves and twigs of F. virosa. Furthermore, two dimeric indolizidine alkaloids extracted from the roots of F. virosa have shown growth-inhibitory activity against MCF-7 and MDA-MB-231 human breast cancer cell lines and the P-388 cell line [14,15]. In Sri Lanka, the leaves of F. leucopyrus are eaten as a salad or 'porridge' in villages, and the fruits are consumed in India and Africa [16,17]. Recently, F. leucopyrus has become very popular among people in Sri Lanka after its therapeutic efficacy was recognized. As a result, it has become common practice among cancer patients to use the decoction as a dietary supplement in addition to anticancer treatments. This study was carried out to examine the scientific basis of the ethnomedical use of F. leucopyrus leaves in cancer therapy.

A Shimadzu UV 1601 UV-visible spectrophotometer (Shimadzu Corporation, Kyoto, Japan) was used to measure absorbance. An LFT 600 EC freeze dryer (LFT 600 EC, -90 to -95°C, Hitachi pump with 10 valves) was used to obtain the freeze-dried residue of the AEFLL. Cells were incubated at 37°C in a humidified CO2 incubator (SHEL LAB/Sheldon Manufacturing Inc., Cornelius, OR 97113, USA). An Olympus (1X70-S1F2) inverted fluorescence microscope (Olympus Optical Co. Ltd., Japan) was used for observation of the cells, and photographs were taken using a Nikon D700 camera (Nikon, Japan). Deionized water from a LABCONCO (waterproplus) UV ultrafiltered water system (LABCONCO Corporation, Kansas City, Missouri 64132-2696) or distilled water was used in all experiments.

Plant material

The fresh plant material of F. leucopyrus (katupila), collected from the Gampaha district, Sri Lanka, was identified and confirmed by the Department of Botany, Bandaranayake Memorial Ayurveda Research Institute, Nawinna, Sri Lanka, and voucher specimens are deposited at the same institute.

Preparation of the extract

The branches bearing the leaves of F. leucopyrus were washed with distilled water and dried under indirect sunlight. The dry leaves were ground in a domestic grinder until a fine powder was obtained. The powder was extracted with distilled water (250 g/L) under reflux for 2-3 hours. The crude aqueous extract obtained was filtered through cotton wool and Whatman filter paper (No. 1), and freeze-dried.
Determination of total phenolic content and antioxidant activity

The total phenolic content of the crude lyophilized sample (n = 3) of the AEFLL was determined using the Folin-Ciocalteu method [18]. Radical scavenging activity against 1,1-diphenyl-2-picrylhydrazyl (DPPH), nitric oxide scavenging activity, and non-site-specific hydroxyl radical mediated 2-deoxy-D-ribose degradation were determined according to methods described previously [19]. The percentage inhibition (%I) was calculated from the following equation: %I = [(absorbance of control - absorbance of sample)/absorbance of control] x 100%. The effective concentration of the sample required to scavenge the respective radical by 50% (EC50) was calculated using the linear segment of the %I-versus-concentration curve.

Brine shrimp bioassay

The brine shrimp bioassay is a general bioassay indicative of cytotoxicity, various pharmacological actions, and pesticidal effects [20]. Extracts with EC50 <= 30 μg/mL are considered cytotoxic [20]. The AEFLL (50-500 μg/mL), diluted with hatching medium, was subjected to the standard procedure for the brine shrimp assay in a 24-well plate (n = 10 live shrimps/well). Live, healthy larvae with constant motion were counted after 24 hours. The percentage lethality was determined by comparing the mean number of surviving larvae in the test and the control. The percentage of hatch inhibition was calculated as: number of live nauplii in the test/number of live nauplii in the control.

Cell lines and cell culture

HEp-2 cells were routinely cultured in EMEM containing 10% heat-inactivated fetal bovine serum and 1% penicillin/streptomycin, and were maintained at 37°C in a humidified carbon dioxide incubator. The cells were grown in DMEM supplemented with 10% heat-inactivated fetal bovine serum (FBS), 3% glutamine, sodium bicarbonate, and antibiotics (penicillin/streptomycin). The cells were incubated at 37°C in a humidified CO2 incubator at all times. The cells (2 × 10^5 cells/well) were seeded in 24-well plates and incubated overnight with 2 mL of the medium described above to obtain a 70% confluent layer. The monolayer was treated with different concentrations of the plant extract and incubated for 24 hours at 37°C. In all experiments a negative control and a positive control were maintained; the negative control contained only growth medium, while the positive control contained camptothecin (5 mM, 20 μl).

MTT assay

The cells were treated with different concentrations of the extract and incubated for 24 hours at 37°C as described above. The culture medium was replaced with fresh medium and the MTT assay was performed [21]. The purple product was measured at 570 nm. Percentage cell viability = (absorbance of treated cells/absorbance of untreated cells) x 100. The net absorbance from the wells of the untreated cells (negative control) was taken as 100% viability. The positive control was camptothecin (5 mM, 20 μl).

Lactate dehydrogenase (LDH) activity

Lactate dehydrogenase is a cytosolic enzyme that is released into the surrounding culture medium upon cell lysis and is used to assay cytotoxicity. The lactate dehydrogenase assay measures the rate of reduction of pyruvate to lactate by the enzyme [22]. The NADH remaining in the mixture was used to calculate the enzyme activity. The cells were treated with different concentrations of the AEFLL and incubated for 24 hours as described previously.
The LDH activity of the cell lysate and of the culture supernatant of cells treated with the plant extract was measured according to the manufacturer's instructions (Randox LDH assay kit). A negative control and a positive control with camptothecin (5 mM, 20 μl) were carried out alongside the experiment to measure the LDH leakage. The absorbance was measured at 340 nm at 15-second intervals for 1.5 minutes using an air blank. The rate of decline in NADH concentration (gradient) was used to calculate the LDH activity in the supernatant and the lysate. Percentage cytotoxicity = (LDH activity of the supernatant/total LDH activity) x 100, where the total LDH activity is the sum of the LDH activities obtained for the culture supernatant and the cell lysate.

Cell morphology

The morphological changes of the cells were observed after treatment with different concentrations of the plant extract over 24 hours as previously described. The changes were compared with the positive and negative controls.

Ethidium bromide and acridine orange staining

Ethidium bromide and acridine orange staining was carried out to determine the induction of apoptosis by the AEFLL. Acridine orange (AO) permeates both live and dead cells, stains DNA, and makes the nucleus appear green, while ethidium bromide (EB) is taken up only by cells with damaged cell membranes [23]. Thus, live cells stain uniformly green, whereas apoptotic cells appear orange or display orange fragments under the fluorescence microscope, depending on the degree of loss of membrane integrity, due to co-staining with ethidium bromide. Cells were seeded in 24-well plates, and the confluent layer was treated with the AEFLL at different concentrations for 24 hours at 37°C as described previously. The adherent cells were washed with 1.0 mL of PBS and then detached by adding 1 mL of trypsin-EDTA solution. The supernatant was removed after centrifugation, and the cell pellet was resuspended in 25 μl of PBS and 2 μl of a dye mixture containing ethidium bromide (100 mg/mL) and acridine orange (100 mg/mL). A 10 μl aliquot of the stained cell suspension was placed on a microscope slide, covered with a coverslip, and examined immediately under the fluorescence microscope. Images were photographed using a Nikon D700 camera or a digital imaging system connected to the microscope.

DNA fragmentation

Cells were seeded in culture flasks (25 mL), and the confluent layer was treated with the AEFLL at different concentrations for 24 hours at 37°C as described previously. Cells were lysed with 5 mL of lysis buffer (10 mM Tris-HCl, 5 mM EDTA, 200 mM NaCl, 0.2% SDS) and incubated at 37°C for 5 minutes. The contents were centrifuged, and the pellet was washed with ice-cold SE buffer (5 mL). The pellet was then resuspended in ice-cold SE buffer (75 mM NaCl; 25 mM Na2EDTA; pH 8.0) (5 mL) with 10% SDS (500 μl) and proteinase K (25 μl) and incubated at 56°C for 1 hour. A volume of 2 mL of NaCl (5 M) was then added to the mixture and incubated on ice for 5 minutes to precipitate proteins. The samples were then centrifuged for 15 minutes at 10,000 rpm, and the supernatant was transferred to a fresh tube. Two volumes of ethanol were added to precipitate the DNA, and the sample was centrifuged for 10 minutes at 10,000 rpm. The supernatants were discarded, and the pellets were washed with 70% cold ethanol. DNA was dissolved in 15 μl of TE buffer (10 mM Tris, pH 8.0, and 1 mM EDTA) and subjected to agarose gel (1.5%) electrophoresis for 2 hours.
Finally, the gel was photographed using a UVI Pro gel documentation system (UVItec, UK) following ethidium bromide staining to determine DNA fragmentation.

Calculations and statistics

All results are expressed as mean ± standard deviation (mean ± SD). Measurements were performed in triplicate, and the values shown are representative of at least three independent experiments. Least-squares linear regression analysis was applied using Microsoft Excel to determine the EC50 values and for the calibration curves. R^2 > 0.99 was considered linear for the calibration curves. The linear segment of the percentage inhibition/cytotoxicity-versus-concentration curve was used to calculate the EC50 in each experiment.

Total phenolic content

The yield of the lyophilized sample of the AEFLL was 6.56% (w/w), and its phenolic content was 22.15 ± 1.65% gallic acid equivalents (GAE).

Antioxidant activity

The effective concentration of the AEFLL required to scavenge DPPH radicals by 50% (EC50) was 11.16 ± 0.45 μg/mL (Table 1). L-Ascorbic acid was used as the positive control for comparison of the EC50 values; the value for the AEFLL was higher than that of ascorbic acid (4.28 ± 0.32 μg/mL). Nitrite generated by sodium nitroprusside was reduced by the AEFLL in a dose-dependent manner. A gradual increase in NO inhibition was seen at very low concentrations (0.24-3.9 μg/mL) of the plant extract. Inhibition reached a maximum and was maintained at 60% at concentrations above 30.0 μg/mL. The EC50 value of the AEFLL was 4.82 ± 1.82 μg/mL (Table 1), showing very high nitric oxide scavenging ability compared to the positive control, ascorbic acid (54.26 μg/mL). The concentration of the AEFLL required to scavenge reactive hydroxyl radicals by 50% (EC50), evaluated by non-site-specific hydroxyl radical mediated 2-deoxy-D-ribose degradation, was 23.77 ± 3.87 μg/mL (Table 1). This value was greater than that of the positive control, gallic acid (8.27 μg/mL).

Cytotoxicity and apoptosis assays

Brine shrimp bioassay

No lethality towards the brine shrimp was observed after 24-hour exposure to the AEFLL within the concentration range of 50-500 μg/mL.

MTT assay

The cell viability after 24-hour treatment with the AEFLL was determined by the MTT reduction assay. A dose-response curve of the percentage of viable cells against concentration was obtained (Figure 1). The EC50 value obtained as the mean of four independent sample preparations was 506.80 ± 72.93 μg/mL (Table 2). The positive control (camptothecin) showed 76.07 ± 1.72% growth inhibition at the concentration used (5 mM, 20 μL).

LDH leakage assay

A dose-dependent increase in LDH release into the culture medium was observed at concentrations up to 600 μg/mL, and a decline in LDH release was seen above 700 μg/mL (Figure 2). Furthermore, a decrease in enzyme activity was observed in the culture medium as well as in the lysate at higher concentrations. The mean EC50 of the percentage cytotoxicity over 24-hour exposure to the plant extract was 254.52 ± 42.92 μg/mL (Table 2). The percentage LDH found in the supernatant of the negative control and of the positive control with camptothecin was 24.62 ± 6.21% and 50.51 ± 7.67%, respectively. The data indicate that, compared to the negative control, there is no significant increase (p > 0.05) in LDH release at a concentration of 100 μg/mL.
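The EC50 values above were obtained by least-squares regression over the linear segment of the dose-response curve (see Calculations and statistics); a minimal sketch with illustrative placeholder data, not measurements from the study:

# Percentage inhibition (or cytotoxicity) regressed on concentration over the
# linear segment; EC50 is the concentration giving 50% inhibition.
conc    <- c(2.5, 5, 10, 20)    # ug/mL (placeholders)
pct_inh <- c(18, 30, 47, 78)    # % inhibition (placeholders)

fit  <- lm(pct_inh ~ conc)
ec50 <- (50 - coef(fit)[1]) / coef(fit)[2]
unname(ec50)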
Morphological changes

The cytotoxic effects of the AEFLL on HEp-2 cells were analyzed using an inverted fluorescence microscope, and the images are presented in Figure 3. The untreated cells (negative control) appear as elongated cells adhering to the culture plate, in comparison with cells treated with camptothecin (positive control), which became oval or irregular in shape with highly condensed contents. HEp-2 cells treated with different concentrations of the AEFLL showed a dose-dependent increase in cell death with increasing concentration of the extract in comparison to the negative control. At the highest concentration of the AEFLL (1000 μg/mL), rounding, detachment, and cell death were evident, as shown in Figure 3 (C and D).

Ethidium bromide/acridine orange staining

Microscopic examination of the morphology of drug-treated cells following 24-hour incubation showed characteristics of cell death, which were further investigated using the ethidium bromide/acridine orange staining method to determine whether the growth-inhibitory activity of the leaf extract was related to the induction of apoptosis (Figure 3). On the basis of this staining procedure, live cells with normal nuclei presented bright green nuclei, early apoptotic cells showed green nuclei, and late apoptotic cells displayed condensed orange nuclei. In addition, dying cells (green but fading in color) and decreasing cell density were also observed.

DNA fragmentation

DNA fragmentation was observed at a concentration of 200 μg/mL of the AEFLL after 24-hour exposure (Figure 4).

Discussion

Phenolic compounds are secondary metabolites in plants that are very important for essential functions in reproduction and growth, in defense mechanisms, and for survival against pathogens, parasites, predators, and solar radiation [24,25]. Phenolic compounds also provide natural antioxidants for protection against many diseases. The antioxidant effect of plant products is mainly due to the radical scavenging activity of phenolic compounds such as flavonoids, polyphenols, tannins, and phenolic terpenes. They have the capability to scavenge reactive oxygen species (ROS), which include radical and non-radical oxygen species such as O2·-, HO·, NO·, H2O2, and HOCl, as well as oxidatively generated free radicals RO·, ROO·, and ONOO-, which derive from biomolecules including lipoproteins (LDL), proteins, and oligonucleic acids [2,25]. The total phenolic content (mean ± SD) of the AEFLL was 22.15 ± 1.65% of GAE. It has been reported that the total phenolic content of the F. leucopyrus fruit was 31.7 ± 4.92 GAE/100 g, and its DPPH scavenging activity, expressed in mg of ascorbic acid equivalent antioxidant capacity (AEAC) per 100 g of fruit, was 76.8 [14]. ROS/RNS can cause DNA base changes, strand breaks, damage to tumour-suppressor genes, and enhanced expression of proto-oncogenes [1]. The capacity of the AEFLL for nitric oxide scavenging was found to be the highest, followed by its DPPH and hydroxyl radical scavenging activities, respectively (Table 1).

Figure 2. The percentage of LDH released after 24-hour treatment with the AEFLL in the HEp-2 cell line. Data are presented as mean ± SD of four independent experiments. The linear segment of the dose-response curve was used to determine the EC50 value.
Therefore, there might be a direct involvement of the AEFLL in the inhibition of DNA and lipid oxidation by the peroxynitrite anion (ONOO-), which is generated by the overproduction of nitric oxide and the superoxide anion. The brine shrimp bioassay is considered a prescreening assay for cytotoxicity, various pharmacological actions, and pesticidal effects, and it determines the lethality of materials towards brine shrimp larvae; however, it is not specific for antitumor activity [20]. The AEFLL caused no deaths of larvae over the concentration range of 50-500 μg/mL in the brine shrimp lethality bioassay. The methods used to study the inhibition of cell proliferation were the MTT and LDH assays on HEp-2 cells. The MTT and LDH assays are well-established methods to assess mitochondrial competence and cell membrane integrity, respectively [26]. As evident in Figure 1, a concentration-dependent increase in cytotoxicity was observed over the range of 100-600 μg/mL of the AEFLL, with an EC50 of 506.80 ± 72.93 μg/mL for the MTT assay. The percentage of LDH release increased steadily in HEp-2 cells over the concentration range of 100-600 μg/mL, with an EC50 of 254.52 ± 42.92 μg/mL. A maximum of 80% inhibition of cell proliferation was observed at concentrations over 600 μg/mL in both the MTT and LDH assays. Interestingly, there is a decline in LDH release into the culture medium at concentrations over 700 μg/mL. This indicates a reduction in cell density after treatment with the AEFLL at concentrations >700 μg/mL, and it closely matches the MTT results showing maximum inhibition at these concentrations. At 100 μg/mL, the percentage leakage of LDH was 26.54%, similar to that of the negative control (26.52 ± 1.69%). This indicates that F. leucopyrus shows cytotoxicity on HEp-2 cells at concentrations above 100 μg/mL after 24-hour exposure. The morphological changes and lower cell density observed at these concentrations further support the inhibition of cell proliferation. To determine whether the growth-inhibitory activity of the AEFLL was related to the induction of apoptosis, the cells were investigated using AO/EB double staining under fluorescence microscopy after 24-hour treatment with the plant extract. The fluorescence images show signs of early apoptosis (yellow) and late apoptotic cells (orange-red). This was further confirmed by the DNA ladder pattern. A separate study carried out in Sri Lanka evaluated the cytotoxicity of the aqueous extract of the leaves by comet assay after exposure of whole blood (20 μl); fragmented DNA (comets) was observed at a concentration of 20 μg/mL but not at 5 μg/mL [27]. The phytochemicals gallic acid, quercetin, kaempferol, and coumarin, reported previously among the constituents of the twigs and leaves of F. leucopyrus, have shown in vitro growth inhibition of breast, prostate, ovarian, and liver cancer cells and acute leukemias [16,28-36]. It has been reported that dietary polyphenols inhibit key signal transduction protein kinases, such as mitogen-activated protein kinases, and certain cyclin-dependent kinases that are necessary for cell growth and transformation [4]. The contribution of the phenolics present in F. leucopyrus needs further investigation to explore the mechanisms of inhibition related to cell proliferation and the induction of apoptosis.

Conclusion
F. leucopyrus is considered a plant with anticancer activity, and the water extract of its leaves is consumed as a dietary supplement. The high antioxidant activity and phenolic content of the aqueous extract of the plant suggest that it is a potential therapeutic agent for the control of oxidative damage caused by reactive oxygen species and especially reactive nitrogen species. F. leucopyrus induced DNA fragmentation in HEp-2 cells after 24-hour exposure to the leaf extract, indicating its ability to induce apoptosis. This study provides scientific support for the traditional use of the leaf extract as an anticancer agent.
Maturation-dependent expression of AIM2 in human B-cells

Intracellular DNA- and RNA-sensing receptors, such as the IFN-inducible protein Absent in Melanoma 2 (AIM2), serve as host sensors against a wide range of infections. Immune sensing and inflammasome activation by AIM2 have been implicated in innate antiviral recognition in many experimental systems using cell lines and animal models. However, little is known about the expression and function of AIM2 in freshly isolated human cells. In this study we investigated the expression of AIM2 in different cell types derived from human cord and adult peripheral blood, at steady state and following in vitro activation. Adult but not cord blood B-cells expressed high levels of AIM2 mRNA at steady state. In adults, AIM2 was primarily expressed in mature memory CD27+ B-cells. Both adult and cord blood-derived B-cells could induce transcription of AIM2 mRNA in response to type II IFN, but not type I IFN or the AIM2 ligand poly dA:dT. Upon B-cell receptor stimulation, B-cells from adult blood expressed reduced levels of AIM2 mRNA. In addition, we show that adult B-cells were able to release IL-1β upon stimulation with synthetic DNA. We conclude that functional AIM2 is preferentially expressed in adult human CD27+ B-cells but is absent in cord blood mononuclear cells.

Introduction

The innate human immune system is equipped with a variety of pattern recognition receptors (PRRs) that are able to sense the presence of nucleic acids. Among these are the inflammasome receptor proteins, which upon binding their ligand form the inflammasome. The most commonly discussed PRRs able to form inflammasomes include the nucleotide-binding domain, leucine-rich repeat containing proteins (NLRs) and the absent in melanoma 2 (AIM2)-like receptors (ALRs) (i.e., the PYHIN family). The ALRs include the DNA sensors IFI16 and AIM2, which both recognize double-stranded DNA (dsDNA) inside the cell. IFI16 can sense the presence of dsDNA in both the cytoplasm and the nucleus, whereas AIM2 senses dsDNA located in the cytoplasm [1-4]. Upon receptor binding, AIM2 engages the dsDNA directly through its DNA-binding HIN200 domain, while its pyrin domain allows binding to the adaptor protein ASC. In turn, the carboxy-terminal CARD of ASC binds the CARD of pro-caspase-1, which leads to the activation of caspase-1 and the subsequent formation of the AIM2 inflammasome [3,5]. The activation of caspase-1 allows the cleavage of the cytokine precursors pro-IL-1β and pro-IL-18 into their active forms, i.e., IL-1β and IL-18. In addition to the release of pro-inflammatory IL-1β and IL-18, AIM2 inflammasome activation also leads to a lytic form of programmed cell death referred to as pyroptosis [2]. The AIM2 inflammasome has been ascribed an important role in infections with a variety of pathogens, as well as in several forms of cancer and different inflammatory diseases [6]. Still, little is known about the induction and function of AIM2 in human leucocytes. In newborns, innate immune recognition is impaired, which is evident from an inability to produce certain vital cytokines such as IFNs and inflammatory cytokines [7]. Cord blood-derived cells also have impaired expression of PRRs. For instance, cord NK cells have deficient TLR3 expression and are unable to respond to poly(I:C) and HSV activation, both in terms of cytokine secretion (IFN-γ) and cytotoxic capacity [8].
Furthermore, newborns also lack functionally experienced and expanded antigen-specific T- and B-cells [7]. B-cells from newborns have reduced strength of B-cell receptor signaling [9] and impaired CD40-mediated responses, including antibody production and class switching [10]. In addition to being antigen-specific, B-cells also possess innate immune functions [11], such as the expression of TLRs, which are present in both cord and adult B-cells [12]. In this paper, we have studied the expression and function of the DNA sensor AIM2 in freshly isolated and in vitro-activated cells derived from neonatal cord blood and adult peripheral blood. We found that AIM2 was preferentially expressed in adult B-cells, primarily by the mature CD27+ B-cell subset. Primary B-cells were induced to express AIM2 in response to IFN-γ (but not IFN-α) and refrained from AIM2 expression after cognate B-cell receptor engagement.

Study subjects

Fresh buffy coats from anonymized healthy blood donors and cord blood from anonymized healthy newborns born at gestation weeks 38-42 were obtained from Sahlgrenska University Hospital (Gothenburg, Sweden). In accordance with Swedish legislation, section code 4 § 3p SFS 2003:460 ("Lag om etikprövning av forskning som avser människor"), no ethical approval was needed for the buffy coats, since they were provided anonymously and could not be traced back to a specific donor. All participants provided informed consent for blood donation. For the cord blood, all mothers were given oral information and gave oral consent to participate in the study. As no personal information or identity was recorded, no written consent or approval by the Human Research Ethics Committee was needed (Swedish law 2003:460, paragraphs 4 and 13).

AIM2 mRNA expression

The relative levels of AIM2 mRNA were analyzed in freshly isolated cells or in cells that had been activated in vitro. Briefly, cells were lysed with 350 μl lysis buffer (Qiagen, Hilden, Germany). Total RNA was extracted with an RNeasy Micro kit (Qiagen) and treated with DNase (Qiagen) to remove genomic DNA, using the QIAcube (Qiagen). cDNA was prepared in a random hexamer-primed Superscript (Invitrogen, Carlsbad, CA, USA) RT reaction. The mRNA levels were determined by RT-PCR on a ViiA 7 Real-Time PCR System (TaqMan; Applied Biosystems, Foster City, CA, USA) using MicroAmp Optical 96-well reaction plates (Applied Biosystems). The primer-probe pairs were AIM2 (Hs00915710_m1), IFI16 (Hs00986757_m1), NLRP3 (Hs00918082_m1), and GAPDH (Hs99999905_m1) (TaqMan, Applied Biosystems). The samples (10 ng of cDNA) were run in duplicate in a 20-μl reaction mix (with TaqMan Universal PCR Master Mix; Applied Biosystems), using the comparative ΔΔCT method of relative quantification to calculate the differences in gene expression between control and antigen-stimulated cells. As an endogenous control, GAPDH was used to correct for variations in sample loading. The samples were normalized to a standard consisting of a pool of cDNA from 10 adults, which was set to 1.
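As a minimal sketch of the comparative ΔΔCT calculation described above (the Ct values are illustrative placeholders, not data from the study):

# ddCT method: normalize the target (AIM2) to the endogenous control (GAPDH),
# then to the calibrator sample; relative expression = 2^(-ddCT).
ct_aim2_sample  <- 26.0    # placeholder Ct values
ct_gapdh_sample <- 18.0
ct_aim2_calib   <- 28.5
ct_gapdh_calib  <- 18.2

d_ct_sample <- ct_aim2_sample - ct_gapdh_sample
d_ct_calib  <- ct_aim2_calib  - ct_gapdh_calib
dd_ct       <- d_ct_sample - d_ct_calib

2^(-dd_ct)   # expression relative to the calibrator (calibrator = 1)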
Western blot

Western blot analysis was conducted with whole B-cell lysates prepared from buffy coats. Protein extracts were prepared in hot 1% SDS, and the protein concentration was determined using the Pierce BCA protein assay kit. 25 μg of protein per sample was resolved on a 4-15% gradient SDS-PAGE gel and transferred onto a PVDF membrane. Nonspecific binding was blocked by soaking the membrane in a PBS-Tween (0.1%) buffer containing 5% bovine serum albumin for 1 h. AIM2 was detected using a rabbit monoclonal anti-AIM2 antibody diluted 1:500 (CST #12948). The membrane was then incubated with goat anti-rabbit IgG conjugated with horseradish peroxidase as a secondary antibody (diluted 1:10000). Imaging of protein bands was achieved using enhanced chemiluminescence (Clarity Western ECL substrate, Bio-Rad) and the ChemiDoc XRS system (Bio-Rad).

Cytokine detection

IL-1β and IFN-α secretion in supernatants from cell cultures was analyzed using a DuoSet ELISA kit (R&D Systems) and a VeriKine human IFN-α ELISA kit (PBL Assay Science, NJ, USA), respectively, both according to the manufacturer's instructions.

Statistics

Statistics were calculated using one-way ANOVA followed by Tukey's or Dunnett's multiple comparison test, Student's paired t-test, the Mann-Whitney U-test, and the Wilcoxon matched-pairs signed-rank test (PRISM 6.0, GraphPad Software Inc., San Diego, CA).

AIM2 is preferentially expressed in adult B-cells

We compared the expression of AIM2 in freshly isolated PBMC and CBMC. CBMC expressed low levels of AIM2 mRNA, whereas the AIM2 mRNA expression in PBMC was significantly higher (Fig 1A). A more detailed analysis of different mononuclear cells (B-cells, NK cells, CD4+ T-cells, CD8+ T-cells, pDC, myDC, and monocytes) revealed that AIM2 was preferentially expressed in adult B-cells (Fig 1B), whereas other mononuclear cells derived from both cord and adult blood expressed no, or low, levels of AIM2 (Fig 1B and S2 Fig). To further assess the AIM2 mRNA expression in B-cells, we sorted cells derived from adult blood into naïve CD19+CD27- and memory CD19+CD27+ cells. CD19+CD27+ B-cells expressed significantly higher levels of AIM2 mRNA than CD19+CD27- B-cells (Fig 1C). To confirm the AIM2 mRNA expression in B-cells at the protein level, we stained adult B-cells for FACS analysis. In line with the mRNA expression, we found that the AIM2 protein was primarily expressed by CD19+CD27+ B-cells but not by CD19+CD27- B-cells (Fig 2A and 2B): 92% (range: 90-98%) of the CD19+CD27+ B-cells expressed AIM2, compared to only 8% (range: 3.4-8.7%) of the CD19+CD27- B-cells (p<0.0001) (Fig 2B). The AIM2 protein in B-cells was also visible as 37 and 53 kDa bands on Western blot (S4 Fig). NK cells expressed somewhat higher levels of AIM2 mRNA compared to the other cell types (Fig 1B). This was, however, not reflected at the protein level, as no AIM2 protein was detected in NK cells (S5 Fig).

Fig 1. A pool of cDNA from 10 PBMC samples was used as a calibrator sample and set to a value of 1. Data are expressed as the mean AIM2 mRNA expression + SEM from 2-7 individuals/group. Statistics were calculated using the Mann-Whitney U-test (A), ordinary one-way ANOVA with Tukey's multiple comparison test (B) (* = p<0.05 compared to B-cells), and the Wilcoxon matched-pairs signed-rank test (C). * = p<0.05.

IFN-γ induces AIM2 mRNA expression in adult and cord blood B-cells

To assess whether AIM2 expression could be induced via autocrine activation of AIM2, cytokines, or B-cell receptor engagement, we cultured human B-cells derived from adult and cord blood with the AIM2 ligand poly dA:dT, the cytokines IFN-α and IFN-γ, or anti-IgGAM plus CD40L for 24 hours and measured their AIM2 mRNA expression. IFN-γ was the only cytokine able to increase AIM2 mRNA expression in B-cells from adults compared to control-stimulated cells (Fig 3, S1 and S6 Figs) (p<0.02).
Furthermore, adult B-cells activated by anti-IgGAM+CD40L expressed significantly lower levels of AIM2 mRNA, compared to control stimulated cells (Fig 3B) (p<0.01). AIM2 mRNA expression was not significantly reduced by IgGAM or CD40L stimulation alone, even though there was a tendency of reduced expression in both IgGAM and CD40L stimulated B-cells (S7 Fig). Similar to adult B-cells, cord B-cells failed to upregulate AIM2 mRNA in response to poly dA:dT, IFN-α or anti-IgGAM plus CD40L (Fig 3D and S1 Fig), but cord blood B-cells showed a 16-fold increase of AIM2 mRNA expression following IFN-γ exposure (Fig 3C) (p<0.05). The expression of AIM2 mRNA in IFN-γ stimulated cord B-cells were still considerably lower (15 times) compared to the levels of AIM2 mRNA in IFN-γ stimulated adult B-cells. When we analyzed AIM2 protein expression in the cultured cells, we found that CD27 + B-cells were the main contributors to the AIM2 expression. CD27 + adult B-cells expressed 9-12 times more AIM2 protein, as compared to CD27-adult B-cells (Fig 3E) (p<0.01). However, none of the stimuli used (i.e. poly dA:dT or IgGAM+CD40L) were able to enhance the frequency of AIM2 expressing B-cells (Fig 3E). Additionally, we also asked whether stimulation Synthetic DNA induce secretion of IL-1β in in vitro stimulated B-cells To assess the function of AIM2 expression in B-cells, we measured IL-1β secretion, cell death and mitochondrial superoxide production in cells cultured with poly dA:dT. The IL-1β secretion was significantly increased (9-fold) in poly dA:dT stimulated adult B-cells compared to control stimulated B-cells (Fig 4A) (p<0.001). To assess if the AIM2 expression in adult B-cells had an effect on cell death, we stained cultured B-cells for active caspase-1 (i.e. Fam-Flica) (Fig 4B) or mitochondrial superoxide (Fig 4C). Stimulation with poly dA:dT did not induce increased frequencies of CD27or CD27 + B-cells that expressed active caspase-1 (Fig 4B). Similarly, stimulation of B-cells with poly dA:dT did not affect the release of mitochondrial superoxide, as the frequency of MitoSOX positive B-cells did not differ between control and poly dA:dT stimulated B-cells (Fig 4C). Discussion In this paper we show that adult but not cord blood mononuclear cells express AIM2. In adults, B-cells and in particular the CD27 positive subset was the main contributors to the AIM2 expression. These cells further upregulated their AIM2 mRNA expression in response to IFN-γ. In cord B-cells, the steady-state levels of AIM2 were low but could be induced upon exposure to IFN-γ. We have previously shown that newborns have impaired innate immune responses, where NK cells from cord blood have reduced expression of TLR3 as well as deficient TLR3 mediated IFN-γ secretion and cytotoxic capacity [8]. In the current study we show that mononuclear cells from newborns also lack the DNA sensor AIM2. In our hands, none of the investigated cell types (i.e. CBMC, B-cells, CD4 + and CD8 + T-cells, monocytes, NK cells, plasmacytoid dendritic cells and myeloid dendritic cells) expressed AIM2 mRNA. Similar to the cord blood derived cells, AIM2 expression was absent also in the majority of the analyzed cell types derived from adult peripheral blood, with the exception of B-cells. Previous studies have shown that AIM2 is expressed in B-cells derived from adult peripheral blood at steady state, whereas monocytes and PBMC express no or very low levels of AIM2 mRNA [13], which is in line with our findings. 
We found that AIM2 was primarily expressed by the "mature" CD27 + B-cells, and to a lesser extent in "naïve" CD27 -B-cells. Thus, the poor expression of AIM2 mRNA in neonatal B-cells is most likely due to the lack of antigen experienced CD27 + B-cells in cord blood [14]. Of note, the expression of AIM2 mRNA may also vary with gender [15]. Expression of AIM2 can be induced in different cell types by both double stranded DNA and IFNs. We activated purified B-cells derived from cord and adult blood in vitro with the AIM2 ligand poly dA:dT and IFNs (IFN-α and IFN-γ). Stimulation with IFN-γ did induce AIM2 mRNA expression in both cord and adult cells, which is in line with previous studies in other cell types, i.e. THP-1 cells and human keratinocytes [13,16]. However, the IFN-γ induced AIM2 mRNA expression in cord B-cells were modest compared the steady state levels in adult B-cells. B-cells did not upregulate AIM2 transcription in response to type I IFN or to the AIM2 ligand poly dA:dT. This is in contrast to previous observations, showing that AIM2 can be induced in response to both IFN-α and poly dA:dT [1][2][3]5,13]. However, the previous studies were conducted in cell lines or in murine cells/models, which are considerably different from freshly isolated human (B-) cells. Furthermore, the impact of type 1 IFNs on AIM2 expression and inflammasome activation is not clear cut, as type 1 IFN has been shown to inhibit inflammasome activation in murine cells [17]. Interestingly, adult B-cells down regulated the expression of AIM2 upon activation through the B-cell receptor and CD40. This suggests that AIM2 is expressed in resting memory B-cells, but is down-modulated upon activation following re-exposure to cognate antigen. We can only speculate in why AIM2 is down modulated upon B-cell receptor activation. Given that AIM2 can suppress cell proliferation [18][19][20][21], down modulation of AIM2 may be of importance to allow the B-cell to enter cell division (i.e. the normal response that is induced upon stimulation of the B-cell receptor). This remains to be further investigated. It has previously been shown that activation of AIM2 leads to the release of mature IL-1β and caspase-1 activity [3,5]. We show that primary B-cells that were stimulated with synthetic DNA did secrete IL-1β, albeit at moderate levels. The release of IL-1β was however not correlated to caspase-1 activity, as it remained unaffected upon stimulation with synthetic DNA. The release of IL-1β can occur independently of inflammasome activation, at least in neutrophils, where other enzymes than caspase-1 induce secretion of IL-1β [22]. Human B-cells has been shown to lack active caspase-1 at steady state [13]. We found low levels of caspase-1 activity after 4 hours of in vitro culturing. The differences may be due to the different methods used (i.e. immunoblotting or Fam-Flica), or by the events occurring during in vitro culturing. In conclusion, we show that CD27 positive B-cells are the main cell type expressing AIM2 in adults, whereas cord B-cells was devoid of AIM2 mRNA. We also show that neither type I IFN nor synthetic double stranded DNA could induce AIM2 transcription, whereas type II IFN did promote AIM2 expression in both cord and adult B-cells.
3,823.4
2017-08-15T00:00:00.000
[ "Biology", "Medicine" ]
PlenoptiSign: An optical design tool for plenoptic imaging Plenoptic imaging enables a light-field to be captured by a single monocular objective lens and an array of micro lenses attached to an image sensor. Metric distances of the light-field's depth planes remain unapparent prior to acquisition. Recent research showed that sampled depth locations rely on the parameters of the system's optical components. This paper presents PlenoptiSign, which implements these findings as a Python software package to help assist in an experimental or prototyping stage of a plenoptic system. Motivation and significance Plenoptic cameras gain increasing attention from the scientific community and pave their way into experimental three-dimensional (3-D) medical imaging [1,2,3,4]. A limitation of light-field cameras is that the maximum distance between two viewpoint positions, the so-called baseline, is confined to the extent of the entrance pupil [5,6]. With triangulation, small baselines are mapped to depth planes close to the imaging device making them suitable for medical purposes such as in a microscope [1,2] or an otoscope [3]. When capturing depth with a plenoptic system, it is essential to place available light-field depth planes on the targets of interest. Hence, it is a key task in plenoptic data acquisition to choose suitable specifications for the micro lenses, the objective lens and the sensor early to save time and costs at the conceptual design stage of a prototype. PlenoptiSign enables a priori depth plane localization in plenoptic cameras for stereo matching from sub-aperture image disparities [5,7,6] as well as computational refocusing via shift-and-integration [8]. PlenoptiSign can be used to pinpoint object distances in light-field images rendered by our complementary open-source software PlenoptiCam [9]. The underlying physical model of PlenoptiSign was devised and experimentally proven in studies by Hahne et al. [6,10] and applies to the Lytro-type setup [11] at this stage. Given an experiment involving plenoptic image acquisition, a researcher may want to simulate the influence of the Micro Lens Array (MLA), the objective lens and its focus as well as optical zoom settings to investigate the depth resolution performance. With this software, the user is able to optimize an experimental setup as required. In its current state, the tool can be called from a Graphical User Interface (GUI), a web-server capable of handling the Common Gateway Interface (CGI) or from the Command Line Interface (CLI) where a user is asked for all input parameters. It is the goal of this paper to raise awareness of the light-field model's capabilities when implemented as a software tool. In Section 2, we sketch the software architecture while turning the focus onto the ray function solver by means of linear algebra to complement previous publications where this has not been explained in much detail. This is followed by usage instructions and exemplary result presentations in Section 3 after which the potential influence on future applications is discussed in Section 4. Architecture Plenoptic camera parameters are passed to PlenoptiSign using either the CLI (cli script.py), a tkinter GUI (gui app.py) or a CGI web-server (cgi script.py). Based on Python's ddt package, a unit test was added to support potential future development. An overview of the code structure is depicted in Fig. 1. 
An object of the light-field geometry estimator is instantiated from a user interface by calling mainclass.py, which inherits mixin classes, namely refo.py, tria.py, plt refo.py and plt tria.py. Functionalities The core task of this software is to find intersections of light-field ray pairs. Since its implementation has not been entirely covered before [6,10], it is demonstrated hereafter. A concise form of finding an intersection of two ray functions is by solving a System of Linear Equations (SLE) given as where A and b make up ray functions and x contains unknowns representing locations of ray intersections in the y − z plane. Using two ray functions, we obtain a unique solution to the SLE by the algebraic inverse A −1 where A ∈ R 2×2 is invertible. To cover cases of overdetermined SLEs, a more generic solution is provided by the Moore-Penrose pseudo-inverse A + with ⊺ denoting the matrix transpose. Algebraic computations are implemented by means of the NumPy module in the solver.py file. For plenoptic triangulation, matrices A and b may be defined according to the notation provided in [6] with Eq. (18) as a ray function, which writes with U i, j as y-intercepts at the object-side principal plane and q i, j as chosen chief ray slopes in object space. Here, i is an arbitrary micro image pixel index and j represents a micro lens index in one direction. As seen in Fig. 2, we find the separation of two light-field viewpoints, so-called baseline B G , by two intersecting linear ray functions [6] B where our reference viewpoint depends on i and is separated by a scalar G of light-field viewpoints. After rearranging the two functions, we write which can be solved using the SLE given by where z here corresponds to the longitudinal entrance pupil position A ′′ H 1U [6]. Similarly, we can trace a pair of light-field rays to find an object plane to which a plenoptic image is focused using the shift and integration algorithm [8,10]. In accordance with the model presented in [10], an image-side ray function f c+i,j (z) is given by where m c+i,j denotes an image-side ray slope and s j the respective micro lens position. A ray is chosen by i = −c and j = a(M − 1)/2 with its counterpart ray having negative indices. Applying this to Eq. 10 and rearranging yields −m c+i,+j × z = s +j (11) −m c−i,−j × z = s −j (12) which can be represented in matrix form by where y is the vertical position and d a ′ is an elongation of the image distance which is mapped to respective refocusing distance d a using the thin lens equation while taking thick lenses and their principal planes into account [10]. Usage After downloading PlenoptiSign from the online repository, installation is made possible with the setuptools module which can be run by \$ python plenoptisign/setup.py install in the download directory, provided that Python is available and granted necessary privileges. As indicated in Fig. 1, PlenoptiSign can be accessed from three different interfaces, namely web-based CGI, a GUI and bash command line. Executing PlenoptiSign from the command line is done by where −p sets the plot option with rays and depth planes being depicted as seen in Figs. 3 and 4. Starting the tkinter-based GUI application is either done by running a bundled executable file or by \$ plenoptisign −g For more information on available commands, use the help option −h. Results Light-field geometry results are provided as text values or graphical plots with two types of views. An exemplary cross-sectional plot is shown in Fig. 3. 
In addition, the results can be displayed in 3-D, as depicted in Fig. 4, with triangulation planes Z (G,∆x) for disparities ∆x. For details on the design trends, scientific notations and deviations due to paraxial approximation, we may refer to our preceding publications [6,10] for further reading. Impact The presented framework provides answers and generic solutions to a question a researcher raised in a forum [12] and received recommendations from peers, indicating demands and potentials for future projects. Research directions that may benefit from this software include experimental photography [11], cinematography [13] or scientific fields ranging from volumetric fluid particle flow [14] to the many kinds of clinical studies [1,2,3,4]. It may be argued that depth plane localization can be done via extrinsic calibration as traditionally employed in the field of stereo vision. Despite its widespread use, this method cannot be performed prior to the camera manufacturing process, but solely after the fact. The same applies to a calibration of light-field metrics via least-squares fitting [15]. Due to the heuristic nature of this approach, zoom lens and optical focus settings as well as temperature fluctuations introduce degrees of freedom to the fitted curve that make calibration sophisticated. After all, it is essential to comprehend and exploit the underlying optical model in the development stage of a plenoptic camera for optimal performance. This part would turn out to be costly with the aforementioned alternatives. A probably deviant attempt of the presented refocusing distance algorithm has been used in the auspicious Lytro cameras [11]. However, Lytro's method remains undisclosed up to the present day, which once more underlines the need and practical use of our open-source software. Conclusions Thanks to provided software, the light-field geometry of a plenoptic camera can be accurately predicted with ease of use. This is of crucial interest in the design stage of a prototype where distance range and depth plane density need to be determined and optimized in advance. This tool has been made available for web-servers, a graphical user-interface and the command line tool. It is the first open-source software to do so and may lay the groundwork for future research on plenoptic imaging in the medical field. Future development may lead towards the implementation of an automatic parameter optimization or an extension of a focused plenoptic camera model.
2,125.2
2019-07-01T00:00:00.000
[ "Computer Science", "Engineering", "Physics" ]
Global solutions to a haptotaxis system with a potentially degenerate diffusion tensor in two and three dimensions We consider the potentially degenerate haptotaxis system \begin{equation*} \left\{ \begin{aligned} u_t&= \nabla \cdot (\mathbb{D} \nabla u + u \nabla \cdot \mathbb{D}) - \chi \nabla \cdot (u\mathbb{D}\nabla w) + \mu u(1-u^{r- 1}), \\ w_t&= - uw \end{aligned} \right. \end{equation*} in a smooth bounded domain $\Omega \subseteq \mathbb{R}^n$, $n \in \{2,3\}$, with a no-flux boundary condition, positive initial data $u_0$, $w_0$ and parameters $\chi>0$, $\mu>0$, $r \geq 2$ and $\mathbb{D}: \overline{\Omega} \rightarrow \mathbb{R}^{n\times n}$, $\mathbb{D}$ positive semidefinite on $\overline{\Omega}$. Our main result regarding the above system is the construction of weak solutions under fairly mild assumptions on $\mathbb{D}$ as well as the initial data, encompassing scenarios of degenerate diffusion in the first equation. As a step in this construction as well as a result of potential independent interest, we further construct classical solutions for the same system under a global positivity assumption for $\mathbb{D}$, which ensures the full regularizing influence of its associated diffusion operator. In both constructions, we naturally rely on the regularizing properties of a sufficiently strong logistic source term in the first equation. Introduction As the movement of cells plays a significant role in many biological systems and processes, analyzing the underlying mechanisms can prove useful in understanding these systems and processes themselves. One such process, which is naturally of extensive interest, is the invasive movement of tumor cells into healthy tissue along gradients of tissue density during the progression of certain types of cancer, which is governed by a mechanism generally called haptotaxis (cf. [8]). Similarly to the efforts made to understand the related process of chemotaxis (cf. [3]), which models movement along gradients of a diffusive chemical as opposed to non-diffusive tissue, mathematical modeling of haptotaxis has proven to be a fruitful area of study. In both cases, by far the most attention at this point has been paid to approaches employing a Fickian diffusive movement model for the organisms in question, which assumes some homogeneity of the underlying medium. But bolstered by experiments regarding cell aggregation near interfaces between grey and white matter in mouse brains (cf. [6]), it has recently been suggested that especially in more heterogeneous environments, such as brain tissue, cell movement might be better described by non-Fickian diffusion (cf. [4]), which is far less mathematically studied in these taxis settings. In an effort to add to the base of knowledge in this area, we will focus our efforts here on a haptotaxis model of cancer invasion featuring such non-Fickian myopic diffusion, which was introduced in [11]. More specifically, we consider the system u t = ∇ · (D∇u + u∇ · D) − χ∇ · (uD∇w) + µu(1 − u r−1 ), in a smooth bounded domain Ω ⊆ R n , n ∈ {2, 3}, with a no-flux boundary condition and appropriate parameters χ > 0, µ > 0, r ≥ 2 and D : Ω → R n×n , D positive semidefinite on Ω. The first equation models the invading cancer cells moving according to the aforementioned myopic diffusion, which is represented by the term ∇ · (D∇u + u∇ · D), as well as according to haptotaxis, which is represented by the term −χ∇ · (uD∇w). Apart from this, the equation further incorporates a logistic source. 
The second equation models the remaining healthy tissue cells and only features a consumption term. The key feature of interest in the above system from both an application as well as a mathematical perspective is of course the parameter matrix D, which represents a space dependent coupled diffusion and taxis tensor. In practice, this tensor can be derived from the underlying tissue structure by employing direct imaging methods (cf. [11]) and represents the influence of said underlying structure on the movement of cells through it. To account for situations of both locally very dense as well as locally very sparse tissue, which both occur in concrete applications and hinder cell movement significantly, we allow D to be potentially degenerate. Notably in one dimension, solutions to a closely related system with degenerate diffusion have already been shown to reflect the aggregation behavior in interface regions seen in experiments (cf. [6], [39]) while to our knowledge in systems with non-degenerate diffusion long-time behavior results seem to generally be restricted to homogenization (cf. e.g. [37]). This seems to indicate that models of this kind featuring degenerate diffusion could potentially be a better representation of real world behavior. As such, the development of methods to cope with the challenges of this model, namely the reduced regularizing effect of the degenerate diffusion operator and the destabilizing effects of taxis, while still allowing for a sufficiently large class of matrices to enable the modeling of real world scenarios seems to be a worthwhile endeavor, which to our knowledge has thus far only been addressed in one dimension. Thus, the aim of this paper is the investigation of the apparently still open question whether solutions exist in two and three dimensions even if the diffusion operator is degenerate. Results. Our main results regarding the haptotaxis model described above are twofold: First, we establish the existence of global classical solutions given a uniform positivity condition for D, which allows us to basically treat it as we would any other elliptic diffusion operator, as well as a condition ensuring sufficient regularizing influence of the logistic source term. Second, we establish that it is still possible to construct fairly standard weak solutions under much more relaxed conditions for D. More specifically, we drop the assumption that D must be globally positive in Ω and replace it with a set of assumptions much more tailored to our methods for constructing said weak solutions, which are strictly weaker than the prior positivity assumption in allowing for matrices that are in some (small) parts of Ω only positive semidefinite. Given that the definitions necessary to properly formulate the above results take up significant space, we will not go into more detail here but instead refer the reader to the very next section for the pertinent details regarding said results. As an addition to stating precise versions of our results in the next section, we will further discuss the prototypical examples of a matrix with a single point degeneracy as well as a matrix with degeneracies on a manifold of higher dimension and derive some conditions, under which they still allow for the construction of weak solutions. We do this to help build some additional intuition in parallel to the rather abstract regularity properties introduced in said section as well as to illustrate that our results can work for some scenarios with real world relevance such as e.g. 
a domain divided by an impenetrable membrane. Approach. Let us now give a brief sketch of the methods employed to achieve our two main results. For our classical existence result, we begin by using standard contraction mapping methods to gain local solutions with an associated blow-up criterion as the operator in the first equation is strictly elliptic for globally positive matrices D. We then immediately transition to analyzing the function a := ue −χw , which together with w solves the closely related problem (3.3). We do this because, in a sense, this transformation eliminates the problematic cross-diffusive term from the first equation by integrating it into the function a and its diffusion operator. Using a fairly classical Moser-type iteration argument, we then establish an L ∞ (Ω) bound for a, which translates back to u. Using this bound combined with two testing procedures then yields a further W 1,4 (Ω) bound for w, which together with the already established bound for u is sufficient to ensure that finite time blow-up is in fact impossible in two and three dimensions and thus completes the proof of our first result. Regarding our second result, we begin by approximating the initial data, the matrix D as well as the logistic source term in such a way as to make the already established global classical existence result applicable to the in this way approximated versions of (1.1). For the family of solutions (u ε , w ε ), ε ∈ (0, 1), gained in this fashion, we then establish a bound of the form by way of an energy-type inequality, which already proved useful in the one-dimensional case discussed in [39]. Using this as a baseline, we then derive the bounds necessary for applications of the Aubin-Lions compact embedding lemma to gain our desired weak solutions as limits of the approximate ones. Prior work. As haptotaxis models (cf. [36] for a general survey) as well as the closely related chemotaxis models (cf. [3] for a general survey) have been extensively studied in many possible variations since the introduction of their progenitor in the seminal 1970 paper by Keller and Segel (cf. [16]), there is of course a lot of prior art available regarding global existence theory for said models. While it is certainly out of scope for this paper to cover prior results in their entirety, we will nonetheless give an overview of some notable ones. Let us first note that for the one-dimensional case, where D simplifies to a real-valued function, there are already some results available for a variant of our scenario without a logistic source term (including potential spacial degeneracy) dealing with existence theory as well as long time behavior (cf. [39], [40] and [41]). Weak solutions have also been constructed in very similar haptotaxis systems featuring porous-medium type and signal-dependent degeneracies as opposed to spacial ones (cf. [45]). Regarding haptotaxis system with non-degenerate diffusion operators, e.g. D ≡ 1 in our system, global existence and sometimes boundedness theory has been studied in various closely related settings (cf. [7], [19], [21], [29], [34], [35], [42]). Notably, these systems often feature an additional equation modeling a diffusive (potentially attractive) chemical and the fixed parameter choice r = 2 for the logistic term in addition to the more regular diffusion. In many of these scenarios, it has further been established that solutions converge to their constant steady states (cf. 
[20], [21], [25], [29], [37], [44]) under varied but sometimes restrictive assumptions. There has also been some analysis of haptotaxis with tissue remodeling, which is represented in the model by some additional source terms in the equation for w (cf. [24], [27], [30]). Apart from haptotaxis models, there has also been significant analysis of chemotaxis models featuring degenerate diffusion (cf. [10], [18], [43] including degeneracies depending on the cell density itself). Lastly, let us just briefly mention that the regularizing effects of logistic source terms we rely on in this paper have already been very well-documented in various chemotaxis systems (cf. [17], [38] among many others) as well as haptotaxis systems (cf. [31]). Main Results and Related Definitions As already alluded to in the introduction, we will focus our attention in this paper on the system in a smooth bounded domain Ω ⊆ R n , n ∈ {2, 3}, with parameters χ > 0, µ > 0, r ≥ 2, D : Ω → R n×n , D positive semidefinite on Ω, and some initial data u 0 , w 0 : Ω → [0, ∞). Our results concerning this system are twofold. We will first derive the following existence result concerning global classical solutions in two and three dimensions under the assumptions that D and the initial data are sufficiently regular, D is positive definite on Ω and the logistic source term is sufficiently strong. This result, while of course also of independent interest, will then serve as a building block for the construction of weak solutions to the same system under much more relaxed restrictions on D and the initial data. Chiefly, global positivity of the matrix D is not necessarily needed anymore and is instead replaced by a set of much weaker but more specific regularity assumptions. The first such regularity property concerns the divergence of D (applied column-wise) and how it can be estimated by the (potentially degenerate) scalar product induced by D. Definition 2.2. Let Ω ⊆ R n , n ∈ N, be a bounded domain with a smooth boundary. We then say a positive semidefinite for all Φ ∈ C 0 (Ω; R n ). Remark 2.4. It is fairly easy to verify that any smooth, positive definite D allows for such an estimate with the optimal exponent β = 1 2 . Let us therefore now briefly illustrate that the above property is also achievable for less regular D, which are e.g. at some points in Ω only positive semidefinite, by giving some examples. While we will not necessarily fully explore these examples and leave out some of the more cumbersome corner cases for ease of presentation, they will accompany us throughout this section as a tool to give some intuition for later introduced definitions as well as to give concrete examples for degenerate cases in which weak solutions can still be constructed. We will first take a look at the prototypical case of a matrix-valued function D 1 on a ball with a single degenerate point in the origin, or more precisely we will consider D 1 (x) := |x| s I on Ω := B 1 (0) ⊆ R n , n ∈ N, I being the identity matrix and s being some positive real number. To illustrate that our framework also supports analysis of singularities occurring on higher dimensional manifolds, let us further consider the similar prototypical example D 2 (x 1 , . . . , x n ) := |x 1 | s I on the same set Ω with s now being a real number greater than 1. As here ∇ · D 2 (x 1 , . . . , x n ) = (s|x 1 | s−2 x 1 , 0, . . . 
, 0) almost everywhere, we gain that D 2 has the property laid out in Definition 2.2 for all β ∈ ( 1 s , 1) ∩ ( 1 2 , 1) by a similar argument as for the previous example. As to be expected in both cases, smaller values of s result in the divergence estimate only holding for ever larger exponents β. As we will see in our theorem regarding the existence of weak solutions at the end of this section, these larger values of β will necessitate stronger regularizing influence from the logistic source term to compensate. Before we can now approach the second regularity property of this section as well as properly defining what we in fact mean by weak solutions in this paper, we need to first introduce a set of function spaces. Said spaces are generally fairly straightforward generalizations of standard Sobolev and Lebesgue spaces incorporating D as well as some spaces derived from them, which are more specific to our setting. For a more thorough discussion of e.g. the degenerate Sobolev spaces introduced below, we refer the reader to [26]. We will further take the introduction of said spaces as an opportunity to present some of their most important properties for our purposes immediately after defining them. Definition 2.5. Let Ω ⊆ R n , n ∈ N, be a bounded domain with a smooth boundary and p ∈ [1, ∞). We then define the Sobolev-type space Let now D ∈ C 0 (Ω; R n×n ) be positive semidefinite everywhere. We then define the Lebesgue-type space L p D (Ω) as the set of all measurable R n -valued functions Φ on Ω with finite seminorm Furthermore, we define the Sobolev-type spaces W 1,p D (Ω) as the completion of C ∞ (Ω) in the norm in the same vain as the standard Sobolev spaces. It is straightforward to see that each space W 1,p D (Ω) can be interpreted as a subspace of L p (Ω) × L p D (Ω) in a natural way and thus elements of these spaces can be written as tuples (ϕ, Φ). As such, there exist the natural continuous projections associated with this representation. Remark 2.6. For a more comprehensive exploration of these spaces and their properties see e.g. [26]. We will now give a brief overview of the properties the above spaces retain from the standard Sobolev and Lebesgue spaces as well as some of the differences. As most of the proofs translate directly from standard Sobolev theory or are laid out in [26], we will only list the properties we are interested in without extensive argument. First of all by construction, W 1,p div (Ω; R n×n ), L p D (Ω) and W 1,p D (Ω) are Banach spaces, which are reflexive if p ∈ (1, ∞), by essentially the same arguments as for the standard Sobolev and Lebesgue spaces and, for p = 2, they are in fact Hilbert spaces with the natural inner products. It is further easy to see that, if (ϕ, Φ) is a strong or weak limit of a sequence (ϕ n , Φ n ) n∈N ⊆ W 1,p D (Ω), the function ϕ ∈ L p (Ω) coincides with the pointwise almost everywhere limit of the sequence (ϕ n ) n∈N if it exists due to P 1 being continuous regarding both topologies and well-known results about strong and weak convergence in L p (Ω). As opposed to the classical Sobolev spaces, the spaces W 1,p D (Ω) can not necessarily be understood as subspaces of the spaces L p (Ω) because their equivalents to the weak gradients in the classical Sobolev spaces are not necessarily unique here, meaning essentially that P 1 is not always injective. (For an example of this, see [26, p. 1877]). 
Given that this can be problematic when deriving analogues to the (compact) embedding properties of Sobolev spaces for our weaker variants, let us now briefly note that, under sufficient regularity assumptions for D, the spaces W 1,p D (Ω) do in fact embed into the spaces L p (Ω) again. In particular if p = 2, which is the parameter choice we are most interested here, this is the case if √ D ∈ W 1,2 div (Ω; R n×n ) according to Lemma 8 from [26]. While it presents a slight abuse of notation, we will in a similar fashion to [26] use ϕ to mean P 1 (ϕ) ∈ L p (Ω) for elements ϕ ∈ W 1,p D (Ω) when unambiguous and generally use the convention ∇ϕ = P 2 (ϕ) even if ∇ϕ is not necessarily the actual weak derivative. We will further often write to simplify the notation in later arguments. If ϕ is additionally an element of C 1 (Ω), we will always assume ∇ϕ to be equal to the classical derivative, of course. Having established these function spaces, we can now clearly state the second and last regularity property for D we are interested in. It is a simple compact embedding property, which is mainly used in this paper to facilitate application of the well-known Aubin-Lions lemma. Definition 2.7. Let Ω ⊆ R n , n ∈ N, be a bounded domain with a smooth boundary. We say a positive semidefinite D ∈ C 0 (Ω; R n×n ) allows for a compact L 1 (Ω) embedding if W 1,2 D (Ω) embeds compactly into L 1 (Ω). Remark 2.8. Let us briefly note that any D, which is equal to zero on any open subset U of Ω, cannot fulfill the property laid out in Definition 2.7 as it is well documented that L 2 (U ), which is equal to W 1,2 D (U ) in this case, does not embed compactly into L 1 (U ). We will now give some additional criteria for the above compact embedding property to not only make our results easier to use in application but also to help us prove that both of the examples discussed in Remark 2.4 in fact fulfill it. Lemma 2.9. Let Ω ⊆ R n , n ∈ N, be a bounded domain with a smooth boundary and N ⊆ Ω be a relatively closed set in Ω with measure zero. Let then then D allows for a compact L 1 (Ω) embedding. Proof. Due to our assumption that √ D ∈ W 1,2 div (Ω; R n×n ), Lemma 8 from [26] immediately yields that the projection P 1 : W 1,2 D (Ω) → L 2 (Ω) ⊆ L 1 (Ω) from Definition 2.5 is injective and thus provides us with a continuous embedding of W 1,2 D (Ω) into L 1 (Ω). It thus only remains to show that this embedding is in fact compact given the various criteria outlined above. To do this, we first fix a bounded sequence (ϕ k ) k∈N ⊆ W 1,2 D (Ω). We then only need to construct a subsequence of (ϕ k ) k∈N that converges in L 1 (Ω) to some function ϕ to prove our desired outcome. As it is further possible to find another sequence ( , we can further assume that ϕ k ∈ C ∞ (Ω) for all k without loss of generality. If W 1,2 D (Ω) now embeds compactly into L 1 loc (Ω \ N ), we can choose a subsequence (ϕ kj ) j∈N and function ϕ : Ω → R such that ϕ kj → ϕ in all L 1 (Ω N,ε ), ε > 0, as j → ∞. As by our assumptions N ∪ ∂Ω is closed and thus Ω \ N = k∈N Ω N,1/k , we can then employ a standard diagonal sequence argument to gain yet another subsequence, which we will again call (ϕ kj ) j∈N for convenience, with the property that ϕ kj → ϕ almost everywhere in Ω \ N and thus almost everywhere in all of Ω as j → ∞ because N is a null set. 
Given that the thus constructed subsequence is further bounded in L 2 (Ω) due to it being bounded in W 1,2 D (Ω), we can use Vitali's theorem and the de La Valleé Poussin criterion for uniform integrability (cf. [9, pp. 23-24]) to conclude that ϕ kj → ϕ in L 1 (Ω) as well, yielding the first part of our result. If D is positive definite on Ω \ N , then for every ε > 0 there exists K(ε) > 0 such that D > K(ε) on Ω N,ε due to the continuity of D and the fact that Ω N,ε ⊆ Ω \ N is compact. Thus, the norms of the spaces W 1,2 (Ω N,ε ) and W 1,2 D (Ω N,ε ) are equivalent. As such, the sequence (ϕ k ) k∈N is bounded in all of the spaces W 1,2 (Ω N,ε ), ε > 0. Due to our further assumption that there exists ε 0 > 0 such that W 1,2 (Ω N,ε ) embeds compactly into L 1 (Ω N,ε ) for all ε ∈ (0, ε 0 ) and the fact that any compact set K ⊆ Ω \ N is a subset of some Ω N,ε as another consequence of Ω \ N being equal to ε>0 Ω N,ε , a standard diagonal sequence argument yields a subsequence along which the functions ϕ k converge to some ϕ in L 1 loc (Ω \ N ). Combining this with the arguments from the previous paragraph then yields the second part of our result. To now complete the proof, we first note that [1, Theorem 6.3] states that a Lipschitz boundary condition for the sets Ω N,ε ensures the Sobolev embedding necessary for our second result and thus the third result follows directly from the second. √ D 1 , √ D 2 ∈ W 1,2 div (Ω; R n×n ) in dimensions two or higher. Furthermore due to the fairly straightforward geometry of the degeneracy set N in both cases, it is easy to verify that both examples also fulfill the third criterion in Lemma 2.9 and thus both D 1 and D 2 allow for a compact L 1 (Ω) embedding in accordance with Definition 2.7. While we have now invested some effort into formalizing the restrictions on D necessary for our later construction of weak solutions, we have yet to clarify what we in fact mean by a weak solution to (2.1). Let us now rectify this in the following definition. Definition 2.11. Let Ω ⊆ R n , n ∈ N, be a bounded domain with a smooth boundary and let χ > 0, µ > 0 We then call a tuple of functions ). As we have at this point clearly defined the target and some of the preconditions, let us now outright state the second main theorem we endeavor to prove in this paper. be positive semidefinite everywhere. Let further D allow for a divergence estimate with exponent β (cf. Definition 2.2) and let D allow for a compact L 1 (Ω) embedding (cf. Definition 2.7). Finally, let u 0 ∈ L z[ln(z)]+ (Ω) and w 0 ∈ C 0 (Ω) be some initial data with Then there exist a.e. non-negative functions that are a weak solution to (2.1) in the sense of Definition 2.11. the same holds true for D 2 . Further due to the arguments presented in Remark 2.10, both D 1 and D 2 allow for a compact L 1 (Ω) embedding. Therefore, the above theorem means that, for sufficiently regular initial data u 0 , w 0 and if either D = D 1 and r and s satisfy (2.5) or D = D 2 and r and s satisfy (2.6), weak solutions to (2.1) in fact exist in two and three dimensions. Existence of Classical Solutions As the existence of classical solutions to (2.1), apart from being an interesting result by its own merits, plays an important role in our construction of their weak counterparts, we will in this section first focus on their derivation. As such, our ultimate goal for this section will be the proof of our first main result, namely Theorem 2.1. 
The methods presented here will in many ways mirror those for similar systems with a standard Laplacian as diffusion operator. We mainly verify that the differing elements in our systems do not impede said methods. Comparing the very strong regularity assumptions for D in this section to the much weaker ones in the following section devoted to the construction of weak solutions, the question why the gap in assumed regularity between these sections is as large as it is naturally presents itself. Let us therefore briefly address this issue. It is certainly possible to derive most of the a priori estimates, which are used in this section to argue that blow-up of local solutions is impossible, under similarly specific regularity assumptions as seen in Definition 2.2 or Definition 2.7 (albeit with some additions). But generalizing the theory employed by us to first gain said local solutions with less regular D would necessitate Schauder and semigroup theory for potentially very degenerate operators, which is out of scope for this paper. Furthermore, we think that this result is already of interest in and of itself. Existence of Local Solutions After this introductory paragraph giving our rational for the assumptions about D in this section, we will now focus on the construction of local solutions to the system (2.1) as a first step in constructing global ones. As for a positive definite matrix D, the diffusion operator in the first equation is strictly elliptic and therefore accessible to most of the same existence and regularity theory as the Laplacian, we will not go into detail concerning the construction of local solutions but rather refer the reader to a local existence result for a similar haptotaxis system with our operator replaced by the Laplacian in [31]. is a classical solution to (2.1) on Ω × (0, T max ) with initial data (u 0 , w 0 ) and satisfies the following blow-up criterion: For ease of further discussion, we now fix such a maximal local solution (u, w) on (0, T max ) with initial data (u 0 , w 0 ) and the parameters as stated in the above introductory paragraphs. Before diving into the derivation of more substantial bounds for the above solution, we derive a straightforward mass bound for the first solution component as well as an L ∞ (Ω) bound for the second solution component. These bounds will not only prove useful when ruling out blow-up in this section but also serve as a baseline for bounds derived in our later efforts focused on the construction of weak solutions. Proof. Integrating the first equation in (2.1) and applying partial integration yields for all t ∈ (0, T ) and therefore immediately give us the first half of our result by time integration. Given that further w t ≤ 0 due to the second equation in (2.1), the second half of our result follows directly as well. A Priori Estimates The next natural step after establishing local solutions with an associated blow-up criterion is of course arguing that finite-time blow-up is impossible and the maximal local solutions were in fact global all along. To do this, we will devote this section to a set of a priori estimates, which increase in strength as the section goes on until they rule out blow-up of both u and w. As is not uncommon in the analysis of these kinds of haptotaxis systems (cf. [31]), we will from now consider the function a := ue −χw defined on Ω × [0, T max ) and its associated initial data a 0 := u 0 e −χw0 defined on Ω in addition to the actual solutions components u and w themselves. 
A simple computation then shows that (a, w) is a classical solution of the following related system: The key property of the above system, which makes it so useful for our purposes, is that it in a sense eliminates the taxis term or at least the explicit gradient of w from the first equation (by in a sense integrating it into a and its diffusion operator). This alleviates many of the normal problems associated with the taxis term in testing or semigroup based approaches used to derive a priori estimates. A second useful property of this transformation is that, by definition, bounds that do not involve derivatives are easily translated back from a to u as we will see later. Note however that, as soon as we want to back propagate bounds about the gradient of a to u, the complications introduced by the taxis term come back into play, making this transformation much less useful for endeavors of this kind. We now begin by translating the baseline estimates given in Lemma 3.2 to our newly defined function a as we will henceforth focus on (a, w) as our central object of analysis for quite some time. We will further for the foreseeable future work under the assumption that T max < ∞ as this is exactly the case we want to rule out by leading this assumption to a contradiction with the blow-up criterion. Proof. As Ω a = Ω ue χw ≤ e χ w L ∞ (Ω) Ω u, this is a direct consequence of Lemma 3.2 if T max < ∞. In preparation for a later Moser-type iteration argument for the first solution component a (cf. [2] and [23] for some early as well as [14] and [28] for some more contemporary examples of this technique), which will later be used to rule out its finite-time blow-up, we will now derive a recursive inequality for terms of the form Ω a p . This recursion will in fact allow us to estimate each term of the form Ω a p by terms of the form ( Ω a p 2 ) 2 with constants independent of p, which will prove sufficient to later gain an L ∞ (Ω) bound for a. The method employed to gain said recursion is testing the first equation in (3.3) with e χw a p−1 followed by some estimates based on the Gagliardo-Nirenberg inequality. To facilitate this derivation of said recursion, we will from now on assume that the regularizing influence of the logistic source term in the first equation of (2.1) is sufficiently strong, or more precisely we assume that either r > 2 or µ is sufficiently large in comparison to χ and the L ∞ (Ω) norm of w 0 . However at this point and therefore for the whole of the Moser-type iteration argument, we will not use our assumed restriction to two or three dimensions just yet. Proof. We test the first equation in (3.3) with e χw a p−1 and apply partial integration to see that for all t ∈ (0, T max ) and p ≥ 2. Given our assumptions for D in (3.1), we can use Young's inequality to further estimate that as well as more elementary that for all t ∈ (0, T max ) and p ≥ 2, which when applied to (3.4) results in for all t ∈ (0, T max ) and p ≥ 2. If r > 2, we can now further estimate that for all t ∈ (0, T max ) and p ≥ 2 by Young's inequality. If, however, r = 2 and µ ≥ χ w 0 L ∞ (Ω) , it is immediately obvious that with K 1 := 1 for all t ∈ (0, T max ) and p ≥ 2. As such, we can in both cases conclude from (3.5) that with K 2 := (µ + 2M 3 )e χ w0 L ∞ (Ω) for all t ∈ (0, T max ) and p ≥ 2. We can now use the Gagliardo-Nirenberg inequality to fix a constant K 3 > 0 such that for all t ∈ (0, T max ) and p ≥ 2 with . Applying this to (3.6) then implies for all t ∈ (0, T max ) and p ≥ 2. 
Time integration then yields for all t ∈ (0, T max ) and p ≥ 2 as T max < ∞, which after estimating the sum on the right-hand side by thrice the maximum of its summands completes the proof. We will now proceed to give the actual iteration argument yielding an L ∞ (Ω)-type bound for a and therefore u, which is sufficient to rule out finite-time blow-up for the first solution component u. Proof. Let p i := 2 i , i ∈ N 0 , and J i := sup t∈(0,Tmax) Ω a pi (·, t) 1 p i . Then J 0 is finite because of Corollary 3.3 and the fact that p 0 = 1. We further know that Due to Lemma 3.4, we can conclude that there exists a constant K 2 ≥ 1 such that the numbers J i conform to the following recursion: Iterating this recursion finitely many times ensures that all J i are finite. If there exists an incrementing sequence of indices i ∈ N, along of which J i ≤ max(K 1 K 2 , K 3 2 ), we immediately gain our desired result by taking the limit of J i along said sequence. As such, we can now assume that there exists i 0 ∈ N with to cover the remaining case. Given these assumptions, the above recursion simplifies to for all i ≥ i 0 with some K 3 > 0 (only depending on K 2 ) as the function z → (zK 2 ) K 2 √ z is bounded on [1, ∞). By now again iterating this recursion finitely many times, we gain that for all i ≥ i 0 due to the series on the right side being of geometric type, we can conclude from (3.7) that the sequence J i is uniformly bounded. Therefore, taking the limit i → ∞ gives us our desired bound for a. As u = ae χw , the corresponding bound for u follows directly from this and Lemma 3.2. To now establish that finite-time blow-up of the second solution component w is equally as impossible, we will begin by testing the first equation in (3.3) with −∇·(D∇a) and combining the result with the differential equation associated with d dt Ω |∇w| 4 . The key to extracting a sufficiently strong bound for w is to then use the strength of the absorptive terms originating from the fully elliptic operator −∇ · (D∇·) to counteract the influence of potentially destabilizing terms due to the haptotaxis interaction. Note that the ellipticity of the operator is ensured because we assume that D is positive definite everywhere in Ω. Lemma 3.6. If T max < ∞ and further r > 2 or µ ≥ χ w 0 L ∞ (Ω) , then there exists a constant C > 0 such that ∇w(·, t) L 4 (Ω) ≤ C for all t ∈ (0, T max ). Proof. Given Lemma 3.5, we can fix a constant K 1 ≥ 1 such that a(·, t) L ∞ (Ω) ≤ K 1 and Ω a 2 (·, t) + a 2r (·, t) + a 4 (·, t) ≤ K 1 for all t ∈ (0, T max ). (3.8) Using the Gagliardo-Nirenberg inequality and standard regularity estimates (cf. [ This in turn implies that for all t ∈ (0, T max ) with K 3 := K 3 1 K 2 . After establishing these preliminaries, we now note that the first equation in (3.3) can also be written as We then test this variant of said equation with −∇ · (D∇a) and employ partial integration (using the fact that (∇ · D) · ν = 0 on ∂Ω) as well as Young's inequality to conclude that for all t ∈ (0, T max ) with K 4 := 8 max µ, µe χ(r−1) w0 L ∞ (Ω) , χ w 0 L ∞ (Ω) e χ w0 L ∞ (Ω) 2 . Using the bounds outlined in (3.1) and (3.8), we can now further derive that for all t ∈ (0, T max ). Applying these three estimates combined with the second bound in (3.8) to (3.10) then yields 1 2 As our second step, we now obtain the following estimate for the time derivative of certain gradient terms of the second solution component w as follows: for all t ∈ (0, T max ) with K 7 := w 0 L ∞ (Ω) e χ w0 L ∞ (Ω) . 
Now combining this with (3.11) (using an appropriate scaling factor) we gain for all t ∈ (0, T max ) with K 8 := K 5 + 1 4K3 . The application of (3.9) to the inequality above then yields with K 9 := 16K 3 K 7 K 8 for all t ∈ (0, T max ), which, by a standard comparison argument and the assumption that T max is finite, directly gives us our desired result. Remark 3.7. The result of the above lemma only ensures that finite-time blow-up of the second solution component is impossible in two and three dimensions according to our blow-up criterion (3.2). As such, it is at this point and only this point in this section, where our restriction to two or three dimensions becomes necessary. This, of course, in turn means that any extension of the results of this section to a higher dimensional setting would only need to extend the above argument to one providing better bounds for the gradient of w. Given that Lemma 3.5 and Lemma 3.6 rule out any kind of finite-time blow-up for our local solutions, the proof of the first central result of this paper can now be stated quite succinctly. Proof of Theorem 2.1. If we assume T max < ∞, Lemma 3.5 and Lemma 3.6 in combination contradict the consequence of the blow-up criterion (3.2) in this case. Therefore, T max = ∞ and thus the local solutions constructed in Lemma 3.1 must be in fact global. This is sufficient to prove Theorem 2.1 as the fixed assumptions of this section were in fact identical to those of said theorem. Remark 3.8. It is also possible to construct classical solutions in the two dimensional case without relying on logistic influences by using some methods that have previously been used when for example dealing with standard diffusion and some slightly modified versions of our arguments (cf. [3]). Essentially, the argument boils down to using an estimate of the form with ε being potentially arbitrarily small (cf. [5, p.1199]) in combination with an additional baseline Ω u ln(u) estimate based on an energy-type inequality (cf. Lemma 4.2) to establish an L 2 (Ω) estimate. From there, the arguments are very similar to the Moser-type iteration argument presented above, only with some slight complications added, which are easily surmountable. Lemma 3.6 translates basically verbatim. We decided not to present this result here as it will not be needed for our later construction of weak solutions and is not appreciably different from what we have done here or has already been done in the classical diffusion case. Existence of Weak Solutions We have at this point established all the classical existence theory we want to address in this paper and therefore will now transition to our construction of weak solutions, which is in part based on said classical theory. Approximate Solutions As is fairly common, our construction of weak solutions will centrally rely on approximation of said solutions by classical solutions, which solve a suitably regularized version of the original problem. As we already derived global existence of classical solutions for the system (2.1) with very strong assumptions on D, we of course want to construct our weak solutions under much weaker assumptions on D because there would be almost nothing gained otherwise. As such, the central regularization employed by us will be concerned with approximating a potentially quite irregular D by matrices D ε that are sufficiently regular to ensure classical existence of solutions. Apart from this, we will use approximated initial data. 
We will also slightly modify the logistic source term to ensure r > 2 in our approximated system because we can then further eliminate the assumption concerning the parameters χ and µ needed for the classical theory when r = 2. One central advantage of this approach is that our approximate systems are very close to the system we actually want to construct solutions for and thus our regularizations only minimally interfere with the structures present in the system, which we want to exploit for e.g. a priori information. As for any β ∈ [ 1 2 , 2 3 ] the condition β 1−β ≤ r is always fulfilled independent of our choice of r ∈ [2, ∞) and as it is easy to see that, if D allows for a divergence estimate in accordance with Definition 2.2, it also allows for a divergence estimate with any larger exponent, we can assume that the parameter β seen in the second of the above properties is in fact an element of [ 2 3 , 1) ⊆ ( 1 2 , 1) without loss of generality. Then according to Remark 2.3, the aforementioned divergence estimate directly implies that with q := 2β 2β−1 . Given these assumptions, we now choose an approximate family (D ε ) ε∈(0,1) ⊆ C 2 (Ω; R n×n ) with D ε positive definite on Ω, (∇ · D ε ) · ν = 0 on ∂Ω for all ε ∈ (0, 1) and We can further choose this family in such a way as to ensure that for all Φ ∈ C 0 (Ω; R n ) and ε ∈ (0, 1). These additional properties for the approximation D ε essentially mean that the regularity properties assumed for D are also valid for said approximation in an ε independent fashion. Remark 4.1. Let us briefly illustrate how such an approximation of D = (d i,j ) i,j∈{1,...,n} can be achieved. This will be a two-step process. We first approximate D in our desired function space with the appropriate boundary conditions and then, as a second step, we show that, with only slight modification, we can gain the remaining properties from that approximation. For the initial approximation, we assume without loss of generality that D is smooth. We can do this as it is well-known that a standard convolution argument would give us a smooth approximation of D in our desired space, which we can then approximate again to gain all additional desired properties. In our case, the key property not covered by such a convolution based method is that we want all our approximate matrices to have very specific boundary values. As such, we will now demonstrate how an approximation of a smooth D by matrices with exactly this property can be achieved using the continuity properties of semigroups associated with carefully chosen sectorial operators (cf. [12]). To this end, we fix functions d ′ i,j such that for all i, j ∈ {1, . . . , n}. As can be easily seen, the functions d ′ i,j are linear combinations of the components of D and therefore smooth as well. We then set d ′ i,j,ε = e εLi,j d ′ i,j , ε ∈ (0, 1), where L i,j is the negative Laplacian on Ω with boundary conditions ∇ϕ·ν+ 1 2 (∂ xi ϕ)ν j + 1 2 (∂ xj ϕ)ν i = 0 and (e tLi,j ) t≥0 is the associated semigroup. Due to the well-known continuity properties of said semigroup (cf. [15], [22], [33]), we know that d ′ i,j,ε → d ′ i,j and therefore d i,j,ε → d i,j in W 1,q (Ω) ∩ C 0 (Ω) as ε ց 0 with d i,j,ε defined in an analogous fashion to (4.4). Thus, D ε := (d i,j,ε ) i,j∈{1,...,n} → D in our desired way. Further, on ∂Ω for all ε ∈ (0, 1) due to the prescribed boundary conditions of the operators L i,j . Thus, we have constructed a suitable approximate family for D with the correct boundary conditions. 
Having now presented the full argument used to achieve the boundary condition (4.5), let us briefly note that we introduced the functions d'_{i,j} to ensure that the operators L_{i,j} have sufficiently non-tangential boundary conditions and are therefore sectorial (cf. [22], [33]), which is of course necessary for our semigroup-based arguments.

As our second step, we now fix one such family of approximations of D and call it D'_ε, ε ∈ (0, 1), as we still want to modify it slightly. We can assume this approximation to be suitably close to D for all ε ∈ (0, 1) without loss of generality. If we then adjust it to obtain D_ε for all ε ∈ (0, 1), none of the desired properties already derived are affected, as we only modify D'_ε by adding constants that converge to zero as ε ↘ 0. This gives us (4.3). To derive the divergence estimate, we first observe the corresponding identity for all ε ∈ (0, 1) and Φ ∈ C⁰(Ω̄; ℝⁿ). We can then estimate further, for all ε ∈ (0, 1) and Φ ∈ C⁰(Ω̄; ℝⁿ), using our assumed divergence estimate for D together with (4.3). This gives us (4.2) and thus completes the discussion of our construction.

We will now proceed to construct our approximate initial data. To do this, we first fix families (u_{0,ε})_{ε∈(0,1)}, (w'_{0,ε})_{ε∈(0,1)} ⊆ C³(Ω̄) of positive functions with (D_ε∇u_{0,ε}) · ν = (D_ε∇w'_{0,ε}) · ν = 0 on ∂Ω and the appropriate convergence properties as ε ↘ 0. These families can again be constructed by using convolutions or by a semigroup-based method similar to the one seen before in the much more challenging case of the family (D_ε)_{ε∈(0,1)}. Positivity of both families can further be achieved by first approximating the functions in a non-negative way, which is a property of both convolution- and semigroup-based methods, and then adding ε to the resulting approximation as a secondary step. One important consequence of the above approximations is that we can fix a uniform constant M > 0 bounding the approximate initial data for all ε ∈ (0, 1).

Uniform A Priori Estimates
We will now derive the bounds necessary to ensure compactness of our families of approximate classical solutions in function spaces conducive to the construction of our desired weak solutions to (2.1) as limits of said approximate solutions along a suitable sequence of ε ∈ (0, 1). Apart from the baseline established in Lemma 3.2 for the classical existence theory, which can easily be translated to our approximate solutions in an ε-independent fashion, we will now derive some extended bounds based on an energy-type inequality as an additional baseline for later arguments in this section. This type of energy inequality was already used in the one-dimensional case in [39]. By another testing procedure for the first equation in (4.8), which is very similar to the one already used by us in the proof of Lemma 4.2, we will then derive our final preliminary set of bounds for this section.

Proof. Fix T > 0. We first note the claimed convergence as ε = ε_j ↘ 0.

Proof. Given that both families are bounded in the appropriate spaces according to Lemma 4.5, we can apply the Aubin–Lions lemma (cf. [32]) to the above families using the triple of embedded spaces W^{1,2}_D(Ω) ⊆ L¹(Ω) ⊆ (W^{n+1,2}(Ω))*. Note that this is only possible because the first embedding is in fact compact by our assumptions (cf. Definition 2.7). Therefore, there exist a null sequence (ε_j)_{j∈ℕ} ⊆ (0, 1) and functions ũ, w : Ω̄ × [0, T) → ℝ with the convergence properties below as ε = ε_j ↘ 0. This sequence is constructed by applying the Aubin–Lions lemma countably many times on time intervals of the form [0, T], T ∈ ℕ, combined with a straightforward extension and diagonal sequence argument.
We can further choose the above sequence in such a way as to ensure that u_ε^{1/2} → ũ and w_ε → w pointwise almost everywhere as ε = ε_j ↘ 0 by potentially switching to another subsequence. Due to the family (w_ε)_{ε∈(0,1)} furthermore being uniformly bounded in L^∞(Ω × (0, ∞)) (cf. (4.7) and Lemma 3.2), the above convergence properties directly imply (4.20) as well as the fact that w is non-negative almost everywhere and w ∈ L^∞(Ω × (0, ∞)). We now set u := ũ² and observe that the above almost-everywhere pointwise convergence for the already constructed sequences then ensures the corresponding convergence for u. As all regularity properties for u and w not yet explicitly established follow directly from the convergence properties, and we have at this point proven all said properties, this completes the proof.

For the remainder of this section, we will now fix the functions u, w as well as the sequence (ε_j)_{j∈ℕ} constructed in the preceding lemma. While the convergence properties derived in Lemma 4.6 are in fact already sufficient to allow us to translate the weak solution property from our approximate solutions to our now established solution candidates, we will, as a last effort before the proof of Theorem 2.12, derive some more specifically tailored convergence properties to handle some of the more complex terms in the weak solution definition. We now similarly estimate that
12,136.6
2022-02-15T00:00:00.000
[ "Mathematics" ]
Analysis of an on-demand food delivery platform: Participatory equilibrium and two-sided pricing strategy
Abstract The on-demand food delivery platform assigns orders from customers to independent couriers, known as agents. An agent's decision to provide services depends on the wage and the order assignment. The platform can adjust the actual supply and demand through two-sided pricing, such that an equilibrium between customers and agents that maximizes the platform's profit can be formed. We develop a stylized model to investigate the optimal price and wage of a platform facing delay-sensitive customers and income-sensitive agents. We find three possibilities for participatory equilibrium in the platform as the pair of wage and price changes. Meanwhile, all participatory equilibrium regions have two asymptotically stable equilibria, one of which is a nonparticipatory equilibrium. The platform can form a participatory equilibrium only when it attracts enough customers and agents in the initial phase. Our analysis shows that the optimal pricing strategy and the maximum revenue for a platform depend on the valuation and delay sensitivity of the target customers. Furthermore, contradicting intuition, our results show that the optimal wage is non-increasing in the total demand rate and that the optimal price is non-decreasing in the service capacity of the platform.

Introduction
With the advances in computer technology and the mobile Internet, the sharing economy impacts service industries in the form of on-demand service platforms that coordinate resources from two sides in real time, i.e., by recruiting independent service providers to meet the demands of customers (Zhong et al., 2020). On-demand food delivery platforms have boomed in recent years, especially during the COVID-19 crisis. In 2019, the worldwide revenue from online food delivery reached 94.385 billion dollars, of which China accounted for more than 42% (Mao et al., 2019). Meituan, the largest online food delivery platform in China, currently has more than 50 million registered customers, 4.7 million couriers, and 7 million restaurants. On-demand food delivery platforms implement an advanced system for ordering and delivering food via computer technology and the mobile Internet. Specifically, customers, restaurants, and couriers are connected through a platform. Restaurants display their food on the platform, and customers pick and place their orders. Afterward, the platform assigns the delivery of orders to independent couriers registered on the platform, whom we refer to as agents. The current practice in the industry is that the platform shares a portion of the restaurant's food revenue and charges a flat delivery fee per order from customers (Chen et al., 2022). At the same time, the agent receives a wage from the platform after completing each food delivery.
This study aims to investigate the optimal two-sided pricing strategy of the food delivery platform when facing delay-sensitive customers and income-sensitive agents. Given the fierce competition among platforms, customers have multiple platforms to choose from, and switching from one platform to another is trivial (Williams et al., 2020). Moreover, they can order from the same restaurant at similar prices on different platforms. As a result, compared with the price of food, the fee and efficiency of delivery have a greater effect on the customers' platform choice. The delivery fee is directly determined by the platform, whereas the delivery efficiency is influenced by the number of agents involved. Agents within on-demand service platforms are usually independent, i.e., they can decide whether and when to work (Taylor, 2018). On-demand food delivery platforms must attract enough agents to provide delivery services and avoid losing customer demand. However, agents choose to provide services only when the expected income is sufficient. Factors that affect agents' expected income include the wage paid by the platform for each delivery service and the probability of being matched to a customer's request. Therefore, through two-sided pricing, the platform can directly or indirectly regulate the decisions of customers and agents and thus balance supply and demand.

We construct a model framework that characterizes the cross- and within-group network effects of customers and agents. Customers' and agents' choices influence each other, which constitutes the cross-group network effect. Meanwhile, the within-group network effect among customers is reflected by the fact that increased customer demand reduces customers' utility by increasing their waiting time (Bai et al., 2019). Similarly, when a large number of agents choose to provide services at the same time, not all of them will be assigned delivery work, such that an agent's utility decreases in the number of agents providing the service. As another key feature, our model captures the equilibrium state of the food delivery platform, which is determined by customers' and agents' choices after they are informed of the price (i.e., the delivery fee) and the wage.
This study establishes a stylized model of the food delivery platform that enriches the related research from the perspective of operations management. Our contributions are fourfold. First, our paper examines the relationship between customers and agents on the food delivery platform, in which they are affected by both the cross- and within-group network effects. The platform, as a medium of transaction, can regulate the actual supply and demand and improve its profitability through two-sided pricing once it understands these two effects. Second, for some wage-price regions, there exist two asymptotically stable equilibria for the platform, demonstrating that the platform should guide the choices of customers and agents so as to induce the service system to reach the more profitable equilibrium. Furthermore, our analysis shows that when the platform cannot attract enough customers and agents in the initial phase, a nonparticipatory equilibrium is formed. It is worth noting that setting a low price and a high wage does not completely free the platform from this dilemma; it is also essential that customers and agents have high enough expectations of the service and demand rates. Third, our study shows that the optimal pricing strategy and the maximum revenue for a platform depend on the valuation and delay sensitivity of the target customers. When the customers' valuation is high enough, the higher their delay sensitivity is, the higher the optimal wage and price of the platform are. Finally, in contrast to the intuition that the platform should raise the wage for agents when the total demand rate increases and lower the price of service when the platform's service capacity expands, our results show that the optimal wage is non-increasing in the total demand rate and that the optimal price is non-decreasing in the service capacity of the platform. This finding reflects the influence of the cross-group network effect on the platform's optimal decisions.

The rest of this paper is organized as follows. Section Literature review reviews the related literature. By analyzing the model described in Section Model, Section Optimal strategy and revenue derives the optimal strategy and revenue for the platform. Section Discussion further discusses the effective service capacity of the platform. Section Conclusion concludes the paper. All proofs are relegated to the Appendix, which is provided as supplementary material.

Literature review
Our work relates to the literature on on-demand service platforms in a sharing economy environment. On-demand service platforms cover a variety of industries, such as ride-sharing, food delivery, healthcare, household services, and grocery delivery (Xu et al., 2021). Three main streams of platform literature relate to our study: (i) food delivery platforms, (ii) ride-sharing platforms, and (iii) grocery delivery platforms.

Our study relates to the literature on food delivery platforms. The on-demand food delivery platform is a three-sided market with network interactions among three parties: customers, restaurants, and couriers. Due to the complexity of its modeling and analysis, most of the literature focuses on a particular aspect of the food delivery platform. For example, Chen et al. (2022) and Feldman et al. (2023) investigate the impact of food delivery services on the restaurant industry. According to the brief review of food delivery platforms provided by Seghezzi et al.
(2021), the existing research on on-demand food delivery services can be summarized as follows: (i) associated pickup and delivery issues in the logistics and operational research domains (Tu et al., 2020); (ii) discussions of this business model from the finance, strategy, and microeconomics perspectives (Seghezzi & Mangiaracina, 2020); (iii) research on increasing consumer purchase intentions by scholars in marketing, see, e.g., Mao et al. (2019) and Williams et al. (2020); and (iv) discussions of related labor policy and legal issues, see, e.g., Belanche et al. (2021) and Li and Wang (2021). In practice, the customer's willingness to pay is affected by the fee and efficiency of delivery. However, previous studies have ignored this fact. Hence, differing from the works mentioned above, we restrict attention to the relationship among customers, couriers, and the food delivery platform.

Our study also relates to the previous literature on ride-sharing platforms highlighting the impact of price and wage on the operation of platforms; see, e.g., Bai et al. (2019), Lin and Zhou (2019), Guda and Subramanian (2019), Krishnaprasad and Tripathi (2020), and Zhong et al. (2020). Previous studies focused on finding a more effective pricing strategy by comparing dynamic and static pricing. Riquelme et al. (2015) reveal that static pricing performs well when customers are heterogeneous and the payout ratio is exogenous. By contrast, Guda and Subramanian (2019) state that surge pricing can play a good scheduling role for free service providers, especially when the platform requires them to provide services across regions. Different from ride-sharing platforms, which need to consider customers' cross-region service requirements, food delivery platforms have a smaller order delivery range. Thus, our model focuses on how a static two-sided pricing strategy affects the platform equilibrium evolved by customers' and agents' decisions. Furthermore, our model takes into account the case where multiple orders are matched to one agent at the same time, as orders on the delivery platform are concentrated at meal times.

Another stream of relevant literature examines the mechanisms of on-demand grocery delivery platforms. Couriers for grocery delivery services typically have more time and route flexibility than those for food delivery services (Tao et al., 2023). The relevant works mainly focus on routing operations and matching drivers with delivery tasks, such as Arslan et al. (2019), Bahrami et al. (2021), and Arslan et al. (2021). The paper most relevant to our work on delivery services is Kung and Zhong (2017), which also studies the optimal pricing strategy of a two-sided platform composed of shoppers (i.e., couriers) and consumers. They explore the optimal equilibrium of the platform with cross-side network effects under three pricing strategies. In our work, we focus on the effect of the delay time on the customer's utility, rather than assuming that utility is affected by the quality of the delivery service. Furthermore, our model considers both within- and cross-group network effects, which affect the customers' and couriers' decision-making during meal times.
Model
The on-demand food delivery platform is a three-sided platform composed of customers, restaurants, and couriers (called agents hereafter). To explore the equilibrium formed by the evolution of customers' and agents' decisions, we focus on the interaction between customers and agents in a food delivery platform and do not concentrate on the role of the restaurant. In our model, restaurants, as third-party food suppliers, are assumed to have sufficient variety and quantity. Customers order according to their location and taste, without the congestion caused by insufficient restaurant capacity.

Assume that the potential market demand rate is K (called the total demand rate hereafter). Meanwhile, suppose that the upper limit of the service rate for agents registered with the platform is l̄ (called the service capacity hereafter). When all agents participate in the service, the platform's service rate reaches its service capacity. In our model, both K and l̄ are assumed to be constants. The customers' arrival rate, k, varies with the price and efficiency of service and is upper bounded by K. Similarly, the platform's actual service rate l is no higher than its service capacity l̄. For each delivered service, the platform charges the customer a price p and pays a wage w to the agent who provides the service.

We assume that each customer has a valuation v of receiving the service and is delay sensitive. Therefore, the customer's expected utility function is given by U_c(k, l) = v − p − t · W(k, l), where t is the delay cost and W(k, l) is the expected delay. Assume that W(k, l) strictly increases in k and decreases in l, with W(0, l) = 0. The customer chooses to wait for service if and only if U_c is nonnegative, i.e., v ≥ t · W(k, l) + p. On the other hand, the income of agents is influenced by wage and cost. Suppose they incur an opportunity cost c when providing services. Note that not all idle agents can be matched immediately with customers' demand if too many participating agents are available. Accordingly, we define q(k, l) as the matching ratio, which is strictly increasing (resp., decreasing) in k (resp., l). If an agent participates in the service, then his/her expected utility is given by U_a(k, l) = w · q(k, l) − c. Consequently, the agent chooses to provide services if and only if q(k, l) > c/w.

Considering the interaction between the customers' and agents' decisions, the actual arrival and service rates are determined based on their expectations of the other party. Hence, we use the notation k_a(l) to indicate that the actual arrival rate is a function of the service rate l, and l_a(k) to indicate that the actual service rate is a function of the arrival rate k. Given that W(k, l) strictly increases in k and decreases in l, for a given service rate l, we observe three possibilities for the actual arrival rate. If the expected utility is nonnegative for a customer even when all customers join the platform (i.e., v − p − t · W(K, l) ≥ 0), then every customer will choose to wait for a service, which implies that k_a(l) = K. The other extreme situation is that the expected utility is negative even when no customer joins the queue (i.e., v − p < 0). Then all customers will choose to leave, which yields k_a(l) = 0. In other cases, when 0 ≤ v − p < t · W(K, l), each customer plays a mixed strategy in equilibrium, which indicates that each customer has a certain probability of joining or leaving the platform, with the equilibrium arrival rate k_a(l) ∈ (0, K) satisfying v − p − t · W(k_a(l), l) = 0.
Similarly, for a given arrival rate k, we observe three possibilities for the actual service rate. The above discussion leads to the following result:

Lemma 3.1. a. For a given service rate l, the actual arrival rate k_a(l) is given by the corresponding expression, which by convention takes a value of 0 if the set is empty. b. For a given arrival rate k, the actual service rate l_a(k) is given by the corresponding expression, which by convention takes a value of 0 if the set is empty.

Lemma 3.1 characterizes the decisions of customers and agents, respectively. We assume that neither customers nor agents have full information; customers' and agents' decisions are based on their expectations of the service and arrival rates. Moreover, they constantly revise their expectations of each other until their expectations are in line with reality. In other words, the equilibrium state of the platform is formed through continuous learning and adjustment by the customers and the agents, which is consistent with evolutionary game theory. Suppose that the proportion of customers choosing to stay on the platform to request services is x, so the proportion choosing to leave is 1 − x. Hence, the platform's actual arrival rate is k_a(l) = xK. Similarly, supposing that the proportion of agents choosing to participate in services on the platform is y, the platform's actual service rate satisfies l_a(k) = yl̄. According to evolutionary game theory, the replicator dynamic equations are given by ẋ = x(1 − x)U_c(xK, yl̄) and ẏ = y(1 − y)U_a(xK, yl̄), where both x and y are time varying, and ẋ and ẏ denote their rates of change over time. By solving these two replicator dynamic equations, we can obtain the equilibrium state of the platform, which consists of the effective arrival rate and the effective service rate. Introducing evolutionary game concepts to study innovation platforms has precedent in the literature, such as Wang (2020) and Mai et al. (2023).

In our model, the equilibrium state of the platform is composed of the arrival and service rates, which we call the equilibrium rate pair.

Definition 3.2. The equilibrium rate pair is (k_e, l_e), where l_e = l_a(k_e) and k_e = k_a(l_e).

Note that there is always a nonparticipatory equilibrium, i.e., (k_e, l_e) = (0, 0). First, for a high price or a low wage, such as p > v and w < c/q(k, l), customers and agents are bound to leave. However, a nonparticipatory equilibrium is still possible even if the platform sets an appropriate price and wage. The reason is that the platform does not attract enough customers and agents in the initial phase. When the actual service and demand arrival rates consistently fall below customers' and agents' expectations, they continue to lower their expectations. Eventually, no customer will request a service or no agent will participate in the service, forming a nonparticipatory equilibrium. After analyzing the equilibrium rate pairs, we arrive at the following theorem.
Theorem 3.3. There are four possibilities of participatory equilibrium: (K, l̄), (K, l₁), (k₁, l̄), and (k₂, l₂). In particular, if the wage exceeds thresholds of the form max{c/q(k₂, l̄), c/q(k₂, 0)} and p ∈ (v − t · W(K, l̄), v), then the participatory equilibrium rate pairs are (k₁, l̄) and (k₂, l₂).

Theorem 3.3 demonstrates that, given the wage and price, the equilibrium may not be unique. In the face of multiple equilibria, it is crucial to determine which equilibrium is most preferable for the platform before determining its optimal pricing strategy. Next, we introduce expressions for W(k, l) and q(k, l) to derive the specific multiple equilibrium rate pairs and discuss their stability.

Optimal strategy and revenue
In this section, we explore the optimal strategy and revenue for the platform. For analytical simplicity, we adopt the following assumption for the remainder of this paper.

Assumption 1. a. The expected delay of the customer, W(k, l), is given by k/(l(l − k)) for k < l and is infinite for k ≥ l. b. The matching ratio of the agent, q(k, l), is given by γk/(αl + k), where both α and γ are positive constants.

We make the following remarks about Assumption 1. First, both the expected delay of the customer and the matching ratio of the agent are consistent with the basic assumptions of our model. Second, Assumption 1(a) shows that the expected delay of the customer increases with the utilization (i.e., k/l) of the agents, growing without bound as the utilization approaches 1 and becoming infinite at or beyond full utilization. Finally, Assumption 1(b) reflects that the agents' matching ratio is also affected by external factors. For example, unexpected events, such as customer order changes, restaurant order cancellations due to insufficient food preparation capacity, and poor communication network connections, negatively affect the agents' working status. Therefore, we adopt a parameter α to measure the platform's matching inefficiency, such that a large value of α corresponds to a low matching ratio for agents. Meanwhile, we use the parameter γ to capture the feature of the food delivery platform that an agent may match multiple orders in one trip. We find that the matching ratio of the agent is greater than one when γ > 1 + αl/k.

Equilibrium rate pairs
Under Assumption 1, the equilibrium of the platform under different wage and price conditions can be characterized as follows.
Proposition 4.1. a. When K < l̄, there are four cases:
i. If w > c(αl̄ + K)/(γK) and p ≤ v − tK/(l̄(l̄ − K)), the asymptotically stable equilibria are (0, 0) and (K, l̄).
ii. If w ∈ ((1 + α)c/γ, c(αl̄ + K)/(γK)] and p ≤ v − α²c²t/[K(γw − c)(γw − c − αc)], the asymptotically stable equilibria are (0, 0) and (K, K(γw − c)/(αc)).
iii. If w ≥ c(αl̄ + K)/(γK) and p ∈ (v − tK/(l̄(l̄ − K)), v − αct/(l̄(γw − c − αc))], the asymptotically stable equilibria are (0, 0) and (l̄²(v − p)/(l̄(v − p) + t), l̄).
iv. Otherwise, the asymptotically stable equilibrium is (0, 0).
b. When K ≥ l̄, there are two cases:
i. If w > (1 + α)c/γ and p ≤ v − αct/(l̄(γw − c − αc)), the asymptotically stable equilibria are (0, 0) and (l̄²(v − p)/(l̄(v − p) + t), l̄).
ii. Otherwise, the asymptotically stable equilibrium is (0, 0).

Proposition 4.1 demonstrates three possibilities of a participatory equilibrium within certain wage-price regions. Once the service capacity of the platform is sufficient, i.e., K < l̄, all three participatory equilibria are possible. When the wage is high (w > c(αl̄ + K)/(γK)) and the price is low (p ≤ v − tK/(l̄(l̄ − K))), the platform achieves a participatory equilibrium in which all customers choose to request the service and all agents participate in providing it. The supply and demand sides receive positive utility due to the high wage and low price, respectively. By contrast, a low wage (w ∈ ((1 + α)c/γ, c(αl̄ + K)/(γK)]) triggers a participatory equilibrium in which part of the agents serve all customers, whereas a high price (p ∈ (v − tK/(l̄(l̄ − K)), v − αct/(l̄(γw − c − αc))]) forms another participatory equilibrium in which part of the demand is lost while all agents participate in the service. If the service capacity of the platform is insufficient, i.e., K ≥ l̄, it is only possible to form the participatory equilibrium in which all agents serve part of the customers. Moreover, the necessary condition for this participatory equilibrium is that the wage and price satisfy w > (1 + α)c/γ and p ≤ v − αct/(l̄(γw − c − αc)).

To clarify this result, we use a concrete example to examine the effect of both wage and price on the participatory equilibrium. Figure 1(a) shows the situation in which the platform's service capacity exceeds the total demand, whereas Figure 1(b) depicts the opposite situation. For K < l̄, we define the wage-price regions A₁, A₂, and A₃ as the sets of wage-price pairs satisfying the conditions in cases (i), (ii), and (iii) of Proposition 4.1(a), respectively. Meanwhile, for K ≥ l̄, we define the wage-price region A₄ by the condition in Proposition 4.1(b)(i). These regions, shaded in different colors in Figure 1, correspond to different equilibrium rate pairs: the blue region corresponds to (K, l̄) and covers A₁, the red region represents (K, K(γw − c)/(αc)) and covers A₂, and the green region corresponds to (l̄²(v − p)/(l̄(v − p) + t), l̄) and covers A₃ and A₄. As mentioned in Section Model, there is always a nonparticipatory equilibrium. Therefore, as shown in Figure 1, the wage-price regions can be divided into two categories: 1) the only-nonparticipatory equilibrium region, which is the blank area in the figure; and 2) the participatory equilibrium regions, which are denoted by A₁, A₂, A₃, and A₄ and marked with colors.

Recall from Theorem 3.3 that there exist multiple participatory equilibria in the participatory equilibrium regions. To avoid trivial discussions, as in Wang and Fang (2022), we focus only on the stable equilibria of the platform. Proposition 4.1 illustrates that all participatory equilibrium regions have two asymptotically stable equilibria, one of which is a nonparticipatory equilibrium.
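To make this bistability concrete, here is a minimal numerical sketch (ours, not the paper's) of the replicator dynamics ẋ = x(1 − x)U_c(xK, yl̄), ẏ = y(1 − y)U_a(xK, yl̄) under Assumption 1. The parameters γ = 3, α = 0.4, c = 1, K = 11, and l̄ = 15 follow the caption of Figure 2; v = 10, t = 1, the wage-price pair, and the forward-Euler discretization are our own illustrative choices:

```python
# Parameters: gamma, alpha, c, K, l_bar follow the Figure 2 caption;
# v and t are illustrative picks of ours.
v, t, c, alpha, gamma = 10.0, 1.0, 1.0, 0.4, 3.0
K, l_bar = 11.0, 15.0

def W(k, l):
    """Expected delay under Assumption 1(a); 'infinite' delay is capped for numerics."""
    return k / (l * (l - k)) if 0.0 <= k < l else 1e6

def q(k, l):
    """Matching ratio under Assumption 1(b)."""
    return gamma * k / (alpha * l + k) if k + l > 0.0 else 0.0

def simulate(x0, y0, p, w, steps=50000, dt=2e-3):
    """Forward-Euler integration of the replicator dynamics
    x' = x(1-x) U_c(xK, y l_bar),  y' = y(1-y) U_a(xK, y l_bar)."""
    x, y = x0, y0
    for _ in range(steps):
        k, l = x * K, y * l_bar
        u_c = v - p - t * W(k, l)          # customer utility
        u_a = w * q(k, l) - c              # agent utility
        x = min(max(x + dt * x * (1.0 - x) * u_c, 0.0), 1.0)
        y = min(max(y + dt * y * (1.0 - y) * u_a, 0.0), 1.0)
    return x * K, y * l_bar                # estimated equilibrium rate pair

# A wage-price pair that should lie in region A1 for these parameters:
print(simulate(0.90, 0.90, p=6.0, w=2.0))  # optimistic start -> near (K, l_bar)
print(simulate(0.02, 0.02, p=6.0, w=2.0))  # pessimistic start -> near (0, 0)
```

Starting from optimistic versus pessimistic initial expectations, the same wage-price pair is expected to evolve toward the participatory equilibrium (K, l̄) or collapse toward (0, 0), matching the two evolutionarily stable strategies discussed next.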
According to Li et al. (2020), these equilibrium rate pairs are considered evolutionarily stable strategies. If there are two evolutionarily stable strategies under the same wage-price condition, which one is formed depends on the initial state of the platform. To avoid the nonparticipatory equilibrium, the platform must ensure that it attracts enough customers and agents in the initial phase. Note that there are also two evolutionarily stable strategies in region A₁. This implies that setting a low price and a high wage does not by itself enable the platform to attract enough customers and agents to escape the nonparticipatory equilibrium; it is essential that customers and agents have sufficiently high expectations of the service and demand rates.

Optimal strategy of the platform
Let R(p, w) denote the platform's expected revenue, which can be written as R(p, w) = (p − w)k_e. The platform's objective is to find the optimal wage and price to maximize its expected revenue R(p, w). For the only-nonparticipatory equilibrium region, the unique equilibrium arrival rate k_e = 0 leads to R(p, w) = 0. If the wage and price are located in the participatory equilibrium regions, in which two stable equilibria exist, then the equilibrium that brings the higher expected revenue is treated as the optimal equilibrium rate pair. To investigate the optimal strategy for the platform, we need to solve for the optimal equilibrium rate pair and the corresponding revenue in the participatory equilibrium regions A₁-A₄.

For the wage-price regions A₁ and A₂, the arrival rate of the participatory equilibrium is K. Therefore, the expected revenue of the platform is R_K(p, w) := (p − w)K. By contrast, in regions A₃ and A₄, the arrival rate of the participatory equilibrium is l̄²(v − p)/(l̄(v − p) + t). Therefore, the expected revenue of the platform is R_k(p, w) := l̄²(p − w)(v − p)/(l̄(v − p) + t). As a result, we formulate three optimization problems for regions A₁, A₂, and A₃ when K < l̄, and, for K ≥ l̄, a corresponding optimization problem for region A₄. We obtain the maximum revenues R₁*, R₂*, R₃*, and R₄* by solving these optimization problems and then deduce the optimal strategy for the platform. If Rᵢ* > 0 (i = 1, 2, 3, 4), the participatory equilibrium rate pair is optimal for region Aᵢ. Otherwise, the nonparticipatory equilibrium, which generates zero profit, is optimal for region Aᵢ.

First, we consider the case where the platform's service capacity is sufficient, i.e., K < l̄. Theorem 4.2 below characterizes the optimal decisions for the platform in each wage-price region with a participatory equilibrium, and Theorem 4.3 determines the optimal price and wage by comparing each region's maximum expected revenue obtained from Theorem 4.2. For brevity, we introduce the following notation: w₂ is the unique root of F(w) := α²c²tγ(γw − c − αc + γw − c) − K(γw − c)²(γw − c − αc)² = 0 on ((1 + α)c/γ, c(αl̄ + K)/(γK)), and p₃ is the unique root of G(p) := γl̄(v − p)² + (γv − 2γp + c)t = 0 on (v − tK/(l̄(l̄ − K)), v).

Theorem 4.2. When K < l̄, the platform's optimal decisions for each participatory equilibrium region are as shown in Table 1.
Theorem 4.2 shows the optimal decisions of the platform in each participatory equilibrium region when the service capacity is sufficient. To present the results (including Theorems 4.2 and 4.3) clearly, we plot Figure 2, where the first three subgraphs show the maximum expected revenue in the wage-price regions A₁-A₃. Similar to Figure 1, we use different color shades to indicate the optimal equilibrium rate pair for each participatory equilibrium region. We derive several findings from Theorem 4.2 and Figure 2. First, the platform's expected revenue must be nonpositive in all wage-price regions when customers have a low valuation or significantly high delay sensitivity. These customers do not accept a high price and do not have the patience to wait. To attract more customers and agents, the platform would have to decrease the price to offset the negative effect of low valuation and increase the wage to reduce the customers' expected delay. However, the platform is hardly profitable in such a case. Second, (w₁, p₁) is likely to be optimal among regions A₁, A₂, and A₃ when K < l̄ (i.e., R₁* in panels (a), (b), and (c) of Figure 2). This wage and price vector is located at the intersection of these three regions, and at this point, K(γw − c)/(αc) = l̄ and l̄²(v − p)/(l̄(v − p) + t) = K. In other words, at this point, the optimal equilibrium rate pair in regions A₂ and A₃ is (K, l̄). Third, when the customers' valuation is sufficiently high, the optimal equilibrium rate pair shifts as the customers' delay sensitivity increases. For example, when v > v₁ in region A₂ (Figure 2(b)), the optimal equilibrium rate pair changes from (K, K(γw₂ − c)/(αc)) to (K, l̄) and then to (0, 0) as the delay sensitivity t rises, because customers tend to leave rather than wait when they are highly impatient. Therefore, the platform should attract all agents by increasing the wage to reduce service delays. However, when the customers' delay sensitivity continues to increase, the platform's service capacity can no longer meet the customers' demands in time, and the platform cannot profit by paying wages high enough to attract agents. As a result, the optimal equilibrium in this region becomes a nonparticipatory equilibrium. Similarly, when v > v₂ in region A₃ (Figure 2(c)), the optimal equilibrium rate pair shifts from (K, l̄) to (l̄²(v − p₃)/(l̄(v − p₃) + t), l̄) and then to (0, 0) as t rises.
Theorem 4.3. For K < l̄, we have: a. If v > w₂ + w̄ and t < t̲, then the optimal wage and price vector, denoted by (w*, p*), is (w₂, p₂); thus, the maximum expected revenue of the platform is R* = R₂*. b. If v > v₁ and t ∈ [t̲, min{t*, t̄}), then (w*, p*) is (w₁, p₁); thus, the maximum expected revenue of the platform is R* = R₁*. c. If the corresponding conditions hold with t < l̄(γv − c − αc)²/(4αγc), then (w*, p*) is (w₃, p₃); thus, the maximum expected revenue of the platform is R* = R₃*. d. Otherwise, the maximum expected revenue of the platform is R* = 0.

Theorem 4.3 demonstrates the impact of the customers' valuation and delay sensitivity on the platform's optimal decisions when its service capacity is sufficient. If the customers' delay sensitivity is low, then all customers' demands can be met even if only some agents provide the service. For moderately delay-sensitive customers, the platform should ensure that all agents provide services to reduce delay. When the customers' delay sensitivity is sufficiently high, some customers still leave the platform even if all agents provide services. Moreover, the higher the customers' delay sensitivity, the higher the valuation floor must be for the platform to earn positive revenue. Figure 2(d) shows the maximum expected revenue of the platform for different types of customers when the service capacity is sufficient; it is equivalent to overlapping panels (a), (b), and (c) of Figure 2 and keeping the largest revenue among them. This panel again illustrates the results of Theorem 4.3 intuitively. It also reveals that the lower the customers' valuation, especially when v < v₂, the more likely the equilibrium rate pair (K, K(γw₂ − c)/(αc)) (the region corresponding to R₂*) is to become the optimal participatory equilibrium rate pair of the platform. This result implies that when its service capacity is sufficient, the platform tends to set a moderate wage ((1 + α)c/γ < w ≤ c(αl̄ + K)/(γK)) even though doing so reduces the effective service rate. The R₁* region in Figure 2(d) shows that a high-wage (w > c(αl̄ + K)/(γK)) and low-price (p ≤ v − tK/(l̄(l̄ − K))) pair can also bring maximum revenue to the platform under certain conditions. In such a case, both the arrival and service rates reach their peak, i.e., (K, l̄). We then consider the case where the platform's service capacity is insufficient.

Theorem 4.4 illustrates the optimal decisions of the platform when its service capacity is insufficient. The platform's expected revenue is positive if and only if the customers' valuation is high (i.e., v > (1 + α)c/γ) and their delay sensitivity is low (i.e., t < l̄(γv − c − αc)²/(4αγc)). Figure 3 clearly illustrates this result. In this case, the platform should set w₃ and p₃ to achieve the equilibrium rate pair (l̄²(v − p₃)/(l̄(v − p₃) + t), l̄) and obtain the maximum expected revenue R₃*. When customers have a low valuation or high delay sensitivity, the platform cannot obtain positive returns regardless of whether the wage and price lie inside or outside region A₄.
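As a rough numerical companion to Theorems 4.2-4.4 (again our own sketch, not the paper's derivation: it approximates the preferred stable equilibrium by starting the replicator dynamics from optimistic expectations and simply grid-searches the wage-price plane), one can locate an approximately revenue-maximizing pair:

```python
import numpy as np

# Same illustrative parameters as in the earlier replicator sketch.
v, t, c, alpha, gamma = 10.0, 1.0, 1.0, 0.4, 3.0
K, l_bar = 11.0, 15.0

def equilibrium_arrival_rate(p, w, steps=20000, dt=2e-3):
    """Approximate k_e by running the replicator dynamics from optimistic
    initial expectations (x = y = 0.99), i.e., the participatory candidate."""
    x, y = 0.99, 0.99
    for _ in range(steps):
        k, l = x * K, y * l_bar
        delay = k / (l * (l - k)) if 0.0 <= k < l else 1e6   # W(k, l), capped
        match = gamma * k / (alpha * l + k) if k + l > 0.0 else 0.0  # q(k, l)
        x = min(max(x + dt * x * (1.0 - x) * (v - p - t * delay), 0.0), 1.0)
        y = min(max(y + dt * y * (1.0 - y) * (w * match - c), 0.0), 1.0)
    return x * K

# Coarse grid search for the revenue-maximizing pair, R(p, w) = (p - w) k_e.
candidates = ((p, w, (p - w) * equilibrium_arrival_rate(p, w))
              for p in np.linspace(1.0, v - 0.1, 15)
              for w in np.linspace(0.2, 3.0, 10))
print(max(candidates, key=lambda triple: triple[2]))
```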
For the platform, the optimal decisions can be determined once the type of customers (i.e., (v, t)) is clear. If the platform wants to obtain the revenue Rᵢ* (i = 1, 2, 3), it should set the wage to wᵢ and the price to pᵢ, and pay attention to the attitudes of customers and agents toward each other, especially in the initial phase. To avoid the formation of other, non-optimal equilibria, the platform can take measures to guide customers and agents, such as announcing actual demand/supply information in real time. Once it realizes that the system is moving toward a non-optimal equilibrium, the platform can counteract this trend and guide the evolution toward the optimal equilibrium through such real-time announcements.

Discussion
In this section, we discuss how the total demand rate and the service capacity affect the optimal decision and revenue of the platform. The impact of K and l̄ on p*, w*, and R* is analyzed in the following proposition:

Proposition 5.1. The optimal wage w* is non-increasing in K, and the optimal price p* is non-decreasing in l̄. Moreover, the platform's maximum revenue R* is always non-decreasing in K and l̄. The specific impact of K and l̄ on the optimal decisions of the platform is shown in Table 2.

Table 2. Monotonicity of the optimal solution and revenue with respect to system parameters.

At first glance, the wage paid to agents should increase with the total demand rate, and the price of the service should decrease with an increasing service capacity of the platform. The former can be ascribed to the fact that a higher wage attracts a sufficient number of agents to meet the increased customer demand, whereas the latter can be explained by the sufficient number of agents who can provide services despite lower prices when the total service rate increases. However, Proposition 5.1 shows that the optimal wage is non-increasing in K and that the optimal price is non-decreasing in l̄, which seems counterintuitive. Two opposing drivers explain the impact of an increasing total demand rate on the optimal wage. On the one hand, the platform should increase the wage to attract more agents to provide services so as to reduce the customers' expected delay. On the other hand, an increase in the matching ratio makes agents more willing to participate in the service, so the platform can maintain the optimal equilibrium service rate even if the wage decreases. These two drivers reflect the influence of the within- and cross-group network effects of customers and agents on platform decisions. Proposition 5.1 states that the cross-group network effect exerts the stronger influence on the optimal wage when the total demand rate increases.

We now analyze why the optimal price is non-decreasing as the service capacity of the platform increases. The reason is that the expansion of the platform's service capacity results in a shorter expected delay for customers when all agents provide services. In this case, price increases can still preserve the equilibrium demand rate expected by the platform. Accordingly, Proposition 5.1 shows that only when the optimal equilibrium rate pair is (K, K(γw₂ − c)/(αc)) does the increased service capacity leave the equilibrium service rate, and thus the optimal decisions of the platform, unaffected, due to the limited total demand. In the two other cases, the optimal price increases as the platform's service capacity expands.
Note that the valuation and delay sensitivity thresholds mentioned in Theorem 4.3 also change with the total demand rate and the service capacity. For notational brevity hereinafter, we keep in mind that the thresholds (i.e., t̲, t̄, t*, v₁, v₂, and w₂ + w̄) are all functions of K and l̄ even when this is not explicitly stated. We further analyze below the impact of K and l̄ on these thresholds.

Proposition 5.2. a. Both t̲ and t̄ are strictly decreasing in K and increasing in l̄; t* is strictly increasing in l̄ if and only if v > v₁ and decreasing in K if and only if v > v₂. b. Both v₁ and v₂ are strictly increasing in l̄ and decreasing in K. Moreover, w₂ + w̄ is strictly decreasing in K.

Proposition 5.2 shows that the delay sensitivity and valuation thresholds are strictly increasing in the service capacity and decreasing in the total demand rate. By observing Figure 2(d), we find that an increase in the total demand rate K expands the area of the region where R* = R₃*, while the area of the region where R* = R₂* shrinks. Note that p₂ < p₁ < p₃ and w₂ < w₁ < w₃. Thus, R* = R₂* indicates that the platform's optimal strategy is to set a low price to attract all customers and a low wage to retain part of the agents in the service. On the other hand, R* = R₃* implies that the optimal strategy is to set a high wage to attract all agents to the service and a high price to retain part of the customers' demand. Therefore, as the total demand rate increases, the platform's optimal strategy should gradually shift to one with both a higher price and a higher wage. An increase in the service capacity l̄ has the opposite effect on the platform's decision. Specifically, in Figure 2(d), the area of the region where R* = R₂* expands while the area of the region where R* = R₃* shrinks as l̄ increases. In other words, the platform should gradually shift to a strategy with both a lower wage and a lower price as the service capacity rises.

The above discussion is formally summarized in the following corollary.
Panels (a) and (b) in Figure 4 show the optimal strategy of the platform when the total demand rate K and the service capacity l̄ change, respectively. The results in Figure 4 and Table 2 are consistent with Corollary 5.3. An increase in the total demand rate gradually shifts the optimal strategy of the platform toward directing all agents to participate in the service, while an increase in service capacity makes the platform more inclined to retain all demand. Overall, increases in both the total demand rate and the service capacity positively impact the platform's maximum revenue. Note that if the total demand rate is large enough, further increases no longer change the platform's optimal decision, and the same is true for the service capacity. As shown in Corollary 5.3, once the service capacity of the platform exceeds l̄₃, the optimal equilibrium rate pair remains at (K, K(γw₂ − c)/(αc)), which means that only part of the agents ever choose to provide the services. In our model, a proportion strictly between 0 and 1 of the agents then provide the service, such that the utilities of providing and not providing the service are both zero. In practice, when the offered service capacity is larger than the demand size, the agents involved in the service can be screened out in two scenarios. In the first scenario, the model takes into account the effect of random factors on the agent's decision. For example, we can set the utility function of agents as U_a(k, l) + ξ, where ξ is a random parameter representing the agent's heterogeneous factors that are not captured by the current model. In the second scenario, the platform prioritizes the allocation of orders to agents with higher ratings through a carefully designed matching mechanism, so that agents with low ratings leave because they cannot be assigned orders. The rating here can depend on the agent's history of order fulfillment, customer reviews received, etc.

Conclusion
On-demand food delivery platforms have become popular in recent years, providing a convenient and safe way for customers to enjoy their food, especially during the COVID-19 crisis. Although some scholars have taken an interest in food delivery platforms, only a few have explored the mechanisms of these on-demand platforms. To fill this gap, this paper explores the optimal two-sided pricing strategy of the on-demand food delivery platform and analyzes the equilibrium results from an operations management perspective. We first build a model that takes into account the within- and cross-group network effects of customers and agents. After a preliminary calculation, we sort the wage-price regions into two categories: the participatory equilibrium regions and the only-nonparticipatory equilibrium region. We derive the optimal strategy in each participatory equilibrium region and then obtain the platform's optimal two-sided pricing strategy through further comparison. After that, we discuss the impact of the total demand rate and the service capacity on the platform's decisions. Our theoretical analysis provides insights into the operation and management of the on-demand food delivery platform.
Our modeling framework has several limitations. First, for practical reasons and tractability, we assume that both customers and agents are homogeneous. If customer and agent heterogeneity were introduced, the equilibrium states of the platform would become more diverse. In addition to two-sided pricing, platform decisions also include the design of a matching mechanism; incorporating matching priorities into the equilibrium analysis is an interesting future direction. Second, we focus on the food delivery service itself and ignore the connection between the platform and restaurants. In practice, however, cooperation with restaurants is an important part of food delivery platform operations. Moreover, some restaurants tend to arrange for their own employees to deliver the orders they receive. This topic deserves further exploration in future research through a new model. Third, in real life, the customer flow of food delivery platforms is also affected by external factors, such as weather and transportation.

Figure 2. The maximum expected revenue of the platform when the service capacity is sufficient, where γ = 3, α = 0.4, c = 1, K = 11, and l̄ = 15.

Table 1. Conditions and outcomes for each participatory equilibrium region.
9,899.4
2023-07-28T00:00:00.000
[ "Economics", "Environmental Science", "Business" ]
Study of fluid flow inside closed cavities using computational numerical methods
The temperature distribution and distortion of fluid flow inside closed cavities, square and triangular, are studied for different boundary conditions. Two different thermal boundary conditions are used for studying square cavities: (i) the left wall is hot, the right wall is cold, and the top and bottom walls are adiabatic; (ii) the left and right walls are cold, the top wall is adiabatic, and the bottom wall is hot. For the triangular enclosure, the boundary conditions are (i) the vertical wall is insulated and the bottom wall is hot, and (ii) the vertical wall is hot and the bottom wall is insulated; the inclined wall is kept cold in both conditions. The velocity of the flow is observed by means of the stream function, and the temperature distribution is displayed in the form of contours. The study is carried out in the ANSYS software. The mathematical procedure for solving the nonlinear system of partial differential equations by the penalty finite element method involving bi-quadratic elements is also discussed in detail.

Introduction
The temperature distribution and distortion of fluid flow inside square and triangular cavities are mathematically formulated by means of finite elements and studied with the help of ANSYS. The cross section of a flow in rectangular and triangular ducts results in a square and a triangular cavity, respectively. In the current study, the behavior of the temperature and velocity of fluid flowing inside closed cavities with different boundary conditions is considered. The literature shows vivid usage of natural convection flow within closed entities because of its practical relevance in various applications, such as heat exchangers [1,2], room heating and ventilation design [3-5], melting [6], etc. Different shapes of cavities, circular [7], trapezoidal [8], square [9,10], and triangular [11-13], have been grabbing the attention of researchers for decades. The non-dimensional governing equations of the 2D flow problem are formulated with the penalty finite element method. The detailed solution procedure to obtain a finite element equation from a nonlinear system of partial differential equations is discussed. The studies of fluid flow inside the square and triangular enclosures are performed with ANSYS, a renowned, trustworthy, and widely used tool among researchers. Two configurations of the square cavity and two of the triangular enclosure are studied by varying the thermal boundary conditions. The thermal boundary condition is varied throughout the study, but the velocity of the fluid at the solid boundary is always zero. Study 1: Square cavity. Case 1: the left wall is hot, the right wall is cold, and the top and bottom walls are adiabatic. Case 2: the left and right walls are maintained cold, the top wall is adiabatic, and the bottom wall is heated. Study 2: Triangular enclosure. Case 1: the vertical wall is insulated, the bottom wall is heated, and the inclined wall is kept cold. Case 2: the vertical wall is hot, the bottom wall is insulated, and the inclined wall is kept cold. The governing equations, the mathematical formulation, and the ANSYS study can be found in the article.

2 Mathematical formulation
2.1 Governing equations and solution procedure
The nonlinear system of partial differential equations, involving the Navier-Stokes and energy balance equations, governs the fluid flow and the temperature distribution. The density variation due to temperature is modeled using the Boussinesq approximation.
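For reference, the steady, non-dimensional governing equations of this class of natural-convection problems are commonly written as follows (a standard form, used for example in Basak et al. [9]; the paper's own equations (1)-(4), referenced below, may differ in detail):

```latex
% Continuity, X- and Y-momentum, and energy for steady 2D natural convection.
% U, V: dimensionless velocities; P: pressure; \theta: dimensionless
% temperature; Pr: Prandtl number; Ra: Rayleigh number.
\begin{align}
  \frac{\partial U}{\partial X} + \frac{\partial V}{\partial Y} &= 0,\\
  U\frac{\partial U}{\partial X} + V\frac{\partial U}{\partial Y}
    &= -\frac{\partial P}{\partial X}
       + \Pr\left(\frac{\partial^2 U}{\partial X^2}
       + \frac{\partial^2 U}{\partial Y^2}\right),\\
  U\frac{\partial V}{\partial X} + V\frac{\partial V}{\partial Y}
    &= -\frac{\partial P}{\partial Y}
       + \Pr\left(\frac{\partial^2 V}{\partial X^2}
       + \frac{\partial^2 V}{\partial Y^2}\right)
       + \mathrm{Ra}\,\Pr\,\theta,\\
  U\frac{\partial \theta}{\partial X} + V\frac{\partial \theta}{\partial Y}
    &= \frac{\partial^2 \theta}{\partial X^2}
       + \frac{\partial^2 \theta}{\partial Y^2}.
\end{align}
```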
All the physical quantities are taken to be constant except the density. Equations (1)-(4) are the non-dimensional form of the governing equations, where θ = (T − T_c)/(T_h − T_c) is the dimensionless temperature, and T_h and T_c are the temperatures at the hot and cold walls, respectively.

The penalty finite element method [14] aids the formulation of the governing differential equations by introducing the penalty parameter γ. To eliminate the pressure P in equations (2) and (3), equation (5), containing the relationship between the penalty parameter γ and the incompressibility constraint, is substituted. Generally, γ = 10⁷ yields reliable solutions. With this substitution, equations (2) and (3) take the penalized forms (6) and (7).

The square and triangular cavities considered for the study are discretized into biquadratic elements. Figure 1 depicts the discretization of the domains and the mapping from the X-Y plane to the s-t plane. The Galerkin finite element method is employed to solve the system of governing differential equations (6), (7), and (4). The thermal and velocity components are expanded through the basis set given in equation (8). A weight function N_i is multiplied in and integrated over the domain, resulting in the nonlinear residual equations (9)-(11), from which the finite element equation is formed: the coefficients of U_j (j = 1, . . . , 9) are grouped from all three residues, then the coefficients of V_j, and then the coefficients of θ_j, and the governing equations are rewritten with these substitutions in matrix form. It is evident that every node i has 3 degrees of freedom, resulting in 27 unknowns for one biquadratic element. The local node numbering together with the global node numbers yields the element connectivity, which supports the 2D assembly.

Jacobian transformation
The integrand is a function of the global coordinates X and Y. Figure 1 shows the coordinate transformation for the discretized elements from the X-Y plane to the s-t plane. The integrand contains not only functions but also derivatives with respect to the global coordinates (x_j, y_j). The matrix [J] is called the Jacobian matrix of the transformation, and the derivatives ∂X/∂s, ∂X/∂t, ∂Y/∂s, and ∂Y/∂t are evaluated using the transformation in equation (27). Equation (28) requires the Jacobian matrix [J] to be nonsingular. Thus, given the global coordinates (x_j, y_j) of the element nodes and the interpolation functions N_j used for the geometry [15], the Jacobian matrix can be evaluated using equation (27). Consequently, solving equation (21) for every node in the domain provides the thermal and velocity components. The commonly used numerical integration methods for the definite integrals can be classified into two groups: (i) Newton-Cotes formulae, which employ values of the integrand at equally spaced points, and (ii) Gaussian quadrature formulae, which employ unequally spaced points.

Stream function
The stream function ψ is used to display the fluid flow and is acquired from the velocity components U and V. The relationships between the stream function and the velocity components for 2D flows yield the governing equation for the stream function. Expanding the stream function ψ using the basis set and substituting the relations for U and V from the Galerkin finite element method yields the linear residual equations. The no-slip condition is imposed at all boundaries as mentioned earlier, and there is no cross flow; hence ψ = 0 at the nodes of the walls.
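Since the stream-function relations are only referenced here, the following minimal finite-difference sketch (our own illustration, not the paper's penalty-FEM solver) assumes the conventional definitions U = ∂ψ/∂Y and V = −∂ψ/∂X, so that ψ solves the Poisson equation ∇²ψ = ∂U/∂Y − ∂V/∂X with ψ = 0 on the walls:

```python
import numpy as np

def stream_function(U, V, h, iters=5000):
    """Recover psi on a uniform grid (spacing h) from velocity fields U, V
    by Jacobi iteration on  laplacian(psi) = dU/dY - dV/dX,  psi = 0 on walls."""
    rhs = (np.gradient(U, h, axis=1)      # dU/dY  (axis 1 = Y)
           - np.gradient(V, h, axis=0))   # dV/dX  (axis 0 = X)
    psi = np.zeros_like(U)
    for _ in range(iters):
        psi[1:-1, 1:-1] = 0.25 * (psi[2:, 1:-1] + psi[:-2, 1:-1]
                                  + psi[1:-1, 2:] + psi[1:-1, :-2]
                                  - h * h * rhs[1:-1, 1:-1])
    return psi

# Self-check on a manufactured field: psi = sin(pi X) sin(pi Y) vanishes on
# the unit-square walls and induces U = dpsi/dY, V = -dpsi/dX.
n = 65
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
U = np.pi * np.sin(np.pi * X) * np.cos(np.pi * Y)
V = -np.pi * np.cos(np.pi * X) * np.sin(np.pi * Y)
psi = stream_function(U, V, x[1] - x[0])
# Should be small (discretization plus Jacobi iteration error).
print(np.max(np.abs(psi - np.sin(np.pi * X) * np.sin(np.pi * Y))))
```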
The biquadratic basis functions are used to evaluate the integrals in equation (33), and the ψ values are obtained by solving the resulting system. The stream function values thus obtained may be positive or negative; positive and negative signs of ψ denote anti-clockwise and clockwise circulation, respectively.

Studies of fluid flow inside closed cavities using ANSYS
Heat transfer describes how heat moves from point A to point B. Heat transfers in three ways: conduction, heat transfer by molecular contact; convection, the result of density differences; and radiation, which occurs by wave motion. This article concentrates only on natural convection, for which the driving force is gravity. Fluid flow inside closed triangular and square cavities is calculated using ANSYS Workbench 2020 R1. The geometry is built in the Design Modeler. Many flow problems solved in engineering practice involve complex geometries; here, the simple 2D geometry is meshed with quadrilateral elements. The simulation is limited to steady state. Free convection is modeled by adding gravity in the y-direction. Heat transfer is enabled by activating the energy equation in the model; the energy dialogue box allows the input of parameters related to energy or heat transfer. The fluid taken for this study is air, with Pr = 0.71. In all cases, the laminar nature of the fluid is sustained. The solid within which the flow takes place is aluminum. The operating conditions are set in the cell zone conditions. Boundary conditions for the square and triangular cavities are detailed in their respective sections. After initialization, the calculations are carried out. The temperature distribution and flow distortion are visualized as contours and are compared against the existing literature.

Square cavity
The square cavity, the cross section of a rectangular duct, is built with sides of 1 m. The fluid with Pr = 0.71 and Ra = 10⁵ is taken for studying the flow with varying thermal boundary conditions. The fluid in contact with the walls is at rest. Quadrilateral elements are used in the meshing. The heat inputs are applied on the walls; hence, the edges are meshed with a bias factor of 5.0. For a square cavity with the left wall, DA, acting as the heat source, the right wall, BC, cold, and the top wall, CD, and bottom wall, AB, adiabatic, Figures 2 and 3 show the temperature distribution and velocity, respectively. From the contours it is evident that the temperature decreases from the left wall to the right. The stream function clearly shows that the clockwise flow is laminar. The results obtained align with Singh et al. [10]. In the other square-cavity configuration taken for study, walls DA and BC are maintained cold, wall CD is adiabatic, and wall AB alone is heated. There is a temperature flow from the bottom to the top, with the temperature decreasing as it moves upward, as seen in Figure 4. The laminar flow for this case, displayed in Figure 5, is distinctive, consisting of clockwise and anticlockwise cells. This pattern is observed to be consistent with Basak et al. [9].

Triangular enclosure
The base and the height of the right-angled triangle are 1 m. Quadrilateral elements are used for meshing the triangular enclosure. A bias factor of 5.0 is applied to the edges while meshing to obtain a dense mesh near the thermally active boundaries. Two different sets of boundary conditions are taken for study. In both situations, the inclined wall is kept cold.
For the triangular enclosure, the velocity boundary condition is the no-slip condition on all sides. In the first case, with fluid parameters Pr = 0.71 and Ra = 710, the bottom wall AB carries the heat source, the inclined wall BC is cold and the wall CA is adiabatic. The anticlockwise laminar flow is clearly visible in Figure 7. The temperature distribution is shown in Figure 6: the temperature decreases from the base as the fluid moves upward. This agrees with the observations in [12]. In the other triangular case, wall AB is insulated, BC is cold and CA is hot, with Pr = 0.71 and Ra = 10^3. Figure 8 shows the temperature increasing towards the vertical wall. Figure 9 shows the stream function contours: the flow remains laminar and circulates clockwise, in agreement with [13].

The temperature distribution and the distortion of the fluid flow inside the square and triangular cavities are analyzed with ANSYS for different boundary conditions. For the boundary conditions considered, the flow is observed to be laminar. The contours of the closed cavities (Figures 2–9) show no anomalous variation in the temperature spread or the flow field, and the results agree well with [9,10,12,13]. In future work, the closed-form solution of the current problem will be pursued in a numerical programming environment using the penalty finite element method with biquadratic elements. Follow-up studies in the same category can vary the Rayleigh and Prandtl numbers. A time-dependent extension would allow the flow along the full length of a pipe to be investigated. Keeping the flow laminar under various boundary conditions supports practical real-time problems such as injection moulding processes [16].
2,804.2
2021-01-01T00:00:00.000
[ "Engineering", "Physics" ]
An Interval Iteration Based Multilevel Thresholding Algorithm for Brain MR Image Segmentation

In this paper, we propose an interval iteration multilevel thresholding method (IIMT). This approach is based on the Otsu method but iteratively searches for sub-regions of the image to achieve segmentation, rather than processing the full image as a whole region. A novel multilevel thresholding framework based on IIMT for brain MR image segmentation is then proposed. In this framework, the original image is first decomposed using a hybrid L1 − L0 layer decomposition method to obtain the base layer. Second, we use IIMT to segment both the original image and its base layer. Finally, the two segmentation results are integrated by a fusion scheme to obtain a more refined and accurate segmentation result. Experimental results showed that our proposed algorithm is effective and outperforms standard Otsu-based and other optimization-based segmentation methods.

Introduction

Image segmentation is a key step in image processing and image analysis [1-3]. Image segmentation refers to dividing an image into several disjoint regions based on features such as intensity, color, spatial texture and geometric shape, so that these features show consistency or meaningful similarity within the same region but obvious differences between different regions [4,5]. Image segmentation is widely used in many fields, such as computer vision, object recognition and medical image applications [6,7]. In the field of medical research and practice, image segmentation technology can be applied to computer-aided diagnosis, clinical surgical image navigation and image-guided tumor radiotherapy [8,9]. Segmentation of organs and their substructures from medical images can be used to quantitatively analyze clinical parameters related to volume and shape [10]. For instance, a brain MR image can be segmented into five main regions, namely the gray matter (GM), white matter (WM), cerebrospinal fluid (CSF), the skull and the background. In the diagnosis of brain disease, WM abnormalities are closely related to multiple sclerosis, schizophrenia and Alzheimer's disease, while autism is related to changes in GM volume [8,11]. Central nervous system lesions and metabolic disorders of nerve cells change the properties and composition of the CSF; when the central nervous system is damaged, CSF analysis is an important auxiliary diagnostic method. Accurate segmentation of the different object regions in a brain MR image is therefore believed to be one of the most significant tasks for clinical research and treatment. A large number of image segmentation methods have been investigated. In [12], Fu et al. classified image segmentation techniques into characteristic feature thresholding [13-15] or clustering [16,17], edge detection [18,19] and region extraction [20,21]. Other approaches include graph cut methods [22,23] and deep neural network-based methods [24]. Among the existing segmentation methods, thresholding is considered an effective and popular technique because of its simplicity and high efficiency [25-28]. Thresholding can be classified into two groups: bi-level thresholding and multilevel thresholding [29]. The former segments an original image into two regions (foreground and background) by searching for an optimal threshold based on the gray-level histogram.
Pixels with gray values greater than the threshold are classified as foreground, whereas pixels with gray values lower than the threshold are classified as background. When such a simple binary classification is insufficient for subsequent processing, bi-level thresholding is extended to multilevel thresholding, which partitions the image into several different regions using more thresholds [30]. He et al. proposed an efficient krill herd method to identify optimal threshold values by maximizing three different objective functions: between-class variance, Kapur's entropy and Tsallis entropy [29]. Lei et al. defined square rough entropy in a new form and presented a novel image thresholding method based on minimum square rough entropy [31]; the optimal threshold is selected as the value that makes the roughness of the object region and the background zero. Yan et al. proposed a novel multilevel thresholding method using Kapur's entropy based on the whale optimization algorithm [32], which can overcome premature convergence and obtain the global optimal solution. Singh proposed an adaptive thresholding algorithm based on neutrosophic set theory for segmenting Parkinson's disease MR images [33], in which the gray value that maximizes the neutrosophic entropy information is selected as the optimal threshold. Tarkhaneh et al. presented a differential evolution-based multilevel thresholding algorithm for MR brain image segmentation [34]; inspired by the Lévy distribution, the Cauchy distribution and Cotes' spiral, a novel mutation scheme was designed to model swarm intelligence optimization. To address the increasing complexity of optimization problems, Zhao et al. proposed an improved ant colony optimization algorithm based on a chaotic random spare strategy for multilevel thresholding [35]: the random spare strategy improves the convergence speed, and the chaotic intensification strategy improves the convergence accuracy and avoids falling into a local optimum. Cai et al. proposed an iterative triclass Otsu thresholding algorithm for microscopic image segmentation [36]. In contrast to the standard Otsu method, it first segments an original image into the foreground, the background and a third, "to-be-determined" (TBD) region, based on the two class means obtained from Otsu's optimal threshold; similar processing is then applied iteratively to the TBD region until a preset criterion is met. This single thresholding method performs well for weak objects and for segmenting fine details, but it is not directly applicable to complicated medical image segmentation. Medical image segmentation remains an important yet challenging task because of the complexity of the medical image itself, such as low tissue contrast, irregular shapes and large location variance [37]. To improve the quality of image segmentation, we propose an interval iteration-based multilevel thresholding algorithm for brain MR images. In the algorithm, hybrid L1 − L0 layer decomposition is adopted to reduce the influence of noise on the segmentation result. Traditional Otsu multilevel thresholding processes the full image as a whole region and is biased toward the class with the larger variance. To overcome this problem, we extend Cai's method [36] to multilevel thresholding and propose a novel interval iteration method to identify optimal thresholds.
In addition, a fusion strategy is used to integrate different segmentation images to obtain finer segmentation results. In general, the key contributions of our work can be summarized as follows: (1) A hybrid L1 − L0 layer decomposition method is used to obtain the base layer of an original image, which removes noise and preserves edge information in the segmentation process. (2) An interval iteration multilevel thresholding method is proposed. In the grayscale histogram of an original image, intervals are delimited by the combination of class means and thresholds, and Otsu single thresholding is iteratively applied to each interval. (3) A fusion strategy is adopted to fuse different segmentation results. It takes both spatial and intensity information into account and makes the segmentation more accurate. The rest of this paper is organized as follows. Section 2 details the interval iteration-based multilevel thresholding method. The framework of the proposed algorithm and the related processing are described in Section 3. Section 4 presents the experiments on brain MR image segmentation, including results and analysis. Finally, conclusions and future work are discussed in Section 5.

Interval Iteration Based Multilevel Thresholding

In this section, we propose a novel multilevel thresholding algorithm based on interval iteration. The iterative process is illustrated in the following.

Otsu Method

Let I be an image of size M × N with gray levels G = {0, 1, ..., 255}. We define $n_j$ as the number of pixels with gray level j, and $P_j = n_j/(M \times N)$ $(P_j \ge 0,\, j \in G)$ as the probability of such pixels, so that $\sum_{j=0}^{255} P_j = 1$. Assuming that I is to be segmented into K + 1 (K ≥ 1) classes $(C_1, C_2, \ldots, C_{K+1})$ by K thresholds $(t_1, t_2, \ldots, t_K)$, the Otsu method searches the histogram of I for the thresholds that minimize the intra-class variance or, equivalently, maximize the between-class variance; that is,

$(t_1^*, \ldots, t_K^*) = \arg\max_{0 \le t_1 < \cdots < t_K < 255} \sigma_B^2(t_1, \ldots, t_K)$.   (1)

When K = 1, this is referred to as single thresholding; otherwise, multilevel thresholding. The between-class variance $\sigma_B^2$ is calculated as

$\sigma_B^2 = \sum_{i=1}^{K+1} \omega_i (\mu_i - \mu_T)^2$,   (2)

where $\omega_i$ and $\mu_i$ denote the probability and mean of class $C_i$, respectively, and $\mu_T$ represents the total mean over the K + 1 classes.

The First Iteration

Given an original image I, we can obtain its gray histogram curve; an artificial example is shown in Figure 1. In the first iteration, traditional Otsu multilevel thresholding is performed on the original image to search for K thresholds. The K + 1 class means and K initial thresholds are obtained by computing Equation (1). Figure 2 illustrates the results of Otsu multilevel thresholding. In Figure 2a, the K + 1 class means are denoted $\mu_{1,i}$ $(i = 1, \ldots, K+1)$, and the K initial thresholds are denoted $T_{1,i}$ $(i = 1, \ldots, K)$. We then design a classification rule. Pixels whose gray values p satisfy $p \le \mu_{1,1}$ are assigned to class $C_1$; pixels whose gray values q satisfy $q \ge \mu_{1,K+1}$ are assigned to class $C_{K+1}$. The remaining pixels are divided into K intervals $[\mu_{1,1}, \mu_{1,2}], [\mu_{1,2}, \mu_{1,3}], \ldots, [\mu_{1,K}, \mu_{1,K+1}]$ according to their gray values, and they are classified in the next iteration. Figure 2b shows an example of the classification: the green part denotes $C_1$ and the yellow part represents $C_{K+1}$; the part between $C_1$ and $C_{K+1}$ is determined in subsequent iterations.
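To make the first iteration concrete, the sketch below computes the between-class variance of Equation (2) from a gray-level histogram and, for the single-threshold case K = 1, derives the two class means that delimit the interval handed to the next iteration. It is a minimal illustration of the idea, not the authors' implementation; the exhaustive search shown here is practical only for small K, and the toy histogram is an assumption standing in for a real MR slice.

```python
import numpy as np

def between_class_variance(hist, thresholds):
    """sigma_B^2 of Eq. (2) for class boundaries given by `thresholds`."""
    p = hist / hist.sum()                         # P_j, with sum_j P_j = 1
    levels = np.arange(len(p))
    mu_T = (p * levels).sum()                     # total mean
    edges = [0] + [t + 1 for t in thresholds] + [len(p)]
    sigma2, means = 0.0, []
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()                        # omega_i
        if w == 0:
            means.append(0.0)
            continue
        mu = (p[lo:hi] * levels[lo:hi]).sum() / w # mu_i
        means.append(mu)
        sigma2 += w * (mu - mu_T) ** 2
    return sigma2, means

def otsu_single(hist):
    """Exhaustive search for the single Otsu threshold (K = 1), Eq. (1)."""
    best_t, best_s, best_means = 0, -1.0, None
    for t in range(len(hist) - 1):
        s, means = between_class_variance(hist, [t])
        if s > best_s:
            best_t, best_s, best_means = t, s, means
    return best_t, best_means

# Toy bimodal histogram standing in for a brain MR slice (an assumption).
rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(60, 10, 6000), rng.normal(170, 15, 4000)])
hist, _ = np.histogram(np.clip(samples, 0, 255), bins=256, range=(0, 255))

t1, (mu1, mu2) = otsu_single(hist)
print(f"initial threshold T_1,1 = {t1}, class means = ({mu1:.1f}, {mu2:.1f})")
# In the next iteration, single thresholding is re-run on [mu1, mu2] only.
```

The pair of class means printed at the end is exactly the interval that the next iteration searches, which is what distinguishes the interval iteration scheme from running plain Otsu once on the full histogram.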
The Framework

The framework of the proposed algorithm is shown in Figure 6 and proceeds as follows. (1) A hybrid L1 − L0 layer decomposition method is performed on the original image to obtain its base layer.

Hybrid L1 − L0 Layer Decomposition

Given an image I of size M × N, the hybrid L1 − L0 layer decomposition model can be defined as in Equation (5), where $I_B$ and $I_D$ denote the base layer and the detail layer, respectively, and $I_D = I - I_B$. They are obtained through the L1 gradient sparsity term $|\partial_k I^B_{i,j}|$ and the L0 gradient sparsity term $F(\partial_k I^D_{i,j})$, respectively. $\partial_k$ denotes the partial derivative along the horizontal (H) or vertical (V) direction. F is an indicator function, defined in Equation (6). For convenience of calculation, Equation (5) can be rewritten in matrix–vector form as the minimization in Equation (7), where $b, d \in \mathbb{R}^{MN \times 1}$ denote the concatenated vector forms of $I_B$ and $I_D$, respectively, $\nabla_x$ and $\nabla_y$ represent the two gradient operator matrices in the x and y directions, and $F(\nabla d)$ is a binary vector. By means of the Lagrangian multiplier method, Equation (7) can be converted into the function of Equation (8), where $c_1, c_2 \in \mathbb{R}^{2MN}$ denote two auxiliary variables and $y_1, y_2$ represent two Lagrangian dual variables. The optimal solution is obtained in a few iterations (15 iterations in [38]). After hybrid L1 − L0 layer decomposition, the base layer of the original image is used for segmentation within the framework of the proposed algorithm. Figure 7 displays an example of the decomposition: the first column contains two original images and the second column the two corresponding base layers. From Figure 7b, it can be seen that the base layers are visually smooth and eliminate some weak edges.

Segmentation Fusion

A segmentation fusion method [39] is adopted to fuse different segmentation results. The fusion process takes both spatial and intensity information into account, so the final fused segmentation result is more accurate. Let $M_1$ and $M_2$ represent two different segmentation maps of the original image I. The pixels of I can be grouped into two classes by comparing $M_1$ and $M_2$. One is the uncontested class, in which a pixel carries the same class label in $M_1$ and $M_2$; the other is the controversial class, in which the labels differ. Generally, uncontested pixels do not need to be reclassified, while controversial pixels are considered misclassified and thus need to be reclassified. Assuming that p is the location of a controversial pixel in image I, with $l(p \in M_1) = l_a$ and $l(p \in M_2) = l_b$ its two different labels in $M_1$ and $M_2$, the reclassified label of pixel p is calculated by Equation (9), where $N_p^r$ denotes p's effective neighborhood of radius r and $SIM(p, q)$ is the similarity coefficient between p and q, defined in Equation (10). There, $Dis(p, q)$ denotes the spatial distance between p and q, $I(\cdot)$ refers to the gray value of a pixel, and $\alpha$ and $\beta$ are two parameters that trade off distance against intensity difference in constructing the similarity coefficient ($\alpha = 1$, $\beta = 1$ in [36]).
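The following sketch shows one plausible reading of the fusion rule of Equations (9) and (10): each controversial pixel is reassigned to whichever of its two candidate labels accumulates the larger similarity-weighted support in its neighborhood. Because the exact form of Equation (10) is not reproduced above, the similarity function used here (an inverse of a weighted sum of spatial distance and intensity difference) is an assumption, as are $\alpha = \beta = 1$ and the choice to let only uncontested neighbors vote.

```python
import numpy as np

def similarity(p, q, img, alpha=1.0, beta=1.0):
    # Assumed stand-in for Eq. (10): penalize spatial distance and
    # intensity difference; alpha = beta = 1 as in [36].
    dis = np.hypot(p[0] - q[0], p[1] - q[1])
    d_int = abs(float(img[p]) - float(img[q]))
    return 1.0 / (1.0 + alpha * dis + beta * d_int)

def fuse(img, m1, m2, r=2):
    """Fuse two label maps: keep uncontested labels, re-vote the rest."""
    fused = m1.copy()
    H, W = img.shape
    ys, xs = np.nonzero(m1 != m2)                 # controversial pixels
    for y, x in zip(ys, xs):
        la, lb = m1[y, x], m2[y, x]               # the two candidate labels
        score = {la: 0.0, lb: 0.0}
        for ny in range(max(0, y - r), min(H, y + r + 1)):
            for nx in range(max(0, x - r), min(W, x + r + 1)):
                if (ny, nx) == (y, x) or m1[ny, nx] != m2[ny, nx]:
                    continue                      # only uncontested pixels vote
                lbl = m1[ny, nx]
                if lbl in score:
                    score[lbl] += similarity((y, x), (ny, nx), img)
        fused[y, x] = max(score, key=score.get)   # ties keep M1's label
    return fused

# Tiny example: two 5x5 maps that disagree on a couple of pixels.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, (5, 5))
m1 = np.zeros((5, 5), int); m1[:, 3:] = 2; m1[3:, :2] = 1
m2 = m1.copy(); m2[1, 3] = 0; m2[3, 1] = 0      # seeded disagreements
print("controversial:", list(zip(*np.nonzero(m1 != m2))))
print(fuse(img, m1, m2))
```

Restricting the vote to uncontested neighbors is a design choice that keeps unreliable pixels from reinforcing each other; a radius of r = 12, as in the paper's parameter table, simply widens the window scanned here.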
Figure 8 shows a simple example of segmentation fusion, in which all the pixels $p_{ij}$ $(i, j = 1, \ldots, 5)$ are partitioned into three classes $l_1, l_2, l_3$. The uncontested pixels are shown in Figure 8a: pixels $p_{11}, p_{12}, p_{13}, p_{23}, p_{24}, p_{51}, p_{52}, p_{53}, p_{54}, p_{55}$ belong to class $l_1$; pixels $p_{21}, p_{22}, p_{31}, p_{32}, p_{41}$ belong to class $l_2$; and pixels $p_{15}, p_{25}, p_{34}, p_{35}, p_{44}, p_{45}$ belong to class $l_3$. The remaining pixels $p_{14}, p_{33}, p_{42}, p_{43}$ are controversial, as shown in Figure 8b: the class labels of each controversial pixel in $M_1$ and $M_2$ are inconsistent. Taking pixel $p_{14}$ as an example, its class label in map $M_1$ is $l(p_{14} \in M_1) = l_1$, but it is classified into class $l_3$ in map $M_2$, i.e., $l(p_{14} \in M_2) = l_3$. The four controversial pixels are reclassified by Equation (9). As seen in Figure 8c, their final class labels are $l(p_{14}) = l_3$, $l(p_{33}) = l_2$, $l(p_{42}) = l_1$, $l(p_{43}) = l_3$. The final segmentation fusion result F is shown in Figure 8d. Segmentation maps obtained by IIMT may contain islands or isolated holes. The fusion scheme integrates the two segmentation maps to reduce the number of misclassified pixels, and may eliminate such islands or isolated holes, yielding a better segmentation result.

Experimental Protocols

Transaxial MR-T2 brain images with various slices, downloaded from "The Whole Brain Atlas" of Harvard Medical School (http://www.med.harvard.edu/aanlib/home.html, accessed on 17 May 2021), were used in the segmentation experiments. For reasons of space, the ten brain slices #022–#112 displayed in Figure 9 were chosen to demonstrate the performance of the proposed algorithm. The parameters of the proposed algorithm are listed in Table 1. All experiments were performed on a computer with an Intel(R) Core(TM) i7-7500U CPU at 2.70 GHz, 8 GB RAM and Windows 10, using MATLAB 8.1.0.604 (R2013a).

Table 1. Parameters of the proposed algorithm:
- $\lambda_1$ — weight of the base layer for hybrid L1 − L0 layer decomposition
- $\lambda_2 = 0.1\lambda_1$ — weight of the detail layer for hybrid L1 − L0 layer decomposition
- r = 12 — radius for segmentation fusion
- K = 1, 2, 3, 4, 5 — number of thresholds

(1) Uniformity measure. The uniformity measure reflects the intensity differences of pixels within the same segmented class and between different segmented classes. It is defined as

$U = 1 - 2K \cdot \dfrac{\sum_{j=1}^{K+1} \sum_{i \in S_j} (I_i - \mathrm{Ave}(S_j))^2}{(M \times N)(I_{\max} - I_{\min})^2}$,

where K denotes the number of thresholds; $I_i$ represents the gray value of pixel i in the original image I; $S_j$ refers to the j-th segmented class of image I; $\mathrm{Ave}(S_j)$ denotes the average gray value of all pixels in $S_j$; M × N represents the size of image I; and $I_{\max}$ and $I_{\min}$ denote the maximum and minimum gray values of pixels in image I, respectively. The uniformity measure U takes values between 0 and 1; the higher the value, the better the performance, and vice versa. To fully assess the performance of the proposed algorithm, three common metrics in addition to the uniformity measure were used in the comparison experiments. Let $R_1$ denote the automatic segmentation of image I, and $R_2$ the ground-truth segmentation.

(2) Misclassification error. The misclassification error is the probability of pixels being misclassified, namely the ratio of foreground pixels incorrectly classified as background plus background pixels incorrectly classified as foreground, to all pixels:

$ME = 1 - \dfrac{|B_{R_2} \cap B_{R_1}| + |F_{R_2} \cap F_{R_1}|}{|B_{R_2}| + |F_{R_2}|}$,

where B and F denote the background and foreground pixel sets of the respective segmentations. Lower values indicate better segmentation.

(3) Hausdorff distance. The Hausdorff distance is defined as

$H(R_1, R_2) = \max\{h(R_1, R_2),\, h(R_2, R_1)\}$, with $h(A, B) = \max_{a \in A} \min_{b \in B} \|a - b\|$.

Hence, a satisfactory segmentation corresponds to a low Hausdorff distance.

(4) Jaccard index. The Jaccard index is defined as

$J(R_1, R_2) = |R_1 \cap R_2| \,/\, |R_1 \cup R_2|$.

The value of the Jaccard index varies from 0 to 1; higher values of J indicate better segmentation.
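A compact sketch of the four evaluation metrics follows, implemented directly from the definitions above (the Hausdorff distance uses the standard max–min form). Binary foreground/background masks are assumed for ME, the Hausdorff distance and the Jaccard index, as is usual when comparing against a ground truth; the toy image is an assumption for demonstration only.

```python
import numpy as np

def uniformity(img, labels, K):
    """Uniformity measure U over the K + 1 segmented classes."""
    num = 0.0
    for j in np.unique(labels):
        vals = img[labels == j].astype(float)
        num += ((vals - vals.mean()) ** 2).sum()
    span = float(img.max()) - float(img.min())
    return 1.0 - 2.0 * K * num / (img.size * span ** 2)

def misclassification_error(auto_fg, gt_fg):
    """ME: share of pixels whose fg/bg assignment disagrees with ground truth."""
    b_ok = np.logical_and(~gt_fg, ~auto_fg).sum()
    f_ok = np.logical_and(gt_fg, auto_fg).sum()
    return 1.0 - (b_ok + f_ok) / gt_fg.size

def hausdorff(a_pts, b_pts):
    """Symmetric Hausdorff distance between two point sets (N x 2 arrays)."""
    d = np.linalg.norm(a_pts[:, None, :] - b_pts[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def jaccard(auto_fg, gt_fg):
    inter = np.logical_and(auto_fg, gt_fg).sum()
    union = np.logical_or(auto_fg, gt_fg).sum()
    return inter / union

# Toy check on an 8x8 image with a slightly shifted foreground square.
img = np.zeros((8, 8), np.uint8); img[2:6, 2:6] = 200
gt = img > 0
auto = np.zeros_like(gt); auto[2:6, 3:7] = True
print(f"U  = {uniformity(img, auto.astype(int), K=1):.3f}")
print(f"ME = {misclassification_error(auto, gt):.3f}")
print(f"J  = {jaccard(auto, gt):.3f}")
pts = lambda m: np.argwhere(m)
print(f"H  = {hausdorff(pts(auto), pts(gt)):.3f}")
```

Running the four metrics on the same pair of masks, as here, makes their complementary behavior visible: a one-column shift degrades ME and J moderately but keeps the Hausdorff distance small.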
Comparison with Otsu-Based Methods

In this paper, the newly proposed segmentation algorithm (subsequently referred to as "Proposed") is based on the Otsu method. To verify its effectiveness, this subsection compares it with three Otsu-based algorithms in terms of single thresholding (K = 1) and multilevel thresholding (K = 2, 3, 4, 5). The comparison algorithms include (1) the original Otsu method (Otsu), (2) the newly proposed interval iteration multilevel thresholding method (IIMT), and (3) IIMT based on hybrid L1 − L0 layer decomposition (HL-IIMT). Figures 10 and 11 display the segmentation results of the different algorithms for slice #042 and slice #082, respectively. For single-level thresholding (K = 1), the segmentation results obtained by the Otsu method contain many fragmented small areas, such as the lower soft tissue in the first row of Figure 10a, whereas IIMT performs slightly better; the edges segmented by HL-IIMT and Proposed are much clearer. For K ≥ 2, Otsu and IIMT have similar segmentation effects, while HL-IIMT and Proposed are better in terms of edge preservation and denoising, as shown in the segmentation results in Figure 11 (K = 2, K = 4). Table 2 shows the uniformity measure (U) values of the Proposed, HL-IIMT, IIMT and Otsu algorithms for slice #042 and slice #082, with the best evaluation results marked in bold. The U values achieved by Proposed are the highest for both test images. To present the results more clearly, Figure 12 plots the comparison of U for the different algorithms based on Table 2: Proposed achieves the highest values, HL-IIMT comes second, followed by IIMT and Otsu. This indicates that the novel IIMT thresholding method presented in this paper is effective, and that the proposed IIMT-based algorithm obtains satisfactory segmentation results with clear edges and little noise.

Experimental Results on Images Containing Noise

This subsection compares the segmentation results of the different algorithms (Proposed, Otsu, IIMT and HL-IIMT) on images containing noise. Figure 13 displays five images with Gaussian noise N(0, 0.001) added to images #022, #042, #062, #082 and #102, selected from Figure 9. A comparison of the evaluation results for the different segmentation algorithms on images containing noise with K = 1, 4 is shown in Table 3, with the corresponding comparison charts in Figure 16. In Table 3, the best results are marked in bold. Proposed consistently has the highest U values. For images containing noise, both IIMT-based algorithms (HL-IIMT and Proposed) are superior to the original Otsu method in single-threshold segmentation; furthermore, Proposed achieves satisfactory results in multilevel threshold segmentation compared to the other three algorithms (IIMT, HL-IIMT and Otsu).

Comprehensive Comparison

To comprehensively evaluate the performance of the proposed algorithm, the segmentation results of Proposed were compared with those of six other multilevel thresholding algorithms in this experiment, namely the local Laplacian filtering and discrete curve evolution-based method (LLF-DCE) [39], the particle swarm optimization-based method (PSO), the bacterial foraging-based method (BF) and the adaptive bacterial foraging-based method (ABF) [43], the Nelder–Mead simplex-based method (NMS), and the real-coded genetic algorithm (RCGA) [40]. Brief descriptions of the seven algorithms are as follows. (1) Proposed: In the proposed algorithm, the initial thresholds and the mean value of each class are obtained by Otsu multilevel thresholding.
Otsu single thresholding is then iteratively performed on each interval to search for the optimal threshold in the sub-region. (2) LLF-DCE: In the LLF-DCE method, discrete curve evolution (DCE) is used to simplify the curve shape of the image histogram, and important points, generally located in peak or valley regions, are retained [39]. Gray levels corresponding to these points form a series of intervals, and Otsu single thresholding is performed in each interval to search for the optimal threshold. (3) PSO: PSO is a stochastic global optimization algorithm that simulates the foraging behavior of birds. Each bird is simulated by a massless particle with two attributes, speed and position, and the optimal solution is sought by continuously updating them. (4) BF: BF is a heuristic algorithm. In the process of maximizing Kapur's entropy and the between-class variance, BF searches for optimal thresholds by simulating the foraging behavior of Escherichia coli in the human gut, through four actions: chemotaxis, swarming, reproduction, and elimination–dispersal. (5) ABF: In the ABF method, an adaptive step size is employed in the traditional bacterial foraging method to improve the exploration and exploitation capability. (6) NMS: NMS is a direct search method for multi-dimensional unconstrained minimization, used here to optimize the maximum entropy criterion and identify the optimal thresholds. (7) RCGA: In the RCGA method, simulated binary crossover (SBX) is employed in the crossover and mutation mechanisms of a real-coded genetic algorithm. SBX is essentially adaptive and creates child solutions in proportion to the difference between the parent solutions. The optimal thresholds are then found by maximizing Kapur's entropy.
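For illustration, the sketch below shows how a generic particle swarm might search for K thresholds by maximizing the between-class variance of Equation (2) (the comparison methods above variously maximize Kapur's entropy or $\sigma_B^2$). This is a bare-bones PSO, not the implementation benchmarked in the paper; the swarm size, inertia and acceleration coefficients are arbitrary choices, and the trimodal histogram is an assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

def sigma_b2(hist, thr):
    """Between-class variance for (unsorted, real-valued) thresholds `thr`."""
    p = hist / hist.sum()
    lv = np.arange(p.size)
    mu_T = (p * lv).sum()
    edges = np.concatenate(([0], np.sort(thr).astype(int) + 1, [p.size]))
    s = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        if w > 0:
            s += w * ((p[lo:hi] * lv[lo:hi]).sum() / w - mu_T) ** 2
    return s

def pso_thresholds(hist, K, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(1, 254, (n_particles, K))     # particle positions = thresholds
    v = np.zeros_like(x)                          # particle velocities
    pbest = x.copy()
    pbest_f = np.array([sigma_b2(hist, xi) for xi in x])
    g = pbest[pbest_f.argmax()].copy()            # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, 1, 254)
        f = np.array([sigma_b2(hist, xi) for xi in x])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmax()].copy()
    return np.sort(g.astype(int))

# Toy trimodal histogram standing in for an MR slice (an assumption).
s = np.concatenate([rng.normal(40, 8, 4000), rng.normal(120, 12, 4000),
                    rng.normal(200, 10, 2000)])
hist, _ = np.histogram(np.clip(s, 0, 255), bins=256, range=(0, 255))
print("PSO thresholds (K = 2):", pso_thresholds(hist, K=2))
```

Swapping `sigma_b2` for a Kapur's entropy function reproduces the objective used by several of the comparison methods without touching the swarm update itself.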
Figure 17 depicts the segmentation results of Proposed for brain slices #022–#112 with the number of thresholds K from 2 to 5. Segmentation results with different numbers of thresholds have different effects; in general, the higher the level of thresholding, the better the segmentation quality. Table 4 compares the optimal threshold values obtained by the different algorithms for K = 2, 3, 4, 5. The proposed algorithm and LLF-DCE are both based on a fusion scheme: the former combines two different segmentation results obtained by IIMT and HL-IIMT, while the latter combines two obtained by LLF-Otsu and DCE-Otsu. In Table 4, it can be seen that the final thresholds selected by the different algorithms differ from each other. Table 5 shows the uniformity measure (U) values of the different segmentation algorithms, with the best results marked in bold. The U value of Proposed is clearly the highest for each test image and each level of thresholding, and the proposed algorithm is superior to PSO, BF, ABF, NMS and RCGA in most cases. Taking test image #062 as an example, for K = 2 and 4 the U values of Proposed exceed 0.98, whereas the best evaluation result among those five algorithms is merely 0.9236 (PSO, K = 4). For K = 3 and 5, the U values of Proposed exceed 0.99, whereas the best results obtained by PSO, BF, ABF, NMS and RCGA are 0.9835 (NMS, K = 5) and 0.9855 (RCGA, K = 5), with the remainder all below 0.95. Compared to the DCE method, the evaluation values of Proposed and LLF-DCE do not differ greatly, with Proposed performing slightly better for each test image.

To show the comprehensive performance of the proposed algorithm, Figure 18 presents the average values and standard deviations of U for the different segmentation algorithms with the number of thresholds K from 2 to 5. The average U values of the proposed algorithm are higher than those of the other comparison algorithms for each level of thresholding, indicating superior segmentation quality; in particular, they are significantly higher than the average U values of PSO, BF, ABF, NMS and RCGA for K = 2, 3, 4. The error bars (standard deviations) of Proposed and LLF-DCE are clearly shorter than those of the other segmentation algorithms. Figure 19 compares the average values of the misclassification error, Hausdorff distance and Jaccard index for the different algorithms: the proposed algorithm achieves the lowest misclassification error and Hausdorff distance and the highest Jaccard index. LLF-DCE also performs well in comparison with the others. In summary, our proposed algorithm performs better than the other comparison segmentation algorithms: it not only achieves good segmentation results but also has excellent stability.

Experimental Results on the BRATS Database

In this subsection, we applied the proposed algorithm to the BRATS (Multimodal Brain Tumor Image Segmentation Benchmark) database. The BRATS database (http://www.imm.dtu.dk/projects/BRATS2012/data.html, accessed on 25 September 2021) is compiled from the international brain tumor segmentation challenge of the MICCAI 2012 conference. It is a widely used database composed of multi-contrast brain MR scans of 25 low-grade and 25 high-grade glioma cases with the corresponding ground truth. Each case includes four modalities (T1, T1c, T2 and FLAIR) [44], and each MR scanning sequence contains more than one hundred images. Figure 20 presents an example of brain MR images from BRATS: Figure 20a shows the original images and Figure 20b the corresponding ground truth. The performance of the proposed algorithm on BRATS was compared with the other segmentation algorithms in terms of the uniformity measure, misclassification error, Hausdorff distance and Jaccard index. Figure 21 shows the average evaluation values for the different algorithms. The proposed algorithm achieves excellent results for the uniformity measure and Hausdorff distance, as shown in Figure 21a,c, which are clearly better than those of the other algorithms. From Figure 21b,d, the proposed algorithm also performs best, followed by LLF-DCE.

Conclusions

In this paper, a novel multilevel thresholding algorithm based on interval iteration (named IIMT) for brain MR images is proposed. In contrast to most other multilevel thresholding methods, IIMT iteratively searches sub-regions of the image to achieve segmentation, rather than taking the original image as a whole. First, standard Otsu multilevel thresholding is performed on the original image to obtain initial thresholds and class means. Then, in each succeeding iteration, standard Otsu single thresholding determines the threshold in each interval formed by the class means derived in the previous iteration. For two adjacent peaks in the gray histogram, the optimal threshold is found when the difference between the thresholds obtained in two consecutive iterations is less than a preset value. Iteration stops when all optimal thresholds have been found.
Furthermore, we presented an IIMT-based segmentation framework for brain MR images. The hybrid L1 − L0 layer decomposition method is utilized to decompose the original image and derive its base layer. IIMT is performed separately on the original image and its base layer to obtain two different segmentation results and, to improve segmentation accuracy, a fusion scheme is adopted to fuse them. Experimental results verified that the proposed algorithm is applicable and achieves satisfactory segmentation results. Compared to other multilevel thresholding algorithms, the proposed algorithm obtains a better visual effect; subjectively, its segmentation results have clear edges and little noise. The uniformity measure, misclassification error, Hausdorff distance and Jaccard index objectively demonstrated the performance of the proposed algorithm. The proposed algorithm segments medical images effectively, and shows excellent stability and robustness for images containing noise. In clinical medicine, it can assist doctors in diagnosing diseases, locating lesion areas and detecting changes in tumor volume and size. It can also be used as pre-processing for other image processing technologies, such as image fusion. In future, our research can be extended in three directions. First, the threshold-determination idea in the proposed IIMT can be incorporated into other multilevel thresholding algorithms and extended to 2D/3D Otsu or similar criteria, such as maximum entropy and minimum error. Second, more effective segmentation fusion strategies can be designed to improve the quality of medical image segmentation. Finally, deep convolutional neural networks can be adopted for image segmentation: we will combine traditional image segmentation techniques with deep learning models with the aim of achieving good segmentation results.
6,846
2021-10-29T00:00:00.000
[ "Computer Science" ]
φ Meson Spin Alignment and the Azimuthal Angle Dependence of Λ (Λ̄) Polarization in Au+Au collisions at RHIC

The initial large global angular momentum in non-central relativistic heavy-ion collisions can produce strong vorticity and, through spin–orbit coupling, cause the spin of particles to align with the system's global angular momentum. We present the azimuthal-angle-dependent (relative to the reaction plane) polarization of Λ and Λ̄ in mid-central Au+Au collisions at √s_NN = 200 GeV. We also present the φ meson spin alignment parameter ρ_00 in Au+Au collisions at √s_NN = 19.6, 27, 39, 62.4 and 200 GeV. The implications of the results are discussed.

Introduction

High-energy relativistic heavy-ion collisions at the Relativistic Heavy Ion Collider (RHIC) produce a strongly interacting, hot and dense medium known as the Quark Gluon Plasma (QGP) [1]. The initial orbital angular momentum associated with the receding spectators in non-central collisions is large (∼1000 ℏ) and may be transferred to quarks through spin–orbit coupling [2-4]; it is then transmitted to final-state hadrons and is detectable through the Λ(Λ̄) polarization and the φ meson spin alignment. Measurements of the polarization of particles produced in heavy-ion collisions can therefore provide new insights into the initial conditions and the evolution of the QGP [5,6]. The STAR experiment at RHIC has observed for the first time a significant alignment between the angular momentum of the medium produced in non-central collisions and the spin of Λ(Λ̄) hyperons (J = 1/2), revealing that the matter produced in heavy-ion collisions is by far the most vortical system ever observed [7]. Such vorticity is expected to be maximal at the equator and, owing to the low viscosity of the system, may not be efficiently propagated to the poles. This can lead to a larger in-plane than out-of-plane polarization for hyperons. The study of the azimuthal angle dependence of the hyperon polarization can help us understand the transport properties of the system and shed light on the dynamics of a highly vortical, low-viscosity environment. The strong vorticity, acting together with particle production mechanisms (e.g. coalescence and hadronization), may also influence the spin alignment of φ mesons (J = 1). The magnitude and the transverse momentum (p_T) dependence of the spin alignment are expected to be sensitive to different hadronization scenarios [4]; thus the φ meson spin alignment also probes the particle production mechanisms.

Method

The global polarization of spin-1/2 hyperons can be determined from the angular distribution of the hyperon decay products relative to the system orbital momentum L [8]:

$\frac{dN}{d\cos\theta^*} = \frac{1}{2}\left(1 + \alpha_H P_H \cos\theta^*\right)$,   (1)

where $P_H$ is the hyperon global polarization, $\alpha_H$ is the hyperon decay parameter ($\alpha_\Lambda = -\alpha_{\bar\Lambda} = 0.642$), and $\theta^*$ is the angle in the hyperon rest frame between the system orbital momentum L and the three-momentum of the daughter baryon from the hyperon decay. Averaging over all phase space, we extract the average projection of the polarization on L; it is shown that [9]

$P_H = \frac{8}{\pi \alpha_H} \frac{\langle \sin(\Psi_{EP} - \phi_p^*) \rangle}{R_{EP}^{(1)}}$,   (2)

where $\Psi_{EP}$ is the angle of the first-order event plane that estimates the reaction-plane angle $\Psi_{RP}$, $\phi_p^*$ is the azimuthal angle of the daughter proton (antiproton) in the Λ(Λ̄) rest frame, and $R_{EP}^{(1)}$ is the resolution of the first-order event plane.
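As an illustration of how equation (1) is used in practice, the sketch below generates decay angles from the distribution (1/2)(1 + α_H P_H cos θ*) for a chosen "true" polarization and recovers P_H from the first moment of cos θ*, using ⟨cos θ*⟩ = α_H P_H / 3 (obtained by integrating x against equation (1)). This is a toy Monte Carlo for intuition only, with an assumed polarization value; the actual measurement relies on equation (2), with the event-plane angle and its resolution correction.

```python
import numpy as np

ALPHA = 0.642          # Lambda decay parameter, alpha_Lambda
P_TRUE = 0.002         # assumed 'true' global polarization (~0.2%)

rng = np.random.default_rng(42)

def sample_cos_theta(n):
    """Draw cos(theta*) from dN/dcos = 0.5 * (1 + ALPHA * P_TRUE * cos)."""
    out = np.empty(n)
    filled = 0
    while filled < n:                  # accept-reject with a flat envelope
        x = rng.uniform(-1.0, 1.0, n - filled)
        f_max = 0.5 * (1.0 + ALPHA * P_TRUE)
        keep = rng.uniform(0.0, 1.0, x.size) < \
            0.5 * (1.0 + ALPHA * P_TRUE * x) / f_max
        x = x[keep]
        out[filled:filled + x.size] = x
        filled += x.size
    return out

n = 5_000_000                          # polarization signals are tiny, so
cos_t = sample_cos_theta(n)            # very large samples are needed
p_hat = 3.0 * cos_t.mean() / ALPHA     # from <cos theta*> = ALPHA * P / 3
p_err = 3.0 * cos_t.std() / (ALPHA * np.sqrt(n))
print(f"P_H = {p_hat:.4f} +/- {p_err:.4f} (true {P_TRUE})")
```

Even with five million decays, the statistical uncertainty is comparable to the signal itself, which is why the published measurements combine enormous event samples and correct for the finite event-plane resolution.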
The spin alignment of a spin-1 vector meson is described by a spin-density matrix ρ, a 3 × 3 Hermitian matrix with unit trace; a deviation of the diagonal elements $\rho_{mm}$ (m = −1, 0, 1) from 1/3 signals a net spin alignment. Because vector mesons decay strongly, the diagonal elements $\rho_{-1-1}$ and $\rho_{11}$ are degenerate and $\rho_{00}$ is the only independent observable. It can be determined from the angular distribution of the decay products [10]:

$\frac{dN}{d\cos\theta^*} = N_0 \left[(1 - \rho_{00}) + (3\rho_{00} - 1)\cos^2\theta^*\right]$,   (3)

where $N_0$ is the normalization and $\theta^*$ is the angle between the system orbital momentum L and the momentum direction of a daughter particle in the rest frame of the parent vector meson. This analysis uses charged particles reconstructed by the Time Projection Chamber (TPC) and matched to the Time Of Flight (TOF) detector near mid-rapidity (|η| < 1.0). We reconstruct the Λ(Λ̄) and φ meson invariant masses through their decay channels Λ(Λ̄) → p + π (p̄ + π) and φ → K⁺ + K⁻, respectively. Topological and kinematic cuts are applied to reduce the combinatorial background. In the study of the Λ(Λ̄) polarization, the direction of L is determined by the first-order event plane reconstructed with information from the Shower Maximum Detectors of the Zero Degree Calorimeters. In the study of the φ spin alignment, the direction of L is determined by the second-order event plane reconstructed with TPC tracks.

Azimuthal angle dependence of Λ(Λ̄) polarization

The left panel of Figure 1 shows the Λ and Λ̄ polarization (P_H) as a function of φ − Ψ_obs at mid-rapidity in 20–50% central Au+Au collisions at √s_NN = 200 GeV, where φ is the azimuthal angle of the Λ(Λ̄). The P_H extracted off the mass peak is consistent with zero for both Λ and Λ̄, as expected, and serves as a consistency check. The P_H extracted on the mass peak decreases with increasing φ − Ψ_obs; this feature is the same for both Λ and Λ̄, and there is no significant difference in values between them. Because Λ and Λ̄ have opposite intrinsic magnetic moments, a magnetic field is expected to enhance (reduce) the polarization of Λ̄ (Λ). Within statistics, our data show no such effect for Au+Au collisions at √s_NN = 200 GeV. This may be due to the short lifetime of the magnetic field, which gives particles little time to align their spins, and/or to the late production time of the hyperons, by which point the magnetic field has diminished. The finite P_H averaged over the four bins is ∼0.2%, consistent with STAR's previously published result [7]. The right panel shows the combined Λ and Λ̄ P_H as a function of φ − Ψ_obs at mid-rapidity in 20–50% central Au+Au collisions at √s_NN = 200 GeV. The significance of ΔP_H, for Λ and Λ̄ combined, between the φ − Ψ_obs bins [0, π/8] and [3π/8, π/2] is 4.7σ. The larger in-plane than out-of-plane polarization is consistent with the picture of maximal vorticity at the equator and the low viscosity of the system. Note that although P_H has been corrected for the event-plane resolution when presented as a function of φ − Ψ_obs, the smearing correction of the φ − Ψ_obs bins has not yet been applied.

φ meson spin alignment

The right panel of Figure 2 shows ρ_00 for the φ meson within 0.4 < p_T < 3.0 GeV/c as a function of beam energy for 20–60% central Au+Au collisions [11]. The central values of ρ_00 are slightly larger than 1/3, but the large systematic uncertainties prevent us from drawing definite conclusions. Note that the efficiency corrections due to the kinematic cuts have not been applied to the data.

Summary

The measurement of the Λ and Λ̄ polarization as a function of azimuthal angle relative to the reaction plane has been presented. The difference in P_H, for Λ and Λ̄ combined, between the most in-plane bin [0, π/8] and the most out-of-plane bin [3π/8, π/2] is 4.7σ.
The data are consistent with the picture of a low-viscosity system with maximal vorticity at the equator.
1,708
2018-01-01T00:00:00.000
[ "Physics" ]
Consumers’ preferences regarding energy efficiency: a qualitative analysis based on the household and services sectors in Spain

Informational failures frequently lead consumers to make non-optimal energy-efficient purchasing decisions. Energy efficiency labels seek to influence consumer behaviour at the point of sale by reducing informational failures regarding energy efficiency. However, several informational and behavioural factors contribute to the energy efficiency gap and could render label-oriented policies useless. The purchasing decision model of Allcott and Greenstone (The Journal of Economic Perspectives, 26, 3–28, 2012) is used here to explore the different factors that influence purchasing decisions and to understand (i) the importance of energy consumption compared to other attributes; (ii) how consumers weight energy savings and (iii) what other benefits and costs influence the purchase of energy-efficient goods. The analysis reported here is based on qualitative research methods and is conducted in the household and service sectors (the accommodation sector and private service companies), for appliances, heating and cooling systems and cars in Spain. Results show that (i) there is still an informational gap regarding energy labels and (ii) bounded rationality and end-user behaviour are important limiting factors for the purchase of energy-efficient goods in Spain.

Introduction

The European Commission is seeking to increase the energy efficiency (EE) of energy-related products as a means of achieving energy savings of at least 32.5% by 2030 (European Commission 2014). Evidence has shown, however, that although EE may have a number of economic and environmental benefits (e.g. cost reductions, decreases in carbon and other emissions), many households and businesses invest less in it than would appear economically rational, while others make EE investments which do not seem financially worthwhile (Gerarden et al. 2017; Jaffe et al. 2004; Linares and Labandeira 2010). One explanation for this can be found in the intertemporal arbitrage problem that consumers solve when deciding whether to make an investment involving present and future costs. For instance, consumers often fail to account for running costs over the life cycle of a product and heavily discount future energy savings (Train 1985) or undervalue future savings (Allcott and Wozny 2013). This is an expression of the so-called energy efficiency gap or energy efficiency paradox (Jaffe and Stavins 1994). There are other possible explanations for this paradox, usually grouped under the headings of market failures (including informational failures) and behavioural failures (Gerarden et al. 2017; Linares and Labandeira 2010; Ramos et al. 2015). Informational failures are among the most frequent types of failure in the energy market; they lead consumers to make non-optimal choices (Allcott and Sweeney 2016; Phillips 2012). Many policy measures have been proposed and explored for addressing failures of this type, including information campaigns, fiscal incentives, feedback tools, audits and certificates or labels (Newell and Siikamäki 2014; Ramos et al. 2015; Waechter et al. 2015). Energy labels are commonly used to address informational failures as they are easy and cheap to implement (Ramos et al. 2015).
Energy labelling in the European Union (EU) dates back more than 25 years: it was first implemented in 1994 for appliances in application of Directive 92/75/EEC and extended to cars in 1999 with Directive 1999/94/EC. Energy labels are designed to highlight the EE of a good and consequently reduce the information gap (Carroll et al. 2016; Lucas and Galarraga 2015). They provide information on the energy consumption of an energy-related product, on its use of other resources (such as water) and on comfort levels (e.g. noise). The content of labels varies from one product and sector to another: for some products, colour-based labels are used, while for others, labels report technical information. The EU Energy Labelling Directive (2010/30/EU) for household appliances requires energy labels to be displayed on energy-related appliances at the point of sale, with a colour-coded scale ranging from A+++ (the most efficient) to D (the least efficient). 1 The heating and cooling industry is covered by two types of regulation: the Ecodesign Regulation and the Energy Labelling Regulation. The latest Ecodesign Regulation, published in 2016, 2 summarises the most relevant information on energy performance, EE and the emission of nitrogen oxides for air heating and cooling products, high-temperature process chillers and fan coil units. Most of these heating and cooling products are also covered by energy labelling regulations 3 and use technical labels. Under the Labelling Directive for cars, two types of label are used in EU countries: a compulsory label, which must provide information on CO2 emissions (g/km) and fuel consumption (L/100 km), and a voluntary label which provides the same information but with a coloured alphabetical (A–G) grid. The voluntary label is not currently applied in Spain. There is a growing body of research on how to improve EE labels so as to encourage energy-efficient purchases, whether by providing running-cost information (Carroll et al. 2016; Codagnone et al. 2016; Kallbekken et al. 2013) or health- and environment-related information (Asensio and Delmas 2016), or by improving the design of labels to take behavioural failures into account (Waechter et al. 2016). However, other informational and behavioural factors are also likely to diminish the role of these labels, and failing to control for those factors would mean that efforts to improve labelling schemes merely scratch the surface. This paper seeks to provide some qualitative insights into the factors that influence consumers' purchasing decisions regarding energy-efficient goods. The analysis is supported by the purchasing decision model of Allcott and Greenstone (2012), in which purchasing decisions are driven by three sets of factors: (i) the difference in energy intensity between goods; (ii) unobserved costs and benefits and (iii) various weightings representing consumers' preferences, attitudes and behaviour that reduce the weight of EE in purchasing decisions. Goods are assumed to differ only in their energy intensity. In a real market, however, there are attributes other than energy intensity that differentiate energy durable goods, and consumers may rate them more highly than EE attributes; consumers may thus prefer a non-energy-efficient good for attributes other than energy intensity. Failing to identify such non-energy-related attributes and possible weighting factors can result in an overestimation of the role of energy savings in the EE gap.
The paper sets out to answer three main questions associated with the key parameters of the purchasing decision model of Allcott and Greenstone (2012): (i) do consumers focus only on energy intensity differences when purchasing energy durable goods, or are there other attributes that they are likely to rate more highly than EE? (ii) How do consumers weight energy savings? (iii) What are the unobserved costs and benefits of energy-efficient goods? A common qualitative methodology is used to address these questions in different sectors (and products) in order to highlight potential differences between them. The analysis focuses on the household, services and transport sectors in Spain, which between them account for about 75% of the country's energy consumption (IDAE 2017). EE provides an opportunity to reduce energy consumption in the household sector (Linares and Labandeira 2010; Ramos et al. 2015) and in the accommodation and transport sectors (Schleich 2009; Schlomann and Schleich 2015), and to reduce energy-related running costs in the services sector (Patel and Guedes 2017; Sakshi et al. 2020). The products under review account for a significant proportion of energy consumption in Spain. 4 Specifically, they are (i) household appliances; (ii) heating, ventilation and air-conditioning (HVAC) systems and appliances for accommodation owners and (iii) cars for private companies with their own fleets. Two qualitative research methods are used to capture experiences in the different sectors and products: focus group discussion and in-depth interviews. Specifically, one focus group and sixteen in-depth interviews were carried out in Spain between May and July 2017 to collect qualitative data from the household and services sectors (the accommodation sector and private service companies), respectively. The rest of the paper is organised as follows: the "Methodology" section presents the decision model used and the qualitative methodology applied; the "Results" section reports the main results for the three specific research questions raised; the "Discussion" section discusses the main results and the "Conclusion" section concludes.

Theoretical framework of investment decisions

Energy-efficient investment decisions are intertemporal decisions: in the initial period, the consumer chooses the capital investment; in the second period, the consumer uses the good and incurs the energy cost. The investment decision model of the seminal paper of Allcott and Greenstone (2012), which helps explain why investments in energy efficiency with positive financial returns are not realised, is used to structure the decision-making of consumers regarding energy-efficient purchases. This investment model has been widely used for identifying factors limiting energy efficiency investments and for testing ways to reduce information asymmetry, for transport (Brazil et al. 2019) and appliances (Damigos et al. 2020; Filippini et al. 2020), among other goods. The model considers a profit/utility-maximising agent who has to decide between an energy-efficient good with an energy intensity 5 of $e_1$ and an energy-inefficient good with an energy intensity of $e_0 > e_1$. The two goods are assumed to differ only in their energy intensity levels. The agent i then chooses the energy-efficient investment if

$\gamma \, \dfrac{p \, m_i (e_0 - e_1)}{r} + \varepsilon_i > c$,   (1)

where p is the price of a unit of energy, $m_i$ is the agent-specific quantity of energy services and r is the risk-adjusted discount rate. ε represents the unobserved costs or benefits that influence the utility function and c the incremental investment cost of the more efficient good. γ is a weighting parameter which captures investment inefficiencies when γ < 1.
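A quick numerical illustration of condition (1) follows: suppose electricity costs p = 0.25 €/kWh, the agent uses m_i = 1,500 kWh of energy services a year, the efficient good cuts energy intensity by e_0 − e_1 = 0.2, the discount rate is r = 5%, the price premium is c = €200 and the unobserved costs/benefits are ε_i = 0. All figures are illustrative assumptions, not data from the study.

```python
# Illustrative check of the Allcott-Greenstone purchase condition (1);
# every number below is an assumption chosen for the example.
p = 0.25          # energy price, EUR per kWh
m_i = 1_500       # agent-specific quantity of energy services, kWh/year
e0, e1 = 1.0, 0.8 # energy intensities of the inefficient/efficient good
r = 0.05          # risk-adjusted discount rate
c = 200.0         # incremental investment cost of the efficient good, EUR
eps = 0.0         # unobserved costs (< 0) or benefits (> 0), EUR

for gamma in (1.0, 0.5, 0.1):   # gamma < 1 captures investment inefficiencies
    pv_savings = gamma * p * m_i * (e0 - e1) / r
    buys = pv_savings + eps > c
    print(f"gamma={gamma:>4}: weighted PV savings = {pv_savings:7.0f} EUR "
          f"-> buys efficient good: {buys}")
```

Even with a clearly positive net present value at γ = 1 (€1,500 of savings against a €200 premium), a sufficiently low weighting γ (here 0.1) flips the decision, which is exactly the inefficiency the model is designed to capture.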
Focus group and in-depth interviews

A qualitative approach based on a focus group and in-depth interviews was used to address the research questions raised in the "Introduction" section. Focus group and in-depth interview methods (Milena et al. 2008; Starr 2014) were used in order to ascertain how EE is understood by consumers and what barriers they still face when deciding the EE level of energy-related purchases. These methods are particularly well suited to understanding the practices, opinions and expectations of different consumers regarding EE and EE labels. They are equally well suited to exploring consumers' perceptions and preferences without imposing the restrictions of a quantitative approach, where predefined statements are usually proposed to the participant or interviewee, with the risk that they are not the most relevant to that particular consumer. They also enable a common analytical framework to be used for all sectors. However, this approach can limit the robustness required for policy recommendations. This qualitative research thus complements the more quantitative studies reported in the literature, with a view to providing a better understanding of all the dimensions of EE knowledge, and of the case of Spain in particular. Participants for both the focus group and the in-depth interviews were recruited by a market research company that collects market and consumer information in Spain.

Focus group

A focus group was designed to analyse consumers' preferences regarding household appliances, specifically refrigerators and washing machines. 6 The focus group discussion was conducted on May 31, 2017, in the city of Bilbao with a total of 8 participants and lasted around 2 h. Participants were recruited strategically to represent typical households in Spain in terms of gender, education level (low, medium, high), age, number of homes (1 or 2), household composition (number of members) and socio-economic status (low, medium, high). The characteristics of each participant are presented in Appendix 1 (Table 3). At the end of the discussion, participants were paid €25 for taking part. A diversified composition of the focus group was preferred to a repetition of several differentiated focus groups. The fact that only one focus group was carried out may limit the robustness of the results, but running a second or third focus group with participants with the same diversified profiles would have added very little value to the qualitative findings. Moreover, the goal was not to test for differences between differentiated groups of participants but rather to analyse the attitudes and opinions of typical households in Spain. Following Krueger and Casey (2008), it can be argued that different focus groups may lead to relatively different findings, but it is well documented that, for qualitative analysis, the most important factors can be covered in one well-structured focus group.

In-depth interviews

In total, 16 in-depth interviews were conducted face-to-face in Spain to analyse the cases of appliances and HVAC systems (among accommodation owners) and vehicles (at service sector companies with their own vehicle fleets).
Initially, 8 in-depth interviews (IDIs) were held in Spain between June 21, 2017, and July 5, 2017, with different types of accommodation establishments, including cottages, hotels, hostels and guesthouses, to analyse their decision-making processes in purchasing appliances 7 and HVAC. 8 The 8 accommodation owners were recruited so as to provide a representative sample of climate areas (warm and cool), geographical locations (north and Mediterranean), types of area (urban, mountain and coast), types of accommodation (cottage, guesthouse, hostel, hotel) and other establishment characteristics (star rating, number of rooms, occupancy rate). Details of the sample are provided in Table 4 in Appendix 1. Eight further IDIs were later held in and around Bilbao concerning car fleet purchasing decisions in the private services sector. This sample comprised small fleets (4 companies with 3 vehicles), medium fleets (2 companies with 11 and 18 vehicles) and large fleets (2 companies with 120 and 515 vehicles). The companies interviewed include building renovation firms, driving schools, construction companies and others (see Table 5 of Appendix 1 for more details of the sample). The total number of interviews conducted is within the normal range used to identify the most relevant views of respondents through open-ended questioning (Guion et al. 2001; Milena et al. 2008; Styśko-Kunkowska 2014).

Discussion guidelines

In-depth interviews were preferred to focus groups in the private service sector because of the constraint on bringing together executives from the sector for a focus group meeting. However, the focus group and the sixteen in-depth interviews followed common discussion guidelines with four areas (see Appendix 2). First, the context of the purchasing decision was established for each sector and product (who was responsible for purchasing decisions and what the purchasing process was); the first main area then involved identifying the key attributes that influence decision-making for purchases in the different product categories analysed (e.g. "What are the key factors in the purchasing decision?" "Do you consider EE in the purchasing decision?"). In the focus group for appliances, the group was asked to consensually weight the main attributes and explain the relative importance of each one in the purchasing decision. 9 The second area focused on the comprehension of EE. Consumers' understanding of EE (e.g. "Do you understand what EE is?" "What do you mean by EE?") and their reasons for buying more energy-efficient products (and the barriers in the way of such purchases) (e.g. "Why should you buy a highly energy-efficient product (and why not)?") were addressed. In the focus group for household appliances, a role play was staged in which participants were separated into two gender-balanced groups: one group was asked to list arguments in favour of buying an energy-efficient appliance (e.g. an A+++-labelled fridge) and the other arguments for not doing so (e.g. a D-labelled fridge). A group leader reported the arguments and a group discussion followed, with counter-arguments. In the third area, focus group and in-depth interview participants were asked about their knowledge and understanding of energy labels (e.g. "Are you familiar with the label?" "Do you think it is useful and clear?"). The fourth and final area sought to analyse how labels could be improved (e.g. "How could labelling be improved?" "What should be added/replaced/changed?").
Indicating running costs in monetary terms was discussed as a specific proposal for improving the understanding of EE labels and thereby encouraging energy-efficient purchases (e.g. "What do you think about providing energy consumption data in monetary units?" "Would you appreciate it?"). Both the focus group and the in-depth interviews were recorded on audio and transcribed to text. The information collected was assessed at a macro level in search of participant consensus, patterns and general themes. This process is known as content analysis (Elo and Kyngäs 2008; Hsieh and Shannon 2005).

Results

The results reported here hinge on the three sets of main elements of the decision model presented in the theoretical framework ("Theoretical framework of investment decisions") and answer the following questions: (i) do consumers focus only on differences in energy intensity when purchasing energy durable goods, or are there other attributes likely to be rated more highly than energy? (ii) How do consumers weight energy savings? (iii) What are the unobserved costs and benefits of energy-efficient goods?

The importance of energy intensity and other attributes

EE is rated differently in the three sectors and is not generally the biggest driving factor. For households buying refrigerators, there were different opinions regarding the role of attributes such as dimensions, capacity and price in the actual decision-making process. Some participants attributed more weight to dimensions and capacity (which it was agreed to treat jointly), whereas others put more weight on price. In the end, a consensus was reached to rank both in joint first place with the same weighting. Another consensus had to be reached between brand and energy consumption, 10 and it was finally agreed to rank the energy consumption attribute in second place. Third place went to a generic category jointly representing performance, safety and aesthetics, 11 and fourth place went to brand. The same exercise was carried out for washing machines with similar results, though in this case participants attached more importance to load capacity than to price, followed by performance, 12 dimensions and brand, in that order. When accommodation owners were asked about the factors that influenced their purchasing decisions for new appliances, the attribute most frequently mentioned was price, followed by brand in terms of value for money, durability and customers' perceptions. Capacity and noise level (decibels) were also mentioned, particularly for mini-bars in hotel rooms. Other attributes such as shape and size, aesthetics (shape, colour, etc.) and the performance of appliances (particularly TVs) were also mentioned. EE was found to be less important and was mentioned spontaneously by just three of the eight interviewees, in relation to kitchen appliances, mini-bars and hair dryers. For HVAC, constraining factors such as budget and the infrastructure of hotels were mentioned, given that once HVAC is initially installed it is very costly and difficult to renew the system and adapt it to the hotel's infrastructure. Brand was the most important factor, related mainly to durability (resistance) and technical and maintenance support; then came other characteristics such as size and energy consumption. The decision to purchase vehicles for car fleets in the private service sector was seen as a two-stage process: first, certain initial requirements are identified (for instance capacity, volume, number of seats, etc.)
as necessary for the intended purposes of the vehicles; then, a set of final attributes drove the decision. These final attributes seemed to be driven by an intertemporal arbitrage between cost minimisation and communication, safety and comfort attributes. Communication and safety-related attributes included communication between the driver, the company and the customers. Connectivity to a global positioning system and Bluetooth cell-phone connectivity were mentioned as helping to reduce transport/delivery times and deal with customers while maintaining safety and security standards. These attributes, along with air-conditioning, were also associated with comfort. The intertemporal arbitrage of cost minimisation in this case consisted of balancing the future running costs implied by energy consumption (and maintenance) against the purchase price. The price of vehicles and their energy consumption are the attributes that the interviewees immediately referred to. Vehicle brand seemed to attract less interest: the person responsible for purchasing cars usually compares several brands and chooses those that best balance these attributes. Companies also attach importance to the robustness of vehicles, so as to reduce the likelihood of unexpected maintenance expenses and reduce future running costs.
EE thus seems not to be the first attribute in the purchasing decision in any of the three sectors analysed. In the best case, it ranks second (refrigerators) and in the worst, it is not mentioned at all (most accommodation establishments in regard to the purchase of appliances).

Consumer weighting of energy savings
Lack of information is an important factor that leads consumers not to purchase energy-efficient goods (Allcott and Greenstone 2012). This factor can be seen in different forms in all three sectors (see Table 1). First, in all sectors, participants mentioned that they were unable to calculate the savings or that it would take them too long to do so. Private companies were able to calculate the savings needed to compare conventional fuel vehicles but were unable to estimate the energy costs of electric/hybrid vehicles. The lack of experience with hybrid/electric cars limits their ability to determine whether investing in such vehicles would result in net gains or net losses compared to fossil fuel vehicles.
A principal-agent problem limits the perception of the benefits obtained from energy-efficient goods, particularly in the accommodation sector and in the private sector for companies with car fleets. Customers of accommodation establishments are not willing to pay more for an energy-efficient service as they cannot observe the level of efficiency, particularly regarding HVAC. Owners are thus not willing to buy such goods, since the payback period on the investment may be longer if the price of the room does not increase. Fleet vehicles are owned by a company and operated by employees who do not pay the costs of ownership and use (maintenance and fuel). The company (the principal) wishes to minimise those costs but the employee (the agent) has no incentive to conserve fuel. Users and buyers differ in their incentives, so a usage problem results (Graus and Worrell 2008) and the company is unlikely to buy a more expensive but energy-efficient vehicle. To deal with this principal-agent issue, companies monitor consumption by assessing it individually per kilometre. However, none of the companies contacted kept strict controls.
Rather, they only checked in detail when excessive expenditure was detected: "I control things more or less. I check where they have been during the month, what they should have consumed and what they actually consumed, and if the figures are normal and logical that's the end of the matter. However, if it [consumption] has risen sharply I ask them what happened", said one interviewee from a private company. Accommodation establishments face a similar end-user problem. They seek to control energy consumption in different ways: by turning off radiators in empty rooms, informing customers face-to-face or by posters, using automatic card systems in rooms and installing remote controls.
Several sources of uncertainty are also observed which tend to reduce the importance of EE in the purchasing decision. In all three sectors, participants said that they mistrusted the information on consumption given on the label and suspected that actual consumption was higher. Households expressed a similar opinion regarding the useful lifetimes of appliances. They felt that the actual useful lifetime was likely to be shorter than the official one, and were thus reluctant to buy more expensive appliances that might not last as long as indicated. Uncertainty as to future electricity price stability is another factor that affects the perceived profitability of an investment and potentially the purchasing decision: to quote one respondent, "I think there would be more gain [referring to investment in an electric or hybrid vehicle 13 ], right? Though it's probably just like for everything else: they'll probably raise the price of electricity afterwards. There has to be a catch somewhere". Similar reasoning emerged in other sectors. Consumers believed that future increases in the price of electricity, particularly for users who switched energy source, would upset their budget balance, as they would already have to bear the initial investment in EE. Surprisingly, they associate the regulation of the energy market with the market for energy durable goods and do not realise that if the price of energy increases, running costs will be lower with more energy-efficient goods. A binding budget constraint may underpin this reasoning, particularly for those consumers who cannot bear both an (observed or anticipated) increase in energy prices and a higher initial price of the good. Consumers with a binding budget constraint who anticipate higher energy prices prefer to allocate their budget to running costs rather than to the initial investment cost.

Unobserved costs and benefits of energy-efficient purchases
Participants from all three sectors had a vague understanding of the concept of EE. They connected it with ideas such as energy production, the reduction of energy consumption and the existence of a label (in the case of households). Even in the case of car fleets, companies were concerned about the fuel consumption of cars but did not relate this to the concept of EE. In spite of this fuzzy understanding, all were aware that the reduction in energy consumption from energy-efficient goods came at the expense of a higher purchase price. All the participants from the different sectors referred to a number of hard-to-measure associated benefits or costs that would also influence their purchasing decision (see Table 2). All agreed that purchasing energy-efficient goods would benefit the environment and help mitigate climate change.
In private service companies, lower demand for fuel, gas and electricity would reduce the environmental impact from resource extraction and energy production. However, environmental awareness was mentioned by only two of the eight companies: "I'm concerned [about environmental protection] and I think in that sense the fundamental limitation is the choice of fuel". For others, "this [CO2 emissions] is a detail that we have never worried about". Additionally, households and accommodation owners were found to be aware that the associated reduction in pollution would benefit public health. Contributing to the green economy was also mentioned by households and accommodation owners as a co-benefit of the purchase of energy-efficient goods. Their purchase was thus seen as contributing to the creation of new jobs and to research and development. For interviewees whose work entailed direct contact with customers, such as accommodation establishments and private service companies, energy-efficient cars or appliances were seen as helping to green their image and conveying the message that they were companies with environmental concerns that were technologically up-to-date; they were also seen as enhancing their reputation in the sector.
Footnote 13: To ascertain the perception of company owners about gains and losses from uncertain investment, interviewees were asked the following question: "Do you think that with the purchase of a high energy-efficiency vehicle (electric, hybrid) your company might: (a) gain more than it could lose (why?); (b) lose more than it could gain (why?); or (c) be unable to distinguish between expected gains (energy savings) and expected losses (why?)".
In addition to the observed higher cost of purchase, participants referred to other, less tangible costs that nonetheless affected their final decisions. These were cited mostly by accommodation owners and private companies with car fleets. The maintenance cost of more energy-efficient appliances was perceived to be higher due to the use of new technologies whose repair and maintenance were perceived as more expensive, particularly in HVAC. Given the limited range and long charging times of electric cars, concern was expressed that purchasing them would generate costs in the form of lost market opportunities and delays in delivering merchandise that would hurt the reputation of the company. The limited number of charging points would also require a reorganisation of car parking areas during non-office hours: employees would have to leave the cars they use in the company car park to recharge them. This would result in additional commuting time, which could reduce both the productivity and the well-being of workers. The limited supply of electric vehicles was another limiting factor for companies: few models per brand are equipped with electric engines and fewer match their needs. Meanwhile, they have to use conventional vehicles with higher energy-related running costs.

Discussion
There are several factors that help explain the EE gap in the household, accommodation and private sectors in Spain regarding appliances, HVAC and vehicles. The results reported here show that EE is a secondary rather than a key primary attribute in purchasing decisions. Informational failures still seem to exist regarding EE 25 years after the implementation of labels. Energy labels seem unable to convey to consumers, in understandable units of measurement, how much they would save if they bought energy-efficient products.
This is particularly the case for goods that consume electricity: appliances, HVAC and electric vehicles. Cognitive bias as a form of bounded rationality was also highlighted in the analysis. Consumers are frequently unable to process the information required to trade off alternatives in real decision-making processes (Blasch et al. 2019; Kahneman 1994). An inability to calculate energy costs or the energy saved by buying a more energy-efficient good was revealed in all three sectors. In an exercise during the focus group and in-depth interviews, participants were shown official energy labels and asked about their knowledge and understanding of them, as well as how they would modify them so that they helped in making informed purchases.
[Table note: Results based on a role-play game during the household focus group and on the analysis of a direct question on the costs and benefits of energy-efficient purchases in the in-depth interviews.]
In all three sectors, consumers recognised that the colour-based label was an appropriate signal of energy performance which provided more information than technical labels (currently used for cars in Spain). For vehicles, participants felt that using the voluntary colour-based label would also harmonise the use of labels in the car industry, since it is similar to that used for car parts such as tyres. However, the unit of measurement in this case is also difficult for non-experts to understand. Ways to improve labels by changing their design and contents to make them more understandable, and thus encourage energy-efficient purchases, were discussed. The information provided on energy consumption in kilowatt-hours per year for appliances and HVAC was not fully clear to non-experts. The idea of indicating running costs in monetary terms was discussed as a way to overcome this knowledge gap. Several challenges in regard to providing monetary estimates were raised because of the uncertainty related to both consumption per annum and electricity prices. A monetary running cost depends on the frequency of use, which may differ from one consumer to another, and on the market price of energy. Households suggested that information could be presented for an average number of uses and for an average electricity price. For car buyers, knowing how much on average could be saved with the most efficient vehicle compared to a less efficient one would be useful. The interviewees suggested that this information could be reported on the basis of the average distance travelled in kilometres per year, since the payback time of an investment in an energy-efficient vehicle depends significantly on its use and the lifetime considered. This hypothesis of providing additional monetary information on the label has been tested experimentally and hypothetically in the literature and has been shown to have a potentially positive effect on energy-related purchases for appliances (Allcott and Sweeney 2016; Newell and Siikamäki 2014; Stadelmann and Schubert 2018) and for vehicles (Allcott and Knittel 2019). However, the lifetime used to report the running costs of energy-related products is a critical element that seems to influence the effectiveness of this measure. Participants in the focus group and in-depth interviews differed concerning the choice of lifetimes for reporting running costs. In the household sector, there was support for reporting the costs per use of washing machines (i.e. on an hourly basis), whereas in the service sector, reporting the running costs of vehicles per annum was preferred. Min et al.
(2014) show that providing information for long periods (the lifetime of light bulbs) would have more impact on purchasing decisions than giving annual information. Further research seems to be required to reach a consensus on how monetary information should be shown on EE labels.
Even when energy savings are expressed in monetary terms, consumers apply a number of weighting factors to potential savings. The principal-agent problem seems to be the main market failure influencing EE choices and is found in all three sectors studied here (households, accommodation establishments and private transport). It arises when one party makes a decision but another party bears the costs or enjoys the benefits of that decision (Gillingham and Palmer 2014; Phillips 2012). This intrinsically relates to end-user behaviour. The split incentive is a particular principal-agent problem where the incentives of the parties differ. This is particularly the case in relationships between landlords (agents) and tenants (principals) (Bird and Hernández 2012; Gillingham and Palmer 2014) and between car fleet users (agents) and buyers (principals) (Graus and Worrell 2008), whose incentives for investing in EE differ. The results reported here show that accommodation establishments and companies with car fleets are reluctant to invest in EE because they cannot fully control end-user behaviour. This issue was also raised by households regarding the use of appliances, particularly by households with children, where there may be difficulties in controlling the use of appliances. Consumers who face a principal-agent problem also relate purchases to their budget constraints: they prefer to spend their budget on paying electricity bills (and anticipated future increases in energy prices) rather than incur the additional cost of buying energy-efficient goods, since they suspect that consumption will not actually decrease due to end-user behaviour.
Uncertainty is a special circumstance that could make consumers more likely to use heuristics and underestimate the importance of energy savings. Under uncertainty, the rationality of decision-making leads consumers to think in terms of expected payoffs, and they are likely to derive utility from gains and losses relative to a reference point rather than in absolute terms (Kahneman 1994; Kahneman and Tversky 1979). The lack of experience with energy-efficient vehicles such as hybrids or electric cars and uncertainty as to future energy prices were mentioned frequently in the interviews as reasons for being unable to assess the profitability of investing in such vehicles. As shown by Greene (2011), uncertainty about energy prices combined with loss aversion on the part of buyers results in decision-making bias. Estimating the profitability of investing in EE vehicles is a difficult and time-consuming task for companies. When it is furthermore subject to a certain mistrust of consumption information, it becomes difficult to draw up an ex ante balance of the extra cost of the purchase and the future flow of energy costs. Using heuristics may thus be less costly and less time-consuming.

Conclusion
Reducing the energy efficiency gap is a critical step towards achieving the goals of cutting energy consumption and reducing CO2 emissions. This paper explores the factors that motivate consumers to purchase energy-efficient goods across different sectors (households, the accommodation sector and private services companies) in Spain.
Based on the purchasing decision model of Allcott and Greenstone (2012), it analyses how highly consumers rate energy savings, how they weight them and how unobserved costs and benefits influence the decision whether to purchase energy-related goods. A qualitative approach based on focus groups and in-depth interviews is used to address those questions. This method is particularly suitable for exploring consumers' perceptions, identifying important factors which may not show up in deductive quantitative inquiries alone, and highlighting concepts that can be developed and tested using quantitative methods in larger samples.
The results indicate that the difference in energy intensity between goods is not the most significant attribute in the purchasing decision for any type of agent. There are several barriers and unobserved costs, especially in the case of electric vehicles. Energy-efficient purchases are also affected by a number of unobserved benefits related to the environment and human health. These are potential arguments for promoting energy-efficient purchases. Few differences are observed across agents (households, accommodation owners and private companies with car fleets) and products (appliances, heating, ventilation and air-conditioning, and cars). Bounded rationality and the principal-agent problem arising through end-user behaviour are the most relevant obstacles (weights) to the purchase of energy-efficient products. Consumers generally do not understand the unit of measurement of energy consumption shown on energy labels. Providing additional information in monetary terms is technically challenging but, according to participants, would reduce this knowledge gap. However, reducing informational failure by adding monetary information to labels is likely to clash with end-user behaviour, which often cannot be controlled at a reasonable cost: this renders consumers (households or companies) unwilling to pay the higher purchase price for energy-efficient products. In addition to informational instruments for effective labels, other instruments are needed to reduce the weights assigned by consumers to energy savings: instruments that help overcome the issues of bounded rationality and end-user behaviour.

Acknowledgements
This work was undertaken as a part of the CONSumer Energy Efficiency Decision making (CONSEED) project, an EU-funded H2020 research project under grant agreement number 723741. This research is also supported by the Spanish State Research Agency through the María de Maeztu Excellence Unit accreditation 2018-2022 (Ref. MDM-2017).

Funding
Amaia de Ayala would like to thank the financial support of Fundación Ramón Areces under the project entitled "La toma de decisiones de los hogares en eficiencia energética: determinantes y diseño de políticas".

Compliance with ethical standards
Conflict of interest: The authors declare that they have no conflict of interest.

Appendix 1. Sample characteristics
[Tables 4 and 5, giving details of the accommodation and car fleet samples, are not reproduced here.]

Appendix 2. Discussion guidelines
The in-depth interviews and the focus group used the following general guidelines. Specific questions for each sector and product category were drawn up for each general question (available on request).
- Context of the purchasing decision: Who is responsible for making the decision to purchase products? How is the purchasing process organised? What steps are taken, how much time is invested? Where do you usually buy the product?
- Do you think the information displayed on labels could be improved?
- How could labelling be improved? What should be removed? What should be added?
What should be changed?
- What do you think about providing energy consumption data in monetary units (either to supplement or to replace the physical unit of kWh/year)? Would you appreciate this? Do you think it is useful?
Improved Arabic Alphabet Characters Classification Using Convolutional Neural Networks (CNN)

Handwritten character recognition is a challenging research topic. Many works have been presented to recognize the letters of different languages, but the availability of Arabic handwritten character databases is limited. Motivated by this topic of research, we propose a convolutional neural network for the classification of Arabic handwritten letters. Seven optimization algorithms are also evaluated, and the best one is reported. Given the few available Arabic handwritten datasets, various data augmentation techniques are implemented to improve the robustness of the convolutional neural network model. The proposed model is improved by using the dropout regularization method to avoid data overfitting problems. Moreover, suitable choices of optimization algorithms and data augmentation approaches are presented to achieve good performance. The model has been trained on two Arabic handwritten character datasets, AHCD and Hijja. The proposed algorithm achieved high recognition accuracies of 98.48% and 91.24% on AHCD and Hijja, respectively, outperforming other state-of-the-art models.

Introduction
Approximately a quarter of a billion people around the world speak and write the Arabic language [1]. There are many historical books and documents, written in the Arabic language, that represent a crucial data set for most Arab countries [1,2]. Recently, the area of Arabic handwritten character recognition (AHCR) has received increased research attention [3-5]. It is a challenging topic of computer vision and pattern recognition [1]. This is due to the following: (i) the differences between handwriting patterns [3]; (iv) as shown in Figure 1, in the Arabic language the shape of each handwritten character depends on its position in the word. For example, in the word "أمراء" the character "Alif" is written in two different forms, "أ" and "ا"; in the Arabic language, each character has between two and four shapes. Table 1 shows the different shapes of the twenty-eight Arabic letters.
Most researchers improved the CNN architecture to achieve good handwritten character recognition performance [6,13]. However, a neural network with excellent performance usually requires good tuning of the CNN hyperparameters and a good choice of the applied optimization algorithms [14-16]. Also, a large amount of training data [17,18] is required to achieve outstanding performance.
The main contributions of this research can be summarized as follows: (i) suggesting a CNN model for recognizing Arabic handwritten characters; (ii) tuning different hyperparameters to improve the model performance; (iii) applying different optimization algorithms and reporting the effectiveness of the best ones; (iv) presenting different data augmentation techniques and reporting the influence of each method on the improvement of Arabic handwritten character recognition; (v) mixing two different Arabic handwritten character datasets for shape variety and testing the impact of the presented data augmentation approaches on the mixed dataset.
The rest of this paper is organized as follows. In Section 2, we review related work on Arabic handwritten character classification. In Sections 3 and 4, we describe the convolutional neural network architecture and the tuning of the model hyperparameters. In Section 5, we give a detailed description of the various optimization algorithms used.
In Section 6, we describe the different data augmentation techniques chosen in this study. In Section 7, we provide an overview of the experimental results showing the CNN's distinguished performance. Section 8 concludes and outlines possible future research directions.

Related Work
In recent years, many studies have addressed the classification and recognition of letters, including Arabic handwritten characters. On the other hand, there is a smaller number of proposed approaches for recognizing individual characters in the Arabic language. As a result, Arabic handwritten character recognition is less common compared to English, French, Chinese, Devanagari, Hangul, Malayalam, etc. Impressive results have been achieved in the classification of handwritten characters from different languages using deep learning models, and in particular CNNs.
El-Sawy et al. [6] gathered their own Arabic Handwritten Character dataset (AHCD) from 60 participants. AHCD consists of 16.800 characters. They achieved a classification accuracy of 88% by using a CNN model consisting of 2 convolutional layers. To improve the CNN performance, regularization and different optimization techniques were implemented in the model, and the testing accuracy was improved to 94.93%. Altwaijry and Al-Turaiki [13] presented a new Arabic handwritten letters dataset (named "Hijja"). It comprises 47.434 characters written by 591 participants. Their proposed CNN model was able to achieve 88% and 97% testing accuracy using the Hijja and AHCD datasets, respectively. Younis [19] designed a CNN model to recognize Arabic handwritten characters. The CNN consisted of three convolutional layers followed by one final fully connected layer. The model achieved an accuracy of 94.7% on the AHCD database and 94.8% on AIA9K (an Arabic alphabet dataset). Latif et al. [20] designed a CNN to recognize a mix of handwriting from multiple languages: Persian, Devanagari, Eastern Arabic, Urdu, and Western Arabic. The input image is of size (28 × 28) pixels, followed by two convolutional layers, and then a max-pooling operation is applied to both convolution layers. The overall accuracy on the combined multilanguage database was 99.26%, and the average accuracy is around 99% for each individual language. Alrobah and Albahl [21] analyzed the Hijja dataset and found irregularities, such as distorted letters and blurred or unclear symbols. They used a CNN model to extract the important features and an SVM model for data classification, achieving a testing accuracy of 96.3%. Mudhsh et al. [22] designed a VGG-style architecture for recognizing Arabic handwritten characters and digits. The model consists of 13 convolutional layers, 2 max-pooling layers, and 3 fully connected layers; data augmentation and dropout were used to improve performance. Boufenar et al. [23] used the popular CNN architecture AlexNet, which consists of 5 convolutional layers, 3 max-pooling layers, and 3 fully connected layers. Experiments were conducted on two different databases, OIHACDB-40 and AHCD. Based on good tuning of the CNN hyperparameters and the use of dropout and minibatch techniques, CNN accuracies of 100% and 99.98% were achieved on OIHACDB-40 and AHCD, respectively. Mustapha et al. [24] proposed a Conditional Deep Convolutional Generative Adversarial Network (CDCGAN) for the guided generation of isolated handwritten Arabic characters. The CDCGAN was trained on the AHCD dataset. They achieved a 10% performance gap between real and generated handwritten Arabic characters.
Table 2 summarizes the literature reviewed for recognizing Arabic handwriting characters using CNN models. From the previous literature, we notice that most CNN architectures have been trained using adult Arabic handwritten letters ("AHCD"). In addition, we observe that most researchers try to improve performance through good tuning of the CNN model hyperparameters.

The Proposed Arabic Handwritten Characters Recognition System
As shown in Figure 2, the model proposed in this study is composed of three principal components: the proposed CNN architecture, optimization algorithms, and data augmentation techniques. In this paper, the proposed CNN model contains four convolution layers, two max-pooling operations, and an ANN model with three fully connected hidden layers used for classification. To avoid overfitting problems and improve the model performance, various optimization techniques were used, such as dropout, minibatch, choice of the activation function, etc. Figure 3 describes the proposed CNN model. Also, in this work, the recognition performance for Arabic handwritten letters was improved through the good choice of the optimization algorithm and by using different data augmentation techniques (geometric transformations, feature space augmentation, noise injection, and mixing images).

Convolution Neural Network Architecture
A CNN model [25-34] is a series of convolution layers followed by fully connected layers. Convolution layers allow the extraction of important features from the input data; fully connected layers are used for the classification of the data. The CNN input is the image to be classified; the output corresponds to the predicted class of the Arabic handwritten character.

Input Data. The input data is an image I of size (m × m × s), where (m × m) defines the width and the height of the image and s denotes the number of channels. The value of s is 1 for a grayscale image and 3 for an RGB color image.

Convolution Layer. The convolution layer consists of a convolution operation followed by a pooling operation.

Convolution Operation. The basic concept of the classical convolution operation between an input image I of dimension (m × m) and a filter F of size (n × n) is defined as follows (see Figure 4): $C = I \otimes F$, where $\otimes$ denotes the convolution operation and C is the convolution map of size (a × a), with

$a = \frac{m - n + 2p}{s_L} + 1.$

Here $s_L$ is the stride and denotes the number of pixels by which F slides over I, and p is the padding; often it is necessary to add a border of zeros around I to preserve complete image information. Figure 4 is an example of the convolution operation between an input image of dimension (8 × 8) and a filter F of size (3 × 3); here the convolution map C is of size (6 × 6) with a stride $s_L = 1$ and a padding $p = 0$. Generally, a nonlinear activation function is applied on the convolution map C. The commonly used activation functions are Sigmoid [34-36], Hyperbolic Tangent "Tanh" [35,37], and Rectified Linear Unit "ReLU" [37,38]:

$C_a = f(C),$

where $C_a$ is the convolution map after applying the nonlinear activation function f. Figure 5 shows the $C_a$ map when the ReLU activation function is applied on C.
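As a quick illustration of the output-size formula and the convolution-plus-activation step above, here is a minimal Python sketch (my own example, not the paper's code; it uses a naive loop and, as in CNN practice, does not flip the kernel):

```python
import numpy as np

def conv_output_size(m, n, p=0, s=1):
    """Size of the convolution map: a = (m - n + 2p)/s + 1."""
    return (m - n + 2 * p) // s + 1

def conv2d(image, kernel, stride=1):
    """Naive valid convolution (no padding), as in the Figure 4 example."""
    m, n = image.shape[0], kernel.shape[0]
    a = conv_output_size(m, n, p=0, s=stride)
    out = np.zeros((a, a))
    for i in range(a):
        for j in range(a):
            patch = image[i * stride:i * stride + n, j * stride:j * stride + n]
            out[i, j] = np.sum(patch * kernel)
    return out

relu = lambda x: np.maximum(x, 0)   # ReLU applied to the convolution map

image = np.random.randn(8, 8)       # (8 x 8) input, as in Figure 4
kernel = np.random.randn(3, 3)      # (3 x 3) filter F
C = conv2d(image, kernel)           # convolution map C, size (6 x 6)
C_a = relu(C)                       # C_a: map after the activation
print(C.shape)                      # (6, 6)
```

With m = 8, n = 3, p = 0 and s_L = 1, the helper returns a = 6, matching the (6 × 6) map described in the text.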
Pooling Operation. The pooling operation is used to reduce the dimension of $C_a$, thus reducing the computational complexity of the network. During the pooling operation, a kernel K of size $(s_p \times s_p)$ slides over $C_a$, where $s_p$ denotes the number of patches by which K slides over $C_a$; in our analysis $s_p$ is set to 2. The pooling operation is expressed as $P = \mathrm{pool}(C_a)$, where P is the pooling map and pool is the pooling operation. The commonly used pooling operations are average-pooling, max-pooling, and min-pooling. Figure 6 describes the concept of the average-pooling and max-pooling operations using a kernel of size (2 × 2) and a stride of 2.

Concatenation Operation. The concatenation operation maps the set of convoluted images into a vector called the concatenation vector Y: $Y = [P_1^c, P_2^c, \ldots, P_n^c]$, where $P_i^c$ is the output of the $i$th convolution layer and n denotes the number of filters applied to the convoluted images $P_{i-1}^{c-1}$.

Fully Connected Layer. The CNN classification operation is performed through the fully connected layers [39]. Their input is the concatenation vector Y; the predicted class y is the output of the CNN classifier. The classification operation is performed through a series of t fully connected hidden layers, each of which is a parallel collection of artificial neurons. Like synapses in the biological brain, the artificial neurons are connected through weights W. The output of the $i$th fully connected hidden layer is $O_i = f(H_i)$, where the weighted sum vector is $H_i = W_i O_{i-1} + B_i$. Here, f is a nonlinear activation function (sigmoid, Tanh, ReLU, etc.) and the bias value $B_i$ defines the activation level of the artificial neurons.

CNN Learning Process
A trained CNN is a system capable of determining the exact class of a given input. Training is achieved through an update of the layers' parameters (filters, weights, and biases) based on the error between the CNN predicted class and the class label. The CNN learning process is an iterative process based on the feedforward propagation and backpropagation operations.

Feedforward Propagation. For the CNN model, the feedforward equations can be derived from equations (1)-(6). The Softmax activation function [40,41] is applied in the final layer to generate the predicted class of the input image I. For a multiclass model, the Softmax is expressed as $y_i = e^{h_i} / \sum_{j=1}^{c} e^{h_j}$, where c denotes the number of classes, $y_i$ is the $i$th coordinate of the output vector y, and the artificial neuron output is $h_i = \sum_{j=1}^{n} h_j w_{ij}$.

Backpropagation. To update the CNN parameters and perform the learning process, a backpropagation optimization algorithm is used to minimize a selected cost function E. In this analysis, the cross-entropy (CE) cost function [40] is used: $E = -\sum_i \bar{y}_i \log(y_i)$, where $\bar{y}_i$ is the desired output (data label). The most used optimization algorithm for solving classification problems is gradient descent (GD). Various optimizers for the GD algorithm, such as momentum, AdaGrad, RMSprop, Adam, AdaMax, and Nadam, were used to improve the CNN performance.

Gradient Descent [40,42]. GD is the simplest form of gradient descent optimization algorithm. It is easy to implement and gives significant classification accuracy. The general update equation of the CNN parameters using the GD algorithm is

$\varphi(t+1) = \varphi(t) - \alpha \frac{\partial E}{\partial \varphi(t)},$

where φ represents the parameters being updated (the filters F, the weights W, and the biases B), $\partial E / \partial \varphi$ is the gradient with respect to the parameter φ, and α is the model learning rate. A too-large value of α may lead to divergence of the GD algorithm and may cause oscillation of the model performance; a too-small α stops the learning process.
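The plain GD update rule translates directly into code. Here is a minimal NumPy sketch (illustrative only; `params` and `grads` are hypothetical placeholders for the CNN's filters, weights, and biases and their backpropagated gradients):

```python
import numpy as np

def gd_step(params, grads, lr=0.001):
    """One vanilla GD update: phi(t+1) = phi(t) - lr * dE/dphi."""
    return {name: p - lr * grads[name] for name, p in params.items()}

# Hypothetical example: a single weight matrix and its gradient.
params = {"W1": np.random.randn(64, 32)}
grads = {"W1": np.random.randn(64, 32)}   # would come from backpropagation
params = gd_step(params, grads, lr=0.001)
```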
Gradient Descent with Momentum [43]. The momentum hyperparameter m defines the velocity by which the learning rate α is increased when the model approaches the minimum of the cost function E. The update equations using the momentum GD algorithm are

$v(t) = m\, v(t-1) + \alpha \frac{\partial E}{\partial \varphi(t)}, \qquad \varphi(t+1) = \varphi(t) - v(t),$

where $v(t)$ is the moment gained at the $t$th iteration.

AdaGrad [44]. In this algorithm, the learning rate is a function of the gradient $\partial E / \partial \varphi$. It is defined as follows:

$\alpha(t) = \frac{\alpha}{\sqrt{G(t)} + \epsilon}, \qquad G(t) = \sum_{\tau=1}^{t} \left(\frac{\partial E}{\partial \varphi(\tau)}\right)^2,$

where ϵ is a small smoothing value used to avoid division by 0 and G(t) is the sum of the squares of the gradients. With a small magnitude of $\partial E / \partial \varphi$, the value of α increases; if $\partial E / \partial \varphi$ is very large, the value of α stays nearly constant. The AdaGrad optimization algorithm changes the learning rate for each parameter at a given time t, taking the previous gradient updates into account. The parameter update equation using AdaGrad is

$\varphi(t+1) = \varphi(t) - \frac{\alpha}{\sqrt{G(t)} + \epsilon} \frac{\partial E}{\partial \varphi(t)}.$

AdaDelta [45]. The issue with AdaGrad is that after many iterations the learning rate becomes very small, which leads to slow convergence. To fix this problem, AdaDelta takes an exponentially decaying average of past squared gradients:

$E[G^2(t)] = \gamma\, E[G^2(t-1)] + (1 - \gamma)\, G^2(t),$

where $E[G^2(t)]$ is the decaying average over past squared gradients and γ is usually set to around 0.9.

RMSprop [45,46]. In practice, RMSprop is identical to AdaDelta's first update vector derived above:

$\varphi(t+1) = \varphi(t) - \frac{\alpha}{\sqrt{E[G^2(t)]} + \epsilon} \frac{\partial E}{\partial \varphi(t)}.$

Adam. This gradient descent optimizer computes the learning rate α based on two vectors:

$r(t) = \beta_1 r(t-1) + (1 - \beta_1)\frac{\partial E}{\partial \varphi(t)}, \qquad v(t) = \beta_2 v(t-1) + (1 - \beta_2)\left(\frac{\partial E}{\partial \varphi(t)}\right)^2,$

where $r(t)$ and $v(t)$ are the 1st- and 2nd-order moment vectors, $\beta_1$ and $\beta_2$ are the decay rates, and $r(t-1)$ and $v(t-1)$ represent the mean and the variance of the previous gradients. When $r(t)$ and $v(t)$ are very small, a large step size is needed for the parameter update. To avoid this issue, a bias correction is applied to $r(t)$ and $v(t)$:

$\hat{r}(t) = \frac{r(t)}{1 - \beta_1^t}, \qquad \hat{v}(t) = \frac{v(t)}{1 - \beta_2^t},$

where $\beta_1^t$ is $\beta_1$ to the power t and $\beta_2^t$ is $\beta_2$ to the power t. The Adam update equation is

$\varphi(t+1) = \varphi(t) - \frac{\alpha\, \hat{r}(t)}{\sqrt{\hat{v}(t)} + \epsilon}.$

AdaMax [45,47]. The factor $v(t)$ in the Adam algorithm adjusts the gradient inversely proportionally to the ℓ2 norm of the previous gradients (via $v(t-1)$) and the current gradient $|\partial E / \partial \varphi(t)|$. The generalization of this update to the ℓp norm is

$v(t) = \beta_2^p\, v(t-1) + (1 - \beta_2^p)\left|\frac{\partial E}{\partial \varphi(t)}\right|^p.$

To avoid numerical instability, the ℓ1 and ℓ2 norms are most common in practice; however, ℓ∞ also shows stable behavior. As a result, the authors propose AdaMax and demonstrate that $v(t)$ with ℓ∞ converges to the more stable value

$v(t) = \max\!\left(\beta_2\, v(t-1),\ \left|\frac{\partial E}{\partial \varphi(t)}\right|\right).$

Nadam [43]. Nadam is a combination of Adam and NAG (Nesterov accelerated gradient). The parameter update equation using NAG is $\varphi(t+1) = \varphi(t) - v(t)$ with $v(t) = m\, v(t-1) + \alpha \frac{\partial E}{\partial \varphi}\big(\varphi(t) - m\, v(t-1)\big)$, and the update equation using Nadam is

$\varphi(t+1) = \varphi(t) - \frac{\alpha}{\sqrt{\hat{v}(t)} + \epsilon}\left(\beta_1 \hat{r}(t) + \frac{1 - \beta_1}{1 - \beta_1^t} \frac{\partial E}{\partial \varphi(t)}\right).$
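The moment estimates and bias correction above translate into a few lines of code. Below is a compact, self-contained NumPy sketch of the Adam update (an illustration of the standard algorithm, not code from the paper; the default hyperparameters follow common practice):

```python
import numpy as np

def adam_step(phi, grad, state, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for parameter vector phi given gradient dE/dphi."""
    state["t"] += 1
    t = state["t"]
    # First- and second-order moment estimates r(t) and v(t).
    state["r"] = beta1 * state["r"] + (1 - beta1) * grad
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad**2
    # Bias-corrected moments r_hat(t) and v_hat(t).
    r_hat = state["r"] / (1 - beta1**t)
    v_hat = state["v"] / (1 - beta2**t)
    return phi - lr * r_hat / (np.sqrt(v_hat) + eps)

phi = np.zeros(10)                                   # hypothetical parameters
state = {"t": 0, "r": np.zeros(10), "v": np.zeros(10)}
grad = np.random.randn(10)                           # from backpropagation
phi = adam_step(phi, grad, state)
```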
Data Augmentation Techniques
Deep convolutional neural networks are heavily reliant on big data to achieve excellent performance and avoid the overfitting problem. To solve the problem of insufficient data for Arabic handwritten characters, we present some basic data augmentation techniques that enhance the size and quality of the training datasets. The image augmentation approaches used in this study include geometric transformations, feature space augmentation, noise injection, and mixing images (a code sketch of typical settings is given at the end of this section). Data augmentation based on geometric transformations and feature space augmentation [17,48] is often related to the application of rotation, flipping, shifting, and zooming.

Rotation. The input data is rotated right or left on an axis between 1° and 359°. The rotation degree parameter has a significant impact on the safety of the dataset. For example, on digit identification tasks like MNIST, slight rotations of 1° to 20° or −1° to −20° can be useful, but when the rotation degree increases, the CNN network can no longer accurately distinguish between some digits.

Flipping. The input image is flipped horizontally or vertically. This augmentation is one of the simplest to implement and has proven useful on datasets such as ImageNet and CIFAR-10.

Shifting. The input image is shifted right, left, up, or down. This transformation is a highly effective adjustment for preventing positional bias. Figure 7 shows an example of the shifting data augmentation technique using Arabic alphabet characters.

Zooming. The input image is zoomed, either by adding some pixels around the image or by applying random zooms to it. The amount of zooming influences the quality of the image; for example, with a lot of zoom, some image pixels can be lost.

Noise Injection. As can be seen in Arabic handwritten characters, natural noise is present in the images. Noise makes recognition more difficult, and for this reason it is usually reduced by image preprocessing techniques. The aim of noise reduction is to achieve high classification performance, but it causes alteration of the character shapes, and the main datasets in this research area are provided as denoised images. The question we address here is how the method can be made robust to any noise. Adding noise [48,49] to a convolutional neural network during training helps the model learn more robust features, resulting in better performance and faster learning. Several types of noise can be added when recognizing images, such as the following:
(i) Gaussian noise: injecting a matrix of random values drawn from a Gaussian distribution
(ii) Salt-and-pepper noise: randomly changing a certain number of pixels to completely white or completely black
(iii) Speckle noise: adding only black pixels ("pepper") or only white pixels ("salt")

Adding noise to the input data is the most commonly used approach, but during training random noise can also be added to other parts of the CNN model. Some examples include the following:
(i) Adding noise to the outputs of each layer
(ii) Adding noise to the gradients used to update the model parameters
(iii) Adding noise to the target variables
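As the forward reference above indicates, here is a minimal Keras sketch of how such geometric and noise-based augmentations are typically configured (my own illustration, not the authors' code; the 10° rotation and one-pixel shift follow the settings reported later in the results, and the noise level is an assumed value):

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Geometric augmentation: rotate by up to 10 degrees and shift by
# one pixel (1/32 of a 32 x 32 image), mirroring the reported settings.
datagen = ImageDataGenerator(
    rotation_range=10,
    width_shift_range=1 / 32,
    height_shift_range=1 / 32,
)

def add_gaussian_noise(images, std=0.05):
    """Noise injection on the input data (Gaussian noise)."""
    noisy = images + np.random.normal(0.0, std, images.shape)
    return np.clip(noisy, 0.0, 1.0)

# x_train: array of shape (num_samples, 32, 32, 1), scaled to [0, 1].
x_train = np.random.rand(16, 32, 32, 1)           # placeholder data
y_train = np.random.randint(0, 28, size=(16,))    # placeholder labels
x_noisy = add_gaussian_noise(x_train)
batches = datagen.flow(x_noisy, y_train, batch_size=16)
```

Note that flipping is deliberately left out of this sketch: as discussed in the results, flips and large rotations can turn one Arabic character into another.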
Mixing Image Databases. In this study, we also augment the training data by mixing two different Arabic handwritten character datasets, AHCD and Hijja. AHCD is a clean database, whereas Hijja is a dataset with very low-resolution images comprising many distorted alphabet images. We then evaluate the influence of the data augmentation techniques mentioned above (geometric transformations, feature space augmentation, and noise injection) on the recognition performance of the new mixed dataset.

Datasets. In this study, two datasets of Arabic handwritten characters were used: the Arabic handwritten characters dataset "AHCD" and the Hijja dataset. AHCD [6] comprises 16.800 handwritten characters of size (32 × 32 × 1) pixels. It was written by 60 participants between the ages of 19 and 40 years, most of them right-handed. Each participant wrote the Arabic alphabet from "alef" to "yeh" 10 times. The dataset has 28 classes. It is divided into a training set of 13.440 characters and a testing set of 3.360 characters.
The Hijja dataset [13] consists of 47.434 Arabic characters of size (32 × 32 × 1) pixels. It was written by 591 school children ranging in age from 7 to 12 years. Collecting data from children is a very hard task. Malformed characters are characteristic of children's handwriting; therefore the dataset comprises repeated letters, missing letters, and many distorted or unclear characters. The dataset has 29 classes. It is divided into a training set of 37.933 characters and a testing set of 9.501 characters (80% for training and 20% for testing). Figure 8 shows a sample of the AHCD and Hijja Arabic handwritten letters datasets.

Experimental Environment and Performance Evaluation. In this study the implementation and evaluation of the CNN model are carried out in the Keras deep learning environment with a TensorFlow backend on Google Colab using a GPU accelerator. We evaluate the performance of our proposed model via the following measures (a short computation sketch follows the definitions below).

Accuracy (A) measures how many correct predictions the model makes on the complete test dataset: $A = \frac{TP + TN}{TP + TN + FP + FN}$.

Recall (R) is the fraction of images that are correctly classified over the total number of images that belong to a class: $R = \frac{TP}{TP + FN}$.

Precision (P) is the fraction of images that are correctly classified over the total number of images classified as that class: $P = \frac{TP}{TP + FP}$.

The F1 measure is a combination of the Recall and Precision measures: $F1 = \frac{2 \cdot P \cdot R}{P + R}$.

Here, TP = true positives (the total number of images correctly labeled as belonging to a class x), FP = false positives (the total number of images incorrectly labeled as belonging to class x), FN = false negatives (the total number of images incorrectly labeled as not belonging to class x), and TN = true negatives (the total number of images correctly labeled as not belonging to class x).

We also draw the area under the ROC curve (AUC). An ROC curve (receiver operating characteristic curve) is a graph showing the performance of a classifier at all classification thresholds. This curve plots two parameters: (i) the true-positive rate and (ii) the false-positive rate. AUC stands for "area under the ROC curve": AUC measures the entire two-dimensional area underneath the ROC curve from (0,0) to (1,1).
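The measures defined above can be computed directly from the model's predictions. The following sketch (illustrative, using scikit-learn rather than any code from the paper) shows one way to do it:

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# y_true: ground-truth class indices; y_pred: predicted class indices.
y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 2, 2, 2, 1, 0]

accuracy = accuracy_score(y_true, y_pred)
# Weighted averaging aggregates the per-class precision/recall/F1,
# which matters for the 28/29-class character datasets.
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted"
)
print(f"A={accuracy:.4f} P={precision:.4f} R={recall:.4f} F1={f1:.4f}")
```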
Tuning of CNN Hyperparameters. The objective is to choose the best model that fits the AHCD and Hijja datasets well. Many trial-and-error runs of network configuration tuning were performed. The best performance was achieved when the CNN model was constructed of four convolution layers followed by three fully connected hidden layers. To reduce the overfitting problem, a dropout rate of 0.6 is added to the model between the dense layers, applied to the outputs of the prior layer that are fed to the subsequent layer. The optimized parameters used to improve the CNN performance were as follows: the optimizer algorithm is Adam, the loss function is cross-entropy, learning rate = 0.001, batch size = 16, and epochs = 40.
We compare our model to CNN-for-AHCD over both the Hijja dataset and the AHCD dataset. The code for CNN-for-AHCD is available online [31], which allows comparison of its performance over various datasets. On the Hijja dataset, which has 29 classes, our model achieved an average overall test set accuracy of 88.46%, precision of 87.98%, recall of 88.46%, and an F1 score of 88.47%, while CNN-for-AHCD achieved an average overall test set accuracy of 80%, precision of 80.79%, recall of 80.47%, and an F1 score of 80.4%. On the AHCD dataset, which has 28 classes, our model achieved an average overall test set accuracy of 96.66%, precision of 96.75%, recall of 96.67%, and an F1 score of 96.67%, while CNN-for-AHCD achieved an average overall test set accuracy of 93.84%, precision of 93.99%, recall of 93.84%, and an F1 score of 93.84%. The detailed metrics are reported per character in Table 3. We note that our model outperforms CNN-for-AHCD by a large margin on all metrics. Figure 9 shows the testing AUC results for the AHCD and Hijja datasets.

Optimizer Algorithms. The objective is to choose the optimization algorithms that give the best performance on AHCD and Hijja. In this context, we tested the influence of the algorithms described above (GD, momentum, AdaGrad, AdaDelta, RMSprop, Adam, AdaMax, and Nadam) on the classification of handwritten Arabic characters. By using the Nadam optimization algorithm, on the Hijja dataset our model achieved an average overall test set accuracy of 88.57%, precision of 87.86%, recall of 87.98%, and an F1 score of 87.95%. On the AHCD dataset, our model achieved an average overall test set accuracy of 96.73%, precision of 96.80%, recall of 96.73%, and an F1 score of 96.72%. The detailed results of the different optimization algorithms are given in Table 4.

Results of Data Augmentation Techniques. Generally, neural network performance is improved through good tuning of the model hyperparameters. Such improvement in CNN accuracy is linked to the availability of training data: the networks are heavily reliant on big data to avoid overfitting and perform well. Data augmentation is the solution to the problem of limited data. The image augmentation techniques used and discussed in this study include geometric transformations and feature space augmentation (rotation, shifting, flipping, and zooming), noise injection, and mixing images from two different datasets.
For the geometric transformations and feature space augmentation, the amount of each transformation must be chosen carefully: for example, if a handwritten MNIST digit is rotated by 180°, the network will not be able to accurately distinguish between the digits "6" and "9". Likewise, on the AHCD and Hijja datasets, if rotating or flipping techniques are used carelessly, the network will be unable to distinguish between some handwritten Arabic characters. For example, as shown in Figure 10, with a rotation of 180° the isolated character Daal (د) becomes the same as the isolated character Noon (ن). The detailed results of the rotation, shifting, flipping, and zooming data augmentation techniques are given in Table 5.
As shown in Table 5 and Figure 11, by using the rotation and shifting augmentation approaches, our model achieved testing accuracies of 98.48% and 91.24% on the AHCD and Hijja datasets, respectively. We achieved this accuracy by rotating the input image by 10° and shifting it by just one pixel.
Adding noise is a technique used to augment the training input data, and in most cases it also increases the robustness of the network. In this work we used three types of noise to augment our data: (i) Gaussian noise, (ii) salt-and-pepper noise, and (iii) speckle noise. The detailed results of the different types of noise injection are given in Table 6. As shown, adding different types of noise improves the model accuracy, which demonstrates the robustness of our proposed architecture. We achieved good results when adding noise to the outputs of each layer.
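A compact sketch of how the reported configuration could be assembled in Keras (four convolution layers, two max-pooling operations, three dense hidden layers, 0.6 dropout, Adam, and Gaussian noise on layer outputs) is shown below. This is a plausible reconstruction under stated assumptions: the paper does not give exact filter counts or dense-layer widths, so those values are hypothetical.

```python
from tensorflow.keras import layers, models

def build_model(num_classes=28, noise_std=0.05):
    """Sketch of a 4-conv CNN with dropout and per-layer Gaussian noise.
    Filter counts and dense widths are illustrative, not from the paper."""
    m = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", padding="same",
                      input_shape=(32, 32, 1)),
        layers.GaussianNoise(noise_std),      # noise on layer outputs
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.GaussianNoise(noise_std),      # active only during training
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.6),                  # dropout between dense layers
        layers.Dense(128, activation="relu"),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    # Adam with its default learning rate of 0.001; labels one-hot encoded.
    m.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
    return m

model = build_model(num_classes=28)  # use 29 for the Hijja dataset
```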
The idea proposed in this study is to augment the training data by mixing the two datasets AHCD and Hijja and then to apply the previously mentioned data augmentation methods on the new mixed dataset. Our purpose in using malformed handwritten characters, as provided by the Hijja dataset, is to improve the accuracy of our method on noisy data. The detailed results of the data augmentation techniques on the mixed database are given in Table 7. As shown, the model performance depends on the proportion of the Arabic handwriting "Hijja" database used. The children had trouble following the reference paper, which results in very low-resolution images comprising many unclear characters; therefore mixing in these data can reduce performance.

Conclusions and Possible Future Research Directions
In this paper, we proposed a convolution neural network (CNN) to recognize Arabic handwritten characters. We trained the model on two Arabic datasets, AHCD and Hijja. By good tuning of the network hyperparameters, we achieved accuracies of 96.73% and 88.57% on AHCD and Hijja, respectively. To improve the model performance, we implemented different optimization algorithms; for both databases, we achieved excellent performance by using the Nadam optimizer. To solve the problem of insufficient Arabic handwritten datasets, we applied different data augmentation techniques based on geometric transformations, feature space augmentation, noise injection, and the mixing of datasets. By using rotation and shifting techniques, we achieved good accuracies of 98.48% and 91.24% on AHCD and Hijja. To improve the robustness of the CNN model and increase the amount of training data, we added three types of noise (Gaussian, salt-and-pepper, and speckle). We also first augmented the database by mixing two Arabic handwritten character datasets and then tested the previously mentioned data augmentation techniques on the new mixed dataset, where the first database, "AHCD", comprises clear images with very good resolution, while the second database, "Hijja", has many distorted characters. Experiments show that the geometric transformations (rotation, shifting, and flipping), feature space augmentation, and noise injection always improve the network performance, but the proportion of the unclean "Hijja" database used harms the model accuracy.
An interesting future direction is the cleaning and processing of the Hijja dataset to eliminate the problem of low-resolution and unclear images, followed by applying the proposed CNN network and data augmentation techniques to the new mixed and cleaned database. In addition, we are interested in evaluating other augmentation approaches, such as adversarial training, neural style transfer, and generative adversarial networks, for the recognition of Arabic handwritten characters. We plan to incorporate our work into an application for children that teaches Arabic spelling.
Abbreviations: AHCR: Arabic handwritten characters recognition; DL: deep learning; CNNs: convolution neural networks; AHCD: Arabic handwritten character dataset; SVM: support vector machine; ADBase: Arabic digits database; HACDB: handwritten Arabic characters database; OIHACDB: offline handwritten Arabic character database; CDCGAN: conditional deep convolutional generative adversarial network; Tanh: hyperbolic tangent; ReLU: rectified linear unit; CE: cross-entropy; GD: gradient descent; NAG: Nesterov accelerated gradient; TP: true positive; FP: false positive; FN: false negative; TN: true negative; AUC: area under curve; ROC: receiver operating curve; ELU: exponential linear unit.

Symbols: I: image; m: width and height of the image; s: number of channels; F: filter; n: filter size; ⊗: convolution operation; C: convolution map; a: size of the convolution map; $s_L$: stride; p: padding; f: nonlinear activation function; $C_a$: convolution map after activation; E: cost function; $\bar{y}_i$: desired output; φ: parameter being updated (filters F, weights W, biases B); $\partial E / \partial \varphi$: gradient; α: model learning rate; m: momentum; v(t): moment gained at the $t$th iteration; ε: smoothing value; G(t): sum of the squares of the gradients; $E[G^2(t)]$: decaying average; r(t): moments vector; β: decay rate; r(t−1): mean of the previous gradient; v(t−1): variance of the previous gradient.

Data Availability
Previously reported AHCD data were used to support this study and are available at https://www.kaggle.com/mloey1/ahcd1. These prior studies (and datasets) are cited at relevant places within the text as [43].
Exponential two step approach for Time Domain based Software Process Control

A Software Reliability Growth Model (SRGM) is a mathematical model of how software reliability improves as faults are detected and repaired. In this paper we propose a control mechanism based on the cumulative quantity between observations of time domain failure data, using the mean value function of the Goel-Okumoto model, which is based on a Non-Homogeneous Poisson Process (NHPP). The model parameters are estimated by a two-step approach. The software reliability process can be monitored efficiently by using Statistical Process Control (SPC). Control charts are widely used for process monitoring. They assist the software development team in identifying failures and the actions to be taken during the software failure process and hence assure better software reliability.

INTRODUCTION
Software reliability is defined as the probability of failure-free software operation for a specified period of time in a specified environment (Musa et al., 1987; Lyu, 1996). Among all SRGMs developed so far, a large family of stochastic reliability models based on a non-homogeneous Poisson process, known as NHPP reliability models, has been widely used. Some of them depict exponential growth while others show S-shaped growth, depending on the nature of the growth phenomenon during testing. The success of the mathematical modeling approach to reliability evaluation depends heavily upon the quality of the failure data collected. Software reliability assessment is important for evaluating and predicting the reliability and performance of a software system, since reliability is the main attribute of software. To identify and eliminate human errors in the software development process and also to improve software reliability, Statistical Process Control concepts and methods are the best choice. They are used to monitor the performance of a software process over time in order to verify that the process remains in a state of statistical control. This helps in finding assignable causes and making long-term improvements in the software process. Software quality and reliability can be achieved by eliminating the causes or improving the software process or its operating procedures (Kimura et al., 1995).
The most popular technique for maintaining process control is control charting. The control chart is one of the seven tools for quality control. Software process control is used to secure the quality of the final product, which should conform to predefined standards. In any process, regardless of how carefully it is maintained, a certain amount of natural variability will always exist. A process is said to be statistically "in control" when it operates with only chance causes of variation. On the other hand, when assignable causes are present, we say that the process is statistically "out of control". Control charts can be classified into several categories according to several distinct criteria. Control charts should be capable of raising an alarm when a shift in the level of one or more parameters of the underlying distribution or a nonrandom behavior occurs. Normally, such a situation will be reflected in the control chart by points plotted outside the control limits or by the presence of specific patterns. The most common non-random patterns are cycles, trends, mixtures and stratification (Koutras et al., 2007). For a process to be in control, the control chart should not show any trend or nonrandom pattern. SPC is a powerful tool for optimizing the amount of information needed for use in making management decisions.
Statistical techniques provide an understanding of the business baselines, insights for process improvements, communication of the value and results of processes, and active and visible involvement. SPC provides real-time analysis to establish controllable process baselines; to learn, set, and dynamically improve process capabilities; and to focus on the business areas which need improvement. An early detection of software failures will improve software reliability. The selection of proper SPC charts is essential to effective statistical process control implementation and use. The SPC chart selection is based on the data, the situation and the need (MacGregor and Kourti, 1995). Many factors influence the process, resulting in variability. The causes of process variability can be broadly classified into two categories, viz., assignable causes and chance causes. The control limits can then be utilized to monitor the failure times of components. After each failure, the time can be plotted on the chart. If the plotted point falls between the calculated control limits, it indicates that the process is in a state of statistical control and no action is warranted. If the point falls above the UCL, it indicates that the process average, or the failure occurrence rate, may have decreased, which results in an increase in the time between failures. This is an important indication of possible process improvement. If this happens, the management should look for the possible causes of this improvement, and if the causes are discovered then action should be taken to maintain them. If the plotted point falls below the LCL, it indicates that the process average, or the failure occurrence rate, may have increased, which results in a decrease in the failure time. This means that the process may have deteriorated, and thus actions should be taken to identify the causes and remove them. It can be noted here that the parameters a and b should normally be estimated with the data from the failure process. We followed a two step approach in estimating the parameters. The control limits for the chart are defined in such a manner that the process is considered to be out of control when the time to observe exactly one failure is less than the LCL or greater than the UCL. Our aim is to monitor the failure process and detect any change of the intensity parameter. When the process is normal, there is a chance for this to happen, and it is commonly known as a false alarm. The traditional false alarm probability is set to be 0.27%, although any other false alarm probability can be used. The actual acceptable false alarm probability should in fact depend on the actual product or process (Gokhale and Trivedi, 1998). NHPP SRGM The Non-Homogeneous Poisson Process (NHPP) based software reliability growth models (SRGMs) have proved to be quite successful in practical software reliability engineering (Musa et al., 1987). The main issue in the NHPP model is to determine an appropriate mean value function to denote the expected number of failures experienced up to a certain time point. The model parameters can be estimated by a two step approach (i.e., one parameter is estimated through Maximum Likelihood Estimation (MLE) and the other parameter is estimated through Least Squares Estimation (LSE)). Various NHPP SRGMs have been proposed upon various assumptions. Many of the SRGMs assume that each time a failure occurs, the fault that caused it can be immediately removed and no new faults are introduced, which is usually called perfect debugging.
Imperfect debugging models have proposed a relaxation of the above assumption (Ohba, 1984; Pham, 1993). If $t$ is a continuous random variable with probability density function $f(t; \theta_1, \ldots, \theta_k)$, where $\theta_1, \ldots, \theta_k$ are unknown constant parameters which need to be estimated, and cumulative distribution function $F(t; \theta_1, \ldots, \theta_k)$, let $a$ denote the expected number of faults that would be detected given infinite testing time in the case of finite failure NHPP models, and let $b$ represent the fault detection rate. In software reliability, the initial number of faults and the fault detection rate are always unknown. Then, the mean value function of the finite failure NHPP models can be written as $m(t) = aF(t)$, representing the expected number of software failures by time $t$. The failure intensity function in the case of the finite failure NHPP models is given by $\lambda(t) = m'(t) = a f(t)$, which, for the exponential class considered here, equals $b[a - m(t)]$ and is thus proportional to the residual fault content (Pham, 2006). Let $N(t)$ be the cumulative number of software failures by time $t$. A non-negative integer-valued stochastic process $\{N(t), t \ge 0\}$ is called a counting process if $N(t)$ represents the total number of occurrences of an event in the time interval $[0, t]$ and satisfies these two properties: (i) $N(t) \ge 0$ and is integer-valued; (ii) for $s < t$, $N(s) \le N(t)$, and $N(t) - N(s)$ is the number of occurrences in the interval $(s, t]$. One of the most important counting processes is the Poisson process. A counting process $N(t)$ is said to be a Poisson process with intensity $\lambda$ if: the initial condition is $N(0) = 0$; the failure process $N(t)$ has independent increments; and the number of failures in any time interval of length $s$ has a Poisson distribution with mean $\lambda s$, that is, $P\{N(t+s) - N(t) = n\} = e^{-\lambda s} (\lambda s)^n / n!$, $n = 0, 1, 2, \ldots$ A stochastic counting process $\{N(t), t \ge 0\}$, describing uncertainty about an infinite collection of random variables (one for each value of $t$), whose number of events by time $t$ follows a Poisson distribution with time-dependent mean $m(t)$, i.e., $P\{N(t) = n\} = [m(t)]^n e^{-m(t)} / n!$, is said to be an NHPP model. Model description: G-O Model One simple class of finite failure NHPP models is the Goel and Okumoto model (Goel and Okumoto, 1979), which has an exponential growth of the cumulative number of failures experienced. It is an NHPP-based SRGM, assuming that the failure intensity is proportional to the number of faults remaining in the software, and describes an exponential failure curve. It has two parameters: its mean value function is $m(t) = a(1 - e^{-bt})$, $a > 0$, $b > 0$, where $a$ is the expected total number of faults in the code and $b$ is the shape factor, defined as the rate at which the failure rate decreases. The cumulative distribution function of the model is $F(t) = 1 - e^{-bt}$. The main issue in the NHPP model is to determine an appropriate mean value function to denote the expected number of failures experienced up to a certain time point. The method of least squares (LSE) or maximum likelihood (MLE) has been suggested and widely used for the estimation of parameters of mathematical models (Kapur et al., 2008). Non-linear regression is a method of finding a non-linear model of the relationship between the dependent variable and a set of independent variables. Unlike traditional linear regression, which is restricted to estimating linear models, non-linear regression can estimate models with arbitrary relationships between independent and dependent variables. The model proposed in this paper is non-linear, and it is difficult to find a solution for non-linear models using the simple least squares method. Therefore, the model has been transformed from non-linear to linear. The least squares method is widely used to estimate the numerical values of the parameters to fit a function to a set of data. We will use the method in the context of a linear regression problem. It exists with several variations.
Its simpler version is called Ordinary Least Squares (OLS), and a more sophisticated version is called Weighted Least Squares (WLS) (Lewis-Beck, 2003). TWO STEP APPROACH FOR PARAMETER ESTIMATION MLE and LSE techniques are used to estimate the model parameters (Lyu, 1996; Musa et al., 1987). Sometimes, the likelihood equations are difficult to solve explicitly. In such cases, the parameters are estimated with numerical methods (e.g., the Newton-Raphson method). On the other hand, LSE, like MLE, can be applied for small sample sizes and may provide better estimates (Huang and Kuo, 2002). In our approach, one parameter is obtained from the likelihood equations and the remaining parameter is estimated through the LSE regression approach. ML (Maximum Likelihood) Parameter Estimation The idea behind maximum likelihood parameter estimation is to determine the parameters that maximize the probability of the sample data. The method of maximum likelihood is considered to be more robust and yields estimators with good statistical properties. In other words, MLE methods are versatile and apply to many models and to different types of data. Although the methodology for MLE is simple, the implementation is mathematically intense. Using today's computer power, however, mathematical complexity is not a big obstacle. If we conduct an experiment and obtain $N$ independent observations $t_1, t_2, \ldots, t_N$, the likelihood function (Pham, 2003) is $L(a, b) = e^{-m(t_N)} \prod_{i=1}^{N} \lambda(t_i)$; setting $\partial \ln L / \partial a = 0$ yields $a = N / (1 - e^{-b t_N})$, which is substituted in finding $a$. LS (Least Squares) parameter estimation LSE is a popular technique, widely used in many fields for function fitting and parameter estimation (Liu, 2011). The least squares method finds values of the parameters such that the sum of the squares of the differences between the fitting function and the experimental data is minimized. Least squares linear regression is a method for predicting the value of a dependent variable $Y$ based on the value of an independent variable $X$. The Least Squares Regression Line Linear regression finds the straight line, called the least squares regression line, that best represents the observations in a bivariate data set. Given a random sample of observations, the population regression line is estimated by $\hat{y} = a + bx$, where $a$ is a constant, $b$ is the regression coefficient, $x$ is the value of the independent variable, and $\hat{y}$ is the predicted value of the dependent variable. The least squares method defines the estimates of these parameters as the values which minimize the sum of the squares between the measurements and the model, which amounts to minimizing the expression $E = \sum_i \left(y_i - (a + b x_i)\right)^2$ (Xie, 2001). Taking the derivatives of $E$ with respect to $a$ and $b$ and setting them to zero gives the following set of equations (called the normal equations): $\sum y_i = Na + b \sum x_i$ and $\sum x_i y_i = a \sum x_i + b \sum x_i^2$. The least squares estimates of $a$ and $b$ are obtained by solving these equations, where $b = \sum (x_i - \bar{X})(y_i - \bar{Y}) / \sum (x_i - \bar{X})^2$ and $a = \bar{Y} - b\bar{X}$. ML Estimation Procedure to find parameter 'a' using MLE: the likelihood function of the G-O model is as given above, with $m(t) = a(1 - e^{-bt})$ and $\lambda(t) = abe^{-bt}$. LS Estimation Procedure to find parameter 'b' using the regression approach: the cumulative distribution function of the G-O model is $F(t) = 1 - e^{-bt}$; the data are transformed so that the model becomes linear, the slope parameter $D$ is estimated by regression, and $D$ is nothing but the parameter $b$ estimated through the regression approach. DISTRIBUTION OF TIME BETWEEN FAILURES Based on the inter-failure data given in Table 1, we monitor the software failure process through a failure control chart. We used cumulative time-between-failures data for software reliability monitoring using the G-O model. The use of the cumulative quantity is a different and new approach, which is of particular advantage in reliability.
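Before turning to the chart itself, the two-step estimation and the quantile-based control limits can be sketched as follows. This is a minimal reconstruction under stated assumptions: the linearization of $F(t)$ via $-\ln(1 - F)$, the median-rank plotting positions, and the 0.00135/0.99865 quantile limits follow common practice, since the paper does not spell out its exact transformation, and the inter-failure data below are hypothetical stand-ins for Table 1.

```python
import numpy as np

# Hypothetical inter-failure times (a stand-in for the paper's Table 1).
inter_failure = np.array([30., 113., 81., 115., 9., 2., 91., 112., 15., 138.])
t = np.cumsum(inter_failure)              # cumulative failure times
N = len(t)

# Step 1 (LSE for b): linearize F(t) = 1 - exp(-b t) with the empirical CDF.
# y = -ln(1 - F_hat(t)) = b t, so a regression through the origin gives b.
F_hat = (np.arange(1, N + 1) - 0.5) / N   # median-rank plotting positions
y = -np.log(1.0 - F_hat)
b_hat = np.sum(t * y) / np.sum(t * t)

# Step 2 (MLE for a): with b fixed, d(ln L)/da = 0 gives a = N/(1 - e^(-b t_N)).
a_hat = N / (1.0 - np.exp(-b_hat * t[-1]))

# Probability limits at the 0.27% false-alarm level (0.00135 per tail),
# applied to the fitted exponential cdf of the time between failures;
# the paper's exact construction may differ.
m = lambda x: a_hat * (1.0 - np.exp(-b_hat * x))    # G-O mean value function
t_L = -np.log(1.0 - 0.00135) / b_hat
t_U = -np.log(0.00135) / b_hat
print(f"a_hat = {a_hat:.2f}, b_hat = {b_hat:.5f}")
print(f"LCL = {t_L:.2f}, UCL = {t_U:.2f}; "
      f"m(LCL) = {m(t_L):.3f}, m(UCL) = {m(t_U):.3f}")
```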
"  a " and "  b " are estimates of parameters and the values can be computed using iterative method for the given cumulative time between failures data (Xie, 2002) shown in table 1. Using "a" and "b" values we can compute () mt . Assuming an acceptable probability of false alarm of 0.27%, the control limits can be obtained as (Xie, 2002): CONCLUSION The given 30 inter failure times are plotted through the estimated mean value function against the failure serial order. The parameter estimation is carried out by two step approach for the considered model. The graphs have shown out of control signals i.e below the LCL. Hence we conclude that our method of estimation and the control chart are giving a +ve recommendation for their use in finding out preferable control process or desirable out of control signal. By observing the Failure Control chart we identified that the failure situation is detected at 9th point of table-4 for the corresponding () mt , which is below () L mt and then continued to fail. It indicates that the failure process is detected at an early stage compared with Xie et. a1 (2002) control chart, which detects the failure at 23rd point for the inter failure data above the UCL. Hence our proposed Failure control chart detects out of control situation at an earlier than the situation in the time control chart. The early detection of software failure will improve the software Reliability. When the time between failures is less than LCL, it is likely that there are assignable causes leading to significant process deterioration and it should be investigated. On the other hand, when the time between failures has exceeded the UCL, there are probably reasons that have lead to significant improvement. From Figure.2 the process is stabilized by touching the X-axis. Where as in Figure.1 there is a possibility of upward average number of failures also. As SPC is to stabilize at some point of time the two-step approach in Figure.2 is preferable.
3,628.8
2013-06-20T00:00:00.000
[ "Computer Science" ]
Octave-Band Four-Beam Antenna Arrays with Stable Beam Direction Fed by Broadband 4 × 4 Butler Matrix A novel concept of four-beam antenna arrays operating in a one-octave frequency range that allows stable beam directions and beamwidths to be achieved is proposed. As shown, such radiation patterns can be obtained when the radiating elements are appropriately spaced and fed by a broadband 4 × 4 Butler matrix with directional filters connected to its outputs. In this solution, broadband radiating elements are arranged in such a way that, for the lower and upper frequencies, two separate subarrays can be distinguished, each one consisting of identically arranged radiating elements. The subarrays are fed by a broadband Butler matrix, at the outputs of which an appropriate feeding network based on directional filters is connected. These filters ensure smooth signal switching across the operational bandwidth between the elements utilized at the lower and higher frequency bands. Therefore, as shown, it is possible to control both the beamwidths and the beam directions of the resulting multi-beam antenna arrays. Moreover, two different concepts of the feeding network connected between the Butler matrix and the radiating elements for lowering the sidelobes are discussed. The theoretical analyses of the proposed antenna arrays are shown and confirmed by measurements of the two developed antenna arrays consisting of eight and twelve radiating elements, operating in a 2-4 GHz frequency range. Introduction In recent years, the development of modern wireless systems has generated interest in advanced antenna technology, among which multibeam antennas that offer multiple independent beams can be distinguished. The concept of multibeam antennas was introduced by Shelton [1] and has remained the subject of extensive research to date [2][3][4]. Multibeam antenna arrays can be realized with the use of beamforming networks, such as Butler matrices, which ensure an appropriate signal distribution across the array [5]. Although there are many reported solutions that involve Butler matrices realized in different technologies, most of them are focused on narrowband concepts [6][7][8][9]. On the other hand, the constant development of communication systems calls for more advanced solutions, such as multiband or broadband networks. Therefore, the concept of scalable antenna arrays has recently gained a lot of interest [10][11][12][13][14][15][16], since such arrays allow the assumed antenna parameters, i.e., beamwidth or beam direction, to be achieved over a very broad bandwidth. In the literature, some concepts of scalable antenna arrays with a constant broadside beam can be found [13,17,18], which are realized with the use of frequency-dependent feeding networks, whereas multi-beam antenna arrays with almost constant multiple beam patterns are rarely reported. This is due to the required distance between radiating elements, which has to be kept around 0.5 λ, and the appropriate signal distribution, which has to be ensured across the array over a broad bandwidth. Although broadband Butler matrices are known [19,20], the required spacing between radiating elements means that dual-band concepts often involve separate antenna arrays operating in each sub-band [21,22], whereas solutions that allow a constant broad frequency range to be covered are rarely reported. One exemplary solution is described in [23], where multibeam antennas operating in an octave frequency range have been described.
In this concept, frequency-dependent Butler matrices change their order from N to N/2 as the frequency increases. As shown, multiple-beam radiation patterns can be achieved with such a beamforming network. As presented, even wider bandwidths can be achieved by the utilization of modified Butler matrices which change their behavior three times across the operational frequency range [24]. However, the major drawback of these solutions is the complexity of the applied beamforming networks, which limits the applicability of the described concepts [23,24]. A simpler approach to the realization of scalable multibeam antennas is presented in [25], where the feeding network consists of a broadband quadrature directional coupler and frequency-dependent power dividers. As shown, this allows an attractive two-beam radiation pattern to be achieved over a frequency range reaching fH/fL = 3. However, the solution proposed in [25] can be implemented only in two-beam antenna arrays and cannot be straightforwardly extended to antenna arrays with a higher number of beams. In this paper, we present a novel concept of multi-beam antenna arrays that allows a four-beam radiation pattern to be achieved over a one-octave frequency range. The proposed feeding network consists of a broadband Butler matrix, at the outputs of which an appropriate feeding network based on directional filters is connected. Such a solution provides attractive four-beam radiation properties over a very broad bandwidth. Simultaneously, it leads to a simpler feeding network compared to the previously developed concept [23], since classic broadband Butler matrices are well developed and the required directional filters are relatively easy to design. The proposed concept was verified by the design and measurements of two four-beam antenna arrays operating in a 2-4 GHz frequency range and consisting of eight and twelve radiating elements, respectively. Concept of Octave-Band Four-Beam Antenna Arrays The concept of the proposed scalable four-beam antenna array is explained in Figure 1. It is based on [25]; however, there are substantial differences between these two approaches. First of all, the antenna array described in [25] utilizes four equally spaced broadband radiating elements. The distance between the two inner elements at the higher frequency is equal to the one between the two outer elements at the lower frequency; therefore, the frequency ratio in this case equals fH/fL = 3. This means that the concept described in [25] is reserved only for two-beam antenna arrays. Therefore, in this paper, we propose a novel approach, in which the radiating elements are not equally distributed across the array, as shown in Figure 1. In particular, the distance between the two elements operating at the lowest and highest frequencies (radiating elements marked in blue and red colors) is equal to half (or 3/2 times) the distance between the two inner elements (the two middle radiating elements marked in red). This implies that the relative distance among all the elements operating at the lower frequency is exactly the same as the relative distance among those operating at the higher frequency when the frequency ratio is equal to fH/fL = 2. Such a distribution of the radiating elements allows scalable four-beam antenna arrays to be realized when appropriate modifications of the amplitude excitation are applied, as explained in detail below.
To generate a multiple beam radiation pattern, a broadband 4 × 4 Butler matrix together with four directional filters (DF) is utilized, as shown in Figure 1. The Butler matrix ensures an appropriate amplitude and phase distribution between each pair of radiating elements that operate at the high and low frequency ranges, whereas the directional filters realize smooth signal switching between these elements. This implies that a similar radiation pattern can be obtained over the entire bandwidth from fL to fH (equal to 2fL). The proposed antenna array was analyzed with the use of numerical optimization, and the frequency characteristics of the required directional filters were found. The optimization process focused on achieving the minimum beamwidth variation together with the minimum variation of all beams' directions. The resulting switching function is shown in Figure 2, which shows the amplitude delivered to each of the radiating elements operating at the lowest frequency (marked as LF) together with the amplitude delivered to each radiating element operating at the highest frequency range (marked as HF). As can be seen, the signal is smoothly switched between the lowest and highest frequency outputs of the directional filters across the bandwidth (see Figure 1).
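As a rough illustration of the switching behavior described above, the sketch below implements a generic power-conserving crossfade between the LF and HF subarray amplitudes over the 2-4 GHz band. The cosine/sine form is an assumption made here for illustration; the paper's numerically optimized switching function (Figure 2) will differ in detail.

```python
import numpy as np

fL, fH = 2.0e9, 4.0e9          # one-octave band considered in the paper

def switching(f):
    """Generic power-conserving crossfade between the LF and HF
    subarray amplitudes (|LF|^2 + |HF|^2 = 1 at every frequency)."""
    x = np.clip((f - fL) / (fH - fL), 0.0, 1.0)
    a_lf = np.cos(0.5 * np.pi * x)     # full at fL, zero at fH
    a_hf = np.sin(0.5 * np.pi * x)     # zero at fL, full at fH
    return a_lf, a_hf

for f in [2.0e9, 2.5e9, 3.0e9, 3.5e9, 4.0e9]:
    lf, hf = switching(f)
    print(f"{f/1e9:.1f} GHz: LF={lf:.3f}, HF={hf:.3f}, "
          f"power={lf**2 + hf**2:.3f}")
```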
The optimization process reveals that, although it is possible to achieve an almost constant beam pattern across such a broad bandwidth, the relative sidelobe level reaches about −4 dB for such an array when the directivity of the single radiating element is taken into account. This is illustrated in Figure 3, where calculation results are shown assuming that the utilized radiating element is directive, with its radiation pattern described by the approximate function cos^1.3(θ). As can be seen, the two outer beams (the 2L and 2R beams) feature high sidelobe levels reaching −4 dB. This is due to the fact that, at the angles at which the array factor has its maximum, the single radiating element features severe attenuation; therefore, the relative sidelobe level rises. Therefore, in this paper, we propose the application of an unequal power distribution to overcome this problem, which allows us to achieve a good radiation pattern. This is another substantial difference between this concept and the one presented in [25]. The tapered excitation across the proposed scalable antenna array can be achieved with either lossy or theoretically lossless networks. The first approach is illustrated in Figure 1, in which additional attenuators (Att) are applied in the outer channels between the applied Butler matrix and the four directional filters. By controlling the attenuation level of these two attenuators, it is possible to achieve a tapered excitation across the entire antenna array. It has to be underlined that by introducing only 1.25 dB of attenuation of the total signal, resulting from the application of 3 dB attenuators in the outer channels, it is possible to improve the overall radiation pattern. The calculated radiation pattern is shown in Figure 4. As can be seen, the proposed method improves the sidelobe level, which is now better than −10 dB.
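To see how the Butler-matrix excitation, the cos^1.3(θ) element pattern, and the 3 dB attenuation interact, the following Python sketch evaluates a four-element array factor. The half-wavelength spacing and the ±45°/±135° progressive phases of an ideal 4 × 4 Butler matrix are textbook assumptions rather than values taken from the developed arrays, and applying the 3 dB taper directly to the outer elements is a simplification of the paper's channel-level attenuators.

```python
import numpy as np

d = 0.5                                  # element spacing in wavelengths (assumed)
n = np.arange(4)                         # four radiating elements
theta = np.radians(np.linspace(-90, 90, 1801))

def pattern_db(w, beta_deg):
    """|array factor| times the assumed cos^1.3 element pattern, in dB."""
    psi = 2 * np.pi * d * np.sin(theta) + np.radians(beta_deg)
    af = np.abs(np.sum(w[:, None] * np.exp(1j * np.outer(n, psi)), axis=0))
    p = af * np.cos(theta) ** 1.3
    return 20 * np.log10(p / p.max() + 1e-12)

def max_sidelobe(p_db, guard_deg=20):
    """Highest lobe outside a guard window around the main-beam peak."""
    peak = np.argmax(p_db)
    g = int(guard_deg * (len(theta) - 1) / 180.0)
    mask = np.ones_like(p_db, dtype=bool)
    mask[max(0, peak - g):peak + g] = False
    return p_db[mask].max()

taper = np.array([10 ** (-3 / 20), 1.0, 1.0, 10 ** (-3 / 20)])  # 3 dB edge taper
# Ideal 4x4 Butler matrix ports: progressive phases of +/-45 and +/-135 deg.
for beam, beta in [("1R", -45), ("1L", +45), ("2R", -135), ("2L", +135)]:
    u = pattern_db(np.ones(4), beta)
    t = pattern_db(taper, beta)
    print(f"beam {beam}: SLL uniform {max_sidelobe(u):6.1f} dB, "
          f"tapered {max_sidelobe(t):6.1f} dB")
```

Even this simplified model reproduces the effect discussed above: the element pattern attenuates the steered outer beams more than their sidelobes near broadside, raising the relative sidelobe level, and the edge taper pushes it back down.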
The second possible approach, which allows the sidelobe level of the resulting antenna array to be minimized with a theoretically lossless network, is based on the concept proposed in [6] and further developed in [7]. The schematic diagram of the proposed scalable antenna array is shown in Figure 5. As seen, in this concept, twelve radiating elements are used to achieve a tapered excitation across the array, and additional unequal power dividers having a power division ratio of 1:2.6 are applied; to achieve the appropriate phase distributions, the selected four radiating elements are rotated, which ensures an ideal 180° phase shift. Moreover, the two outer elements operating at the lower frequency range, marked in red color in Figure 5, are placed closer together to reduce the directivity of the entire antenna array; therefore, they minimize the resulting grating lobe. Such modifications resulted in the scalable antenna array having the radiation pattern shown in Figure 6. As seen, also in this case, a significant sidelobe reduction is achieved; however, the larger difference in the beamwidths of the two outer beams that can be observed is caused by the high directivity of the entire antenna array, which is composed of twelve radiating elements. Figure 6. Calculated radiation patterns of the scalable antenna array shown in Figure 5, in which a single radiating element having a directive radiation pattern described by cos^1.3(θ) was assumed. (a) 1L and 1R beams and (b) 2L and 2R beams. Design and Realization of Octave-Band Four-Beam Antenna Arrays Both of the proposed concepts were verified by the design and realization of four-beam antenna arrays operating in a 2-4 GHz frequency range. First, a directional filter that features the desired switching function was designed, since, among the different approaches to achieving the required switching functionality, such a circuit provides the simplest solution. The proposed schematic diagram and layout of the designed filter are shown in Figure 7. As can be seen, it consists of a circuit composed of two coupled-line sections with two quarter-wave transmission lines in between. Moreover, at one of the outputs, a Schiffman C-section is added to equalize the differential phase response between the two outputs of the directional filter. The parameters of the designed directional filter are summarized in Table 1. The designed directional filter was realized in the homogeneous symmetric stripline structure shown schematically in Figure 8, in which a thin laminate layer having thickness h2 = 0.1 mm was inserted between two thick laminate layers having thicknesses h1 = 1.52 mm. All layers have the same dielectric constant, equal to εr = 3.38. The designed directional filter was manufactured and measured. The obtained results, in comparison with the electromagnetically calculated ones, are shown in Figure 9. As can be seen, the appropriate switching function is achieved. Moreover, the directional filter features a good impedance match and a differential phase variation not higher than ±10°. It is worth underlining that the larger phase imbalance is observed around the 2 GHz and 4 GHz frequencies. This has a negligible impact on the antenna array, since, in these regions, the magnitude difference between the LF and HF paths becomes large. Furthermore, the use of the filter provides an almost constant gain across the one-frequency octave of the resulting antenna arrays.
The performed simulation reveals that the gain change does not exceed ±1 dB for all beams of the eight-element array and the 1L and 1R beams of the twelve-element antenna array, and it does not exceed ±1.5 dB for the 2L and 2R beams of the twelve-element array. It has to be underlined that both proposed feeding networks allow good radiation properties of the resulting scalable antenna arrays to be achieved. Moreover, they are much simpler and easier to design than the solution described in [23], where the concept of a four-beam antenna array operating in an octave frequency range is shown. This is due to the fact that broadband Butler matrices, power dividers and directional filters are well known, whereas the feeding network proposed in [23] requires a very complicated modified Butler matrix, which consists of different types of directional couplers that change their properties over the bandwidth. Table 1. Electrical parameters of the developed directional filter utilized in the design of a broadband four-beam antenna array.
Figure 8. Cross-sectional view of the dielectric structure used for the developed directional filter. As a single radiating element, a linearly tapered slot antenna, shown in Figure 10, was selected. Such a radiating element ensures a very broad bandwidth, sufficient to cover one frequency octave on the one hand, and a stable radiation pattern over the bandwidth on the other hand [26][27][28]. The linearly tapered slot antenna was optimized for operation in the 2-4 GHz frequency range. The obtained layout showing all the dimensions is presented in Figure 10 and in Table 2. The calculated reflection coefficient, in comparison to the measured one, is shown in Figure 11 and is better than −10 dB within the required bandwidth. Although some discrepancies between the simulated and measured reflection coefficients are seen, most likely caused by the inaccuracy of the FR4 dielectric permittivity determination, the designed radiating element features a good impedance match in the required bandwidth. The radiating element was measured in an anechoic chamber. It was placed on a 3D-printed rail using plastic screws to avoid a negative impact on the radiation pattern. A reference horn antenna was placed on the other side of the anechoic chamber at a distance of 4 m. Both the reference antenna and the manufactured element were connected to a two-port vector network analyzer. Figure 12 presents the calculated and measured radiation patterns of the developed linearly tapered slot antenna element, and it is seen that the designed radiating element exhibits a wide beamwidth over the entire bandwidth, which is significant for applications in multi-beam antenna arrays. The developed radiating element was used in both concepts of the scalable four-beam antenna arrays. The feeding network and the rail containing the radiating elements were mounted on the back and front of the robotic arm, respectively. The feeding network was connected to the antenna array using SMA cables. Both the reference antenna and the antenna array were connected to the two-port vector network analyzer.
As it is seen in Figures 1 and 5, the feeding networks for both the eight-element and twelve-element scalable antenna arrays have four ports, so a separate measurement was conducted for each of the ports. During the measurements, one of the feeding network's ports was connected to the network analyzer, whereas the other ports were terminated with 50 Ohm loads. Figure 13 presents the radiation pattern of the eight-element antenna array, in which the additional attenuators and the radiation pattern of a single radiating element were taken into account. As can be seen, the application of such a radiating element slightly deteriorated the sidelobe level of the antenna array, but the sidelobes remained at an acceptable level, since they did not exceed −8 dB. The developed radiating element and directional filter, together with the previously developed broadband Butler matrix described in detail in [19], were utilized to realize the broadband four-beam antenna array. The used Butler matrix exhibits both a return loss and an isolation not worse than 20 dB, and its transmission imbalance does not exceed ±1 dB/8° over the frequency range of interest. Additionally, two 3 dB attenuators were added at the appropriate outputs of the Butler matrix. The assembled model of the four-beam antenna array was measured in an anechoic chamber. The obtained results are shown in Figure 14. It can be seen that good radiation properties were achieved, i.e., the antenna array features constant beam directions and beamwidths. The achieved beamwidth variation does not exceed ±4° for the 1L and 1R beams and ±6° for the 2L and 2R beams, whereas the direction change does not exceed ±4° for the 1L and 1R beams and ±2.5° for the 2L and 2R beams, respectively.
Similarly, the concept of a scalable antenna array composed of twelve radiating elements was verified experimentally. The calculated radiation pattern, in which the radiation pattern of the developed radiating element is taken into account, is shown in Figure 15. Also in this case, a good sidelobe level was achieved; however, a larger difference in the beamwidths, caused by the directivity of the entire array, is noticeable. The designed antenna array was developed based on the same components as in the case of the eight-element antenna array. Additionally, in this case, two simple power dividers were developed to assemble the entire scalable antenna array. The obtained radiation pattern of the manufactured four-beam scalable antenna array is shown in Figure 16. As can be seen, the achieved beamwidth variation does not exceed ±4° for the 1L and 1R beams and ±18° for the 2L and 2R beams, whereas the direction change does not exceed ±2.5° for the 1L and 1R beams and ±7° for the 2L and 2R beams, respectively.
To complete the description of the presented design, the radiation efficiency was calculated with the use of EM simulations. For the eight-element scalable antenna array, the radiation efficiency in the frequency range of interest varies from 81.3% to 76.4%. Similarly, for the twelve-element scalable antenna array, the radiation efficiency varies between 77.9% and 71.3%. Moreover, the measured radiation efficiencies for both antenna arrays behave similarly to the calculated ones. The measured radiation efficiency varies from 63.2% to 55.2% and from 54.6% to 47.4% for the eight-element array and the twelve-element array, respectively. The main cause of the disproportion between the EM simulations and the measurements is the EM simulation setup. During the simulations, an ideal feeding network, lossless apart from the 3 dB attenuators, was assumed. Such a condition cannot be met during measurements, because even circuits that are theoretically lossless introduce some attenuation in the signal path when realized. The obtained measurement results reveal some discrepancies between the calculated and measured radiation patterns of both the developed antenna arrays, caused by the couplings between radiating elements, which were not taken into account during the calculations. Nevertheless, both the developed scalable multi-beam antenna arrays confirm the correctness of the proposed approach and prove the possibility of the realization of four-beam antenna arrays operating in an octave frequency range with the use of the proposed approach. Figure 17 presents both the assembled models of the developed antenna arrays during measurements. To illustrate the advantages of the presented solution against other recently reported multibeam antennas, Table 3 is presented below. As can be seen, the considered designs offer a large variety in terms of the number of beams and the frequency range of operation at the expense of the overall design complexity. It can be observed that the proposed design allows four beams with the lowest variation in terms of both direction and width to be obtained and, simultaneously, it features low complexity.
Conclusions In this paper, a novel concept of multi-beam antenna arrays that operate over a one-octave frequency range is proposed. The developed antenna arrays consist of appropriately distributed radiating elements, which are fed with the use of a classic broadband Butler matrix in conjunction with directional filters. Moreover, it is shown that, in such antenna arrays, a tapered excitation is required to improve the resulting radiation patterns. As shown, this can be achieved with either lossy or theoretically lossless feeding networks. The proposed feeding networks allow multi-beam antenna arrays that cover a broad frequency range to be realized with a relatively simple design. They also allow a stable four-beam radiation pattern to be achieved, in contrast to the concept presented in [25], where only a two-beam radiation pattern can be achieved. Furthermore, the proposed feeding network is much easier to design than the one presented recently in [23]. It utilizes classic, well-developed components, in contrast to the previously described solution, where a sophisticated Butler matrix needs to be designed to achieve the appropriate amplitude and differential phase characteristics. Moreover, it has to be underlined that the concept presented in this paper can be extended to antenna arrays having more beams, e.g., eight beams, whereas frequency-dependent Butler matrices based on the concept from [23] become highly complicated and are not feasible. The proposed concept was successfully verified by the design and measurements of four-beam antenna arrays operating in the 2-4 GHz frequency range and consisting of eight and twelve radiating elements, respectively. The obtained measurement results confirm the correctness and applicability of the presented design methodology.
Simultaneously, as shown in the comparison table, the presented design is of low complexity and provides stable beams over a one-octave bandwidth.
8,391.6
2021-11-07T00:00:00.000
[ "Physics" ]
A novel long non-coding RNA, XLOC_004787, is associated with migration and promotes cancer cell proliferation by downregulating miR-203a-3p in gastric cancer Background Long noncoding RNAs (lncRNAs) have been identified as important regulatory factors implicated in a wide array of diseases, including various forms of cancer. However, the roles of most lncRNAs in the progression of gastric cancer (GC) remain largely unexplored. This study investigates the biological function and underlying mechanism of a novel lncRNA, XLOC_004787, in GC. Methods The location of XLOC_004787 in GES-1 cells and HGC-27 cells was detected by fluorescence in situ hybridization (FISH) assay. The expression levels of XLOC_004787 were assessed using quantitative real-time fluorescence PCR (qRT-PCR) in various cell lines, including GES-1, MGC-803, MKN-45, BGC-823, SGC-7901, and HGC-27 cells. Functional assays such as Transwell migration, Cell Counting Kit-8 (CCK-8), and colony formation experiments were employed to analyze the effects of XLOC_004787 and miR-203a-3p on cell migration and proliferation. Protein levels associated with GC in these cell lines were examined by Western blotting. The intracellular localization of β-catenin and P-Smad2/3 was assessed using an immunofluorescence (IF) assay. Additionally, the interaction between XLOC_004787 and miR-203a-3p was investigated using a dual luciferase assay. Results XLOC_004787 was localized in both the cytoplasm and the nucleus of GES-1 cells and HGC-27 cells. Compared to normal tissues and GES-1 cells, XLOC_004787 expression was significantly upregulated in GC tissues and cells, with the highest and lowest expression observed in SGC-7901 and HGC-27 cells, respectively. Furthermore, reduced expression of XLOC_004787 was seen to inhibit migration and proliferation in SGC-7901 cells. Western blotting analysis revealed that a decrease in XLOC_004787 expression correspondingly decreased the expression of N-cadherin, MMP2, MMP9, Snail, Vimentin, β-catenin, C-myc, Cyclin D1, and TGF-β, while concurrently increasing E-cadherin expression. This was also associated with diminished expression of P-Smad2/3 relative to Smad2/3, and reduced P-Gsk3β expression in comparison to Gsk3β. Additionally, the nuclear entry of P-Smad2/3 and β-catenin was reduced by lower XLOC_004787 expression. Amplifying XLOC_004787 expression via pcDNA_XLOC_004787 suggested a potential for cancer promotion. Notably, XLOC_004787 was found to negatively regulate miR-203a-3p expression, with potential binding sites identified between the two. Higher miR-203a-3p expression was observed to decrease migration and proliferation and to enhance E-cadherin expression. Conversely, suppression of miR-203a-3p expression suggested a potential promotion of proliferation and migration in GC cells. Conclusions These results suggest that XLOC_004787, found to be upregulated in GC tissues, potentially promotes proliferation and migration in GC cells. This occurs through the activation of the TGF-β and Wnt/β-catenin signaling pathways and the expression of EMT-related proteins. Additionally, XLOC_004787 may influence cell migration and proliferation by modulating the signaling pathway via the adsorption and inhibition of miR-203a-3p.
Introduction Gastric cancer (GC), a malignancy primarily originating from the epithelium of the gastric mucosa, presents a significant global health challenge due to its intricate etiology and high mortality rate [1][2][3][4]. This complex disease is characterized by its close association with genetic mutations and irregular gene expression. Moreover, the pathogenesis and progression of GC have been directly linked to infections with certain viruses and microorganisms, most notably the Epstein-Barr virus (EBV) and Helicobacter pylori (H. pylori) [5][6][7]. On a global scale, GC ranks fifth in incidence and fourth in mortality [2], underlining the critical importance of investigating the molecular mechanisms underpinning its development. Understanding these mechanisms is pivotal for improving early diagnosis and therapeutic strategies for GC [8]. The disease is disproportionately prevalent in developing nations [9]. Notably, approximately half of these cases are found in Eastern Asia, with China representing a significant proportion. In fact, the incidence of GC in China constitutes 42.6% of the global total, while the mortality rate is 45.0%, making China fifth in incidence and sixth in mortality among 183 nations [4,[10][11][12][13]. Long noncoding RNAs (lncRNAs), a category of noncoding RNAs exceeding 200 nucleotides in length that lack protein-coding capability, have emerged as critical players in a plethora of biological processes [14]. LncRNAs are involved in the regulation of gene expression, species evolution, embryonic development, metabolic processes, and even tumorigenesis. Certain lncRNAs are identified as potential tumor suppressors or promoters in various cancers, with their dysregulated expression often tied to the biological features of tumor cell proliferation, invasion, and metastasis [14,15]. Moreover, lncRNAs have been implicated in a variety of other diseases, such as cardiovascular conditions, neurological disorders, and metabolic diseases, often as a result of abnormal expression patterns [16]. Recent studies have also demonstrated the involvement of specific lncRNAs in the immune system's biological processes, such as immune cell differentiation, cell cycle regulation, and apoptosis [17,18]. Therefore, the potential to harness the regulatory capabilities of lncRNAs in treating various diseases represents a burgeoning area of research. The intricate interplay between GC and lncRNAs pivots around the complex regulation of gene expression and diverse cellular processes [19]. Emerging evidence has implicated the dysregulation of certain lncRNAs in the pathogenesis and progression of GC [20,21]. This dysregulation appears to interfere with critical signaling pathways implicated in cellular differentiation and apoptosis. Moreover, a subset of these lncRNAs has shown potential as diagnostic and prognostic indicators for GC [22,23]. However, a comprehensive understanding of the roles and therapeutic potential of lncRNAs in GC is yet to be fully elucidated. Our current study contributes to this knowledge base by identifying a novel lncRNA, XLOC_004787, and demonstrating its significant upregulation in GC tissues and cell lines. Zhu H reported that XLOC_004787 was aberrantly expressed in human gastric cells and tissues infected with H. pylori when compared to the control group, and that the aberrant expression of XLOC_004787 may contribute to the pathological response and development of H.
pylori-related diseases [24]. Yao et al. also reported that XLOC_004787 is an upstream regulatory factor of miR-107 and is plausibly involved in inhibiting CVB3 replication and release, as well as the resulting inflammatory responses [25]. Because the development of gastric cancer is closely related to H. pylori infection, we explored the functional role of XLOC_004787 through both silencing and overexpression in GC cells. Our results indicate that XLOC_004787 plays a key role in cell migration and proliferation and impacts EMT, metastasis, proliferation, and the expression of proteins in several signaling pathways. Furthermore, we provide evidence that XLOC_004787 may regulate the progression of GC by modulating the nuclear entry of P-Smad2/3 and β-catenin. An additional observation was a decrease in miR-203a-3p expression, suggesting a role for XLOC_004787 in mediating the migration and proliferation of GC cells. This is particularly significant because previous studies, such as that of Wang et al., have reported that miR-203a-3p inhibits GC cell proliferation by targeting IGF-1R [26]. Our findings therefore point to the potential of XLOC_004787 as a novel therapeutic target for GC, warranting further investigation. Tumor tissue collection In this study, GC tissues and their corresponding non-tumor tissues were procured from the Affiliated People's Hospital of Jiangsu University between 2017 and 2020. The study was conducted in strict accordance with the principles outlined in the Declaration of Helsinki. All procedures pertaining to tissue sample collection and subsequent experimental protocols were approved by the Ethics Committee of Jiangsu University (Zhenjiang, China) and the Ethics Committee of the Affiliated People's Hospital, Jiangsu University. Post-collection, the GC samples were promptly immersed in TRIzol reagent and stored at -80 °C until use. Cell culture This study utilized normal gastric mucosal epithelial cells (GES-1) together with five GC cell lines (BGC-823, HGC-27, MKN-45, SGC-7901, MGC-803). These cell lines were sourced from the Cell Bank of the Chinese Academy of Sciences (Shanghai, China) and the American Type Culture Collection (ATCC, Manassas, VA, USA), and were preserved in liquid nitrogen at the School of Medicine, Jiangsu University. The cells were cultured in Dulbecco's Modified Eagle Medium (DMEM; Gibco, Grand Island, NY, USA) supplemented with 10% fetal bovine serum (FBS; Gibco) in a humidified incubator at 37 °C with 5% CO2.
Fluorescence in situ hybridization (FISH) assay Gene Pharma (Suzhou, China) designed and synthesized the Cy3-conjugated XLOC_004787 probe; the RNA-FISH kit was also purchased from Gene Pharma. Cells were seeded onto glass coverslips, cultured overnight, and fixed with 4% paraformaldehyde at room temperature for 30 min. After a 15-min treatment with 0.1% Triton X-100, cells were washed twice with PBS, and 200 μL of 1× hybridization buffer was added to each well, followed by incubation at 37 °C for 30 min. The liquid was discarded, and 200 μL of 2× buffer C was added and incubated at 37 °C for 30 min. The probe was diluted to 1 μM and denatured for 10 min in a 75 °C water bath; then 2 μL of 1 μM biotin-labeled probe, 4 μL of 1 μM SA-Cy3/FAM, and 14 μL of PBS were combined and incubated at 37 °C for 30 min, after which 180 μL of buffer E was added. The cells were incubated with the probe mixture at 37 °C for 12-16 h for hybridization. The next day, cells were washed with 0.1% buffer F at 37 °C for 10 min, then washed three times in total with 2× buffer C, at 60 °C for 10 min and then at 42 °C for 10 min. Finally, cells were stained with diluted DAPI in the dark for 15 min, washed, mounted, and observed with a fluorescence microscope (Leica, Mannheim, Germany). The qRT-PCR protocol included an initial step at 95 °C for 5 min, followed by 40 cycles of 95 °C for 5 s and 60 °C for 20 s, with a final step of 65 °C for 1 min and 95 °C for 15 s. The 2^−ΔΔCt method was subsequently employed for data quantification, with GAPDH as the internal control. Colony formation and cell proliferation assay The human gastric carcinoma cell line SGC-7901 was subjected to knockdown of XLOC_004787 via transfection with siRNA-XLOC_004787, while HGC-27 cells were transfected with pcDNA-XLOC_004787 or control plasmids. Post-transfection, the cells were detached using 0.25% trypsin, resuspended, and counted. Subsequently, 1 × 10³ cells were seeded into six-well plates and cultured under standard conditions, with the medium refreshed every three days over a period of 10 to 14 days. After incubation, cells were fixed with 4% paraformaldehyde and stained with crystal violet. Additionally, 1 × 10³ cells were seeded into 96-well plates and maintained at 37 °C in a 5% CO2 atmosphere. From the following day, 10 μL of CCK-8 reagent (Tongren, Shanghai, China) was added to each well at 24, 48, 72, and 96 h and incubated for one hour. The absorbance of each well was read at a wavelength of 450 nm on a microplate reader, and the averaged values were used to plot a growth curve. The entire procedure was performed in triplicate.
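As a note on the quantification step, the 2^−ΔΔCt calculation named above is a simple arithmetic transform. The Python sketch below illustrates it under the study's stated convention of GAPDH as the internal control; the Ct values are invented for illustration and are not data from this study.

```python
# Minimal sketch of 2^-delta-delta-Ct relative quantification.
# Ct values below are hypothetical; GAPDH is the internal control as in the text.

def ddct_fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Return relative expression of a target gene versus a control sample."""
    dct_sample = ct_target_sample - ct_ref_sample      # normalize target to GAPDH in the sample
    dct_control = ct_target_control - ct_ref_control   # normalize target to GAPDH in the control
    ddct = dct_sample - dct_control
    return 2 ** (-ddct)

# Example: XLOC_004787 in a GC sample vs matched normal tissue (made-up Cts)
fold = ddct_fold_change(ct_target_sample=24.1, ct_ref_sample=18.0,
                        ct_target_control=27.3, ct_ref_control=18.2)
print(f"relative XLOC_004787 expression: {fold:.2f}-fold")  # > 1 indicates upregulation
```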
Cell migration assay Cell migration assays were performed using Transwell chambers (pore size, 8 µm; Corning, Costar, NY, USA). SGC-7901 and HGC-27 cells were transfected with siRNA-XLOC_004787, pcDNA-XLOC_004787, or the corresponding controls. After 48 h, the cells were resuspended in serum-free DMEM and counted. A volume of 600 µL of medium containing 10% FBS was added to the lower chamber as a chemoattractant, and 1 × 10⁵ cells in 200 µL of serum-free medium were seeded evenly into the upper chamber. After incubation for 12 to 24 h, the chambers were immersed in 4% paraformaldehyde for 30 min at room temperature, rinsed with PBS, stained with crystal violet, and washed again with PBS. Five random fields per membrane were imaged under a microscope, and cells that had migrated to the lower surface of the Transwell membrane were counted for statistical analysis. Western blotting analysis and antibody utilization SGC-7901 and HGC-27 cells were lysed in RIPA lysis buffer (Beyotime Biotechnology, Shanghai, China) supplemented with phenylmethanesulfonyl fluoride (PMSF) and phosphatase inhibitors. Equal protein amounts (100 µg) were resolved on 10% SDS-PAGE gels and transferred onto PVDF membranes. Membranes were blocked with 5% nonfat milk and incubated with primary antibodies (detailed in Table 1). Following incubation, the membranes were washed and further incubated with HRP-conjugated goat anti-rabbit IgG (H + L) secondary antibodies (Fcmcs, Nanjing, China) for an hour at room temperature. After another washing cycle, proteins were detected using the ECL system (Image Quant LAS 4000 mini, Pittsburgh, PA, USA) in accordance with the manufacturer's instructions. Immunofluorescence analysis At 48 h post-transfection, SGC-7901 and HGC-27 cells (2 × 10⁴ each) were seeded into 24-well plates containing cell slides. Twenty-four hours after seeding, the cells were fixed with 4% paraformaldehyde for 30 min and rinsed with PBS. Cells were permeabilized with 0.5% Triton X-100 (Sigma-Aldrich, Hong Kong, China) for 10 min, rinsed, and blocked with 5% BSA for 30 min. Primary antibodies against P-Smad2/3 and β-catenin were added, followed by overnight incubation at 4 °C. The following day, cells were rinsed with PBS, treated with Cy3-labeled goat anti-rabbit IgG antibody (Huabio, Hangzhou, China), and incubated for 45 min at 37 °C in the dark. After a final rinse with PBS, cells were stained with 0.5 ng/mL DAPI for 10 min, washed with PBS, mounted with an anti-fluorescence quencher, and observed under a confocal laser scanning microscope. Dual luciferase reporter assay GenePharma Co. (Suzhou, China) constructed both the wild-type (WT) and mutant (MUT) XLOC_004787 reporter plasmids. Human embryonic kidney (HEK 293T) cells were plated at 1.0 × 10⁵ cells/well in a 24-well plate and co-transfected with either the WT or MUT plasmid together with miR-203a-3p mimics (GenePharma, Shanghai, China) using Lipofectamine 3000. After a co-transfection period of 36 h, Renilla and firefly luciferase activities were quantified, and their ratio was used to evaluate the interaction between XLOC_004787 and miR-203a-3p.
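The dual luciferase readout described above reduces to per-well normalization followed by a ratio against the control group. A minimal, hedged sketch follows; all activity values are invented for illustration and do not reflect measurements from this study.

```python
# Sketch of dual-luciferase normalization: firefly activity is divided by the
# Renilla control per well, then expressed relative to the control-mimic group.
import statistics

def normalized_activity(firefly, renilla):
    return [f / r for f, r in zip(firefly, renilla)]

wt_mimic   = normalized_activity([1200, 1150, 1240], [980, 1010, 990])   # WT reporter + miR-203a-3p mimic
wt_control = normalized_activity([2300, 2260, 2340], [1000, 990, 1020])  # WT reporter + control mimic

ratio = statistics.mean(wt_mimic) / statistics.mean(wt_control)
print(f"WT reporter activity with mimic vs control: {ratio:.2f}")
# A ratio well below 1 for the WT (but not the MUT) reporter supports direct binding.
```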
Statistical analysis Statistical analyses were conducted using GraphPad Prism 8.2 (La Jolla, CA, USA). Image Pro Plus software (Media Cybernetics, USA) was used to obtain the relative gray-scale values of the bands and to perform cell counting. Independent t-tests were used for comparisons between groups. Each experiment was independently conducted three times. All statistical tests were two-tailed, with p < 0.05 set as the threshold for statistical significance. XLOC_004787 is highly expressed in both GC tissues and cells According to the RNA-FISH assay, XLOC_004787 localized to both the cytoplasm and nucleus of GES-1 and HGC-27 cells (Fig. 1A, B). XLOC_004787 expression was measured in 32 pairs of fresh-frozen GC tissues and matched normal gastric tissues, and the results showed that XLOC_004787 was upregulated in GC samples (Fig. 1C, D). In addition, XLOC_004787 exhibited high expression in GC cell lines in vitro, with SGC-7901 cells showing the highest and HGC-27 cells the lowest expression relative to GES-1 cells (Fig. 1E). Based on these findings, XLOC_004787 is expressed at high levels in both GC tissues and cells. XLOC_004787 overexpression promotes GC cell proliferation and migration XLOC_004787 was knocked down by siRNA in SGC-7901 cells and overexpressed with pcDNA-XLOC_004787 in HGC-27 cells (Fig. 2A). To explore the function of XLOC_004787 in GC cells, CCK-8 proliferation, plate colony formation, and Transwell migration experiments were performed: overexpression of XLOC_004787 in HGC-27 cells promoted cell growth and migration, and HGC-27 cell clones became larger and more numerous than in the control group (Fig. 2B-D). Silencing XLOC_004787 markedly weakened proliferation and migration in SGC-7901 cells, whose clones became smaller and fewer than those of control cells (Fig. 2B-D). XLOC_004787 induces proliferation and migration in GC cells by regulating EMT-related proteins and the Wnt/β-catenin and TGF-β signaling pathways Next, EMT-related proteins and components of the Wnt/β-catenin and TGF-β signaling pathways related to proliferation and migration were examined by Western blotting (Fig. 3A-J). The levels of N-cadherin, MMP2, MMP9, Snail, Vimentin, β-catenin, C-myc, Cyclin D1, and TGF-β clearly declined while E-cadherin expression increased in the XLOC_004787 knockdown group, and the opposite pattern was observed after XLOC_004787 overexpression (Fig. 3A-D, G and H). Furthermore, after XLOC_004787 knockdown, the expression of P-Gsk3β and P-Smad2/3 relative to Gsk3β and Smad2/3 decreased significantly compared with the control group (Fig. 3E, I), whereas P-Gsk3β and P-Smad2/3 relative to Gsk3β and Smad2/3 increased markedly after XLOC_004787 overexpression (Fig. 3F, J). XLOC_004787 induces proliferation and migration by regulating the nuclear entry of P-Smad2/3 and β-catenin IF showed that the amount of P-Smad2/3 and β-catenin entering the nucleus was reduced when XLOC_004787 was knocked down in SGC-7901 cells (Fig.
4A, C).On the other hand, IF found that the efficiencey of P-Smad2/3 and β-catenin entering the nucleus is enhanced after XLOC_004787 overexpression in HGC-27 cells (Fig. 4B, D). mir-203a-3p overexpression inhibits GC cells proliferation and migration The expression of mir-203a-3p was overexpressed by mimics in SGC-7901 cells (Fig. 6A).The expression of mir-203a-3p was knocked down by inhibitor in HGC-27 cells (Fig. 6A).To explore the function of mir-203a-3p in GC cells, CCK-8 proliferation experiments, plate clone formation experiments and transwell chamber migration experiments found that over-expressed mir-203a-3p in SGC-7901 cells suppressed cell growth ability and cell migration ability, the SGC-7901 cell clones became smaller and the number decreased compared to control group.(Fig. 6B-D).Inhibiting mir-203a-3p markedly enhanced the proliferation and migration in HGC-27 cells, the HGC-27 cell clones became larger than the control cells, the clones number increased (Fig. 6B-D). mir-203a-3p affects proliferation and migration in GC cells by regulating EMT-related proteins, Wnt/β-Catenin signaling pathway and TGF-β signaling pathway In the following step, EMT-related proteins, Wnt/β-Catenin signaling pathway and TGF-β signaling pathway which related to proliferation and migration were detected by Western blotting (Fig. 7A-J).On the one hand, the level of N-Cadherin, mmp2, mmp9, Snail, Vimentin, β-catenin, C-myc, Cyclin D1 and TGF-β obviously declined while the expression of E-Cadherin increased in the mir-203a-3p mimics group.On the other hand, it showed the the opposite result after the mir-203a-3p inhibition (Fig. 7A-D, G and H).Furthermore, after mir-203a-3p knocked down, compared to control group the expression of P-Gsk3β and P-Smad2/3 significantly increased relative to Gsk3β and Smad2/3(Fig.7F, J).Then the expression of P-Gsk3β and P-Smad2/3 markedly decreased relative to Gsk3β and Smad2/3 after the mir-203a-3p overexpression (Fig. 7E, I). Discussion The interplay between lncRNAs and various diseases, particularly cancers, is intricate and multifaceted [16,18]. 
A surge of research within the last decade has unveiled the dual role of lncRNAs, acting as potential oncogenes or tumor suppressors in the development of cancer [27]. Their dysregulation is often associated with pivotal biological characteristics including tumor cell proliferation, invasion, and metastasis [18]. Specific lncRNAs such as MALAT1, HOTAIR, and UCA1 are highly expressed in GC and are implicated in lymph node metastasis, pathological grading, and recurrence rates [28][29][30]. Conversely, lncRNAs such as SNHG5 and LINC00675 behave as suppressors of GC, with their diminished expression linked to increased GC cell proliferation, invasion, and metastasis [31,32]. While substantial insight has been gained into lncRNA regulatory mechanisms and their various biological roles in the human body, the discovery and characterization of novel functional lncRNAs remain a vibrant area of research. In this study, we identified a novel lncRNA, XLOC_004787, through chip-based screening and subsequently verified its full length by sequencing. Previous studies on XLOC_004787 have focused on its role in certain viruses, such as Coxsackievirus B3 (CVB3) [25], but its function and specific involvement in GC have remained unclear. Our findings indicate elevated expression of XLOC_004787 in both GC tissues and cell lines. Moreover, upregulation of XLOC_004787 in HGC-27 cells promoted cell migration and proliferation, whereas suppression of XLOC_004787 in SGC-7901 cells inhibited these same processes. The heightened expression of XLOC_004787 could potentially modulate MMP9 and MMP2, thereby influencing the migratory behavior and proliferation of GC cells. EMT is observed in numerous malignant tumors, gradually converting tightly bound epithelial cells into more invasive mesenchymal cells, and its early onset in tumors can modify the tumor cell microenvironment, including factors such as inflammation, fibrosis, and neovascularization [33]. In malignant tumors, alterations in epithelial cells are typically marked by reduction of the epithelial marker protein E-cadherin coupled with increased expression of N-cadherin, Snail, and Vimentin [33,34]. Our findings indicate that overexpression of XLOC_004787 enhances the levels of these mesenchymal markers. Additionally, the TGF-β signaling pathway, known to influence the cell cycle, differentiation, extracellular matrix synthesis, and tumor immune response, plays a pivotal role [35]. The Wnt/β-catenin signaling pathway is integral to embryonic development, tissue regeneration, and particularly cancer development. Aberrant activity of the Wnt/β-catenin pathway has been associated with various cancers, including colon, breast, and liver cancers [36,37], and its abnormal activation can foster proliferation, invasion, and metastasis of cancer cells during cancer progression [38]. Our results show that XLOC_004787 may mediate the proliferation of GC cells via the TGF-β and Wnt/β-catenin signaling pathways. Knockdown of XLOC_004787 substantially attenuated the migration, proliferation, and EMT of GC cells, whereas overexpression of XLOC_004787 produced the inverse outcome. In HGC-27 cells overexpressing XLOC_004787, the levels of TGF-β, P-Smad2/3, C-myc, Cyclin D1, P-Gsk3β, and β-catenin increased, with higher nuclear
localization of P-Smad2/3 and β-catenin. In contrast, in SGC-7901 cells with downregulated XLOC_004787 expression, the levels of TGF-β, P-Smad2/3, C-myc, Cyclin D1, P-Gsk3β, and β-catenin diminished, and P-Smad2/3 and β-catenin were less abundant in the nucleus. In the human body, the regulation of lncRNAs operates within an intricate network, and their interaction with microRNAs (miRNAs) is multifaceted. LncRNAs have been observed to function as "miRNA sponges" [27], serving to absorb and thereby modulate miRNA expression, an interaction denoted the "miRNA sponge effect." This interaction impedes miRNA binding to the intended target RNAs, thereby influencing their expression [39]. Conversely, miRNAs are capable of altering the expression and function of lncRNAs by influencing their transcription or splicing [40]. In this study, it was predicted and substantiated that the lncRNA XLOC_004787 targets the downstream miRNA miR-203a-3p. The results showed that XLOC_004787 and miR-203a-3p share sequence complementarity and exhibit reciprocal regulation. Upon overexpressing miR-203a-3p, we noted a reduction in GC cell proliferation and migration, accompanied by inhibited expression of EMT-related proteins and of the TGF-β and Wnt/β-catenin signaling pathways, together with an increase in E-cadherin expression. Conversely, inhibition of miR-203a-3p produced the inverse results. miR-203a-3p has previously been reported as a tumor suppressor targeting IGF-1R [26]; this study further clarifies that relationship. The main limitations of this study are the small number of clinical subjects and the lack of in vivo experiments due to existing laboratory and equipment constraints. For a more comprehensive understanding of this lncRNA's applicability as a potential biomarker, it would be prudent to involve larger patient groups in subsequent clinical studies. Furthermore, the application of inhibitors or inducers of the Wnt/β-catenin and TGF-β signaling pathways could further elucidate the role of XLOC_004787 in promoting GC cell proliferation and migration. Conclusion In summary, our findings suggest that overexpression of XLOC_004787 facilitates GC cell migration and proliferation via the mediation of EMT-related proteins and the TGF-β and Wnt/β-catenin signaling pathways. Hence, it holds potential as a novel diagnostic and therapeutic marker for GC patients. Fig. 1 XLOC_004787 is upregulated in both GC tissues and cells. A and B RNA-FISH assay revealed the cytoplasmic and nuclear location of XLOC_004787 in GES-1 cells and HGC-27 cells (original magnification × 600). C and D XLOC_004787 expression in 32 paired GC tissues and adjacent normal tissues by qRT-PCR. E qRT-PCR detection of XLOC_004787 levels in GES-1 cells and five GC cell lines. *P < 0.05, **P < 0.01, ***P < 0.001 Fig.
2 High expression of XLOC_004787 promoted cell migration and proliferation in GC cells. A qRT-PCR was used to detect the efficiency of knockdown and overexpression of XLOC_004787. B CCK-8 assay was conducted to assess proliferation in GC cells. C Colony formation assay was carried out to assess proliferation in GC cells. D Transwell migration assay was used to assess cell migration (original magnification × 40). *P < 0.05, **P < 0.01, ***P < 0.001 Fig. 4 High expression of XLOC_004787 promoted the nuclear entry of P-Smad2/3 and β-catenin. A and C The amount of P-Smad2/3 and β-catenin entering the nucleus was reduced after XLOC_004787 silencing, by IF; scale bar = 25 µm. B and D The amount of P-Smad2/3 and β-catenin entering the nucleus was enhanced after XLOC_004787 overexpression, by IF (original magnification × 600). Fig. 7 High expression of miR-203a-3p suppressed the levels of proteins related to migration, proliferation, and EMT in GC cells. A, C and G By Western blotting, the level of E-cadherin increased and the levels of MMP9, Snail, Vimentin, β-catenin, C-myc, Cyclin D1, and TGF-β decreased after miR-203a-3p overexpression in SGC-7901 cells. B, D and H By Western blotting, the level of E-cadherin declined and the levels of MMP9, Snail, Vimentin, β-catenin, C-myc, Cyclin D1, and TGF-β increased after miR-203a-3p inhibition in HGC-27 cells. E and I Western blotting found that upregulated miR-203a-3p in SGC-7901 cells reduced the expression of P-Gsk3β and P-Smad2/3 relative to Gsk3β and Smad2/3. F and J Western blotting showed that inhibition of miR-203a-3p in HGC-27 cells enhanced the expression of P-Gsk3β and P-Smad2/3 relative to Gsk3β and Smad2/3. Data are shown as mean ± SEM. The experiments were repeated at least three times. *P < 0.05, **P < 0.01, ***P < 0.001 Table 1 The antibodies used in this study
Strategic Profit Planning and Organizational Performance in Public Sector Commercial Banks of Nepal This study aimed to examine strategic profit planning and its effect on the organizational performance of the public sector commercial banks of Nepal. Primary data were obtained using a standardized questionnaire. Based on a judgment sampling method, 450 employees were selected for the sample, of whom 327 (72.70 percent) senior and middle-level employees participated in the study. Budget planning, budget participation, budgetary sophistication, and budgetary control were considered as the independent variables, and organizational performance was the dependent variable. The findings showed that two dimensions of strategic profit planning, budget planning and budget participation, had a positive and significant impact on the organizational performance of public commercial banks. However, the other two dimensions, budgetary sophistication and budgetary control, had a negative impact on the organizational performance of these banks. Given these realities, banks need to focus on factors beyond the strategic profit planning dimensions that contribute to better performance, such as employee motivation, and invest more in staff development to enhance their organizational performance. Budget Planning Budget planning involves defining revenue streams and taking into account both current and potential expenditures in pursuit of an entity's financial goals. A budget planner's primary goal is to ensure savings after spending is allocated. The budget is an important microeconomic concept: in monetary terms, it expresses an organization's strategy. Variants of the term include the business start-up budget, corporate budget, event management budget, government budget, and personal or family budget (George et al., 2019). Budget Participation Budget participation is a budgeting system in which the budget formation process deliberately includes all individuals affected by the budget. This bottom-up approach aims to achieve more realistic budgets, built with much more input from staff, than top-down budgets imposed on a business by senior management (Abata, 2014). Budget participation is often better for morale and tends to elicit greater effort from workers to accomplish what they anticipated in the budget. However, a solely participatory budget does not take high-level strategic issues into account, so management needs to provide workers with feedback on the overall direction of the organization and how their divisions fit into that direction (Kohzadi & Hafezi, 2016). Budgetary Sophistication Sophisticated budgeting practices are complicated and conceptually difficult to understand, and adopting them is not without costs: both time and effort must be expended to use them. In determining the appropriate level of budgeting sophistication, organizations compare the net benefits of budgeting methods and tools against their costs. Generally, it is hypothesized that such options become more valuable as uncertainty increases; the theory thus suggests that sophisticated budgeting practices are most valuable under high uncertainty, where their costs are likely offset by additional gains from successful investment projects (King & Adetayo, 2018).
Budgetary Control Budgetary control refers to how managers use budgets in a given accounting cycle to monitor and control expenses and activities. In other words, budgetary control is a mechanism by which managers set budgets for financial and performance targets, compare actual results against them, and adjust performance as required (Kohzadi & Hafezi, 2016). Organizational Performance Organizational performance is another key construct of this study. It is the actual output of an organization measured against its expected outputs, summarized in three identifiable, measurable, and specific outcome areas: financial performance, shareholder return, and product performance (Richard et al., 2009). Financial performance is measurable in profits, return on investments, and return on assets (Parajuli & Shrestha, 2020a, 2020b). Shareholder return is measurable as total shareholder return and as economic value added. Product performance, on the other hand, can be measured in sales or market share achieved, new market penetration, and customer feedback evaluation (Nzuki, 2017). In this study, however, organizational performance is measured in terms of return on assets, return on equity, market share growth, total cost reduction, sales growth, and financial liquidity. Strategic Profit Planning and Level of Organizational Performance It is argued that strategic planning results in superior financial performance, measured in commonly agreed financial metrics (e.g., revenue, net profit, ROI, ROE, ROS). More recent research (Miller & Cardinal, 2011; Schwenk & Shrader, 2014) likewise offers compelling evidence that strategic planning does improve financial performance. Most studies that have explored the relationship between strategic profit planning and performance (Gup & Whitehead, 1989; Hopkins & Hopkins, 1994) have concluded that businesses with a structured strategic profit planning process outperform those without one, and that companies taking a constructive, proactive strategic approach perform better than those taking a reactive one. This evidence indicates the importance of instituting a systematic, constructive strategic planning mechanism in any organization, whether large or small. Kohzadi & Hafezi (2016) found that most companies have clear strategies and that there was no substantial association between the strength of strategic planning and the number of employees. King & Adetayo (2018) reported that top management should be more involved in the strategic profit planning phase to achieve defined organizational goals, which in turn would promote organizational growth and development. George et al. (2019) found that the positive effect of strategic profit planning on organizational performance is greatest when performance is measured as productivity and when strategic profit planning is measured as structured strategic planning. Research Methodology 3.1 Sample There are 27 commercial banks in operation in Nepal. Of these, three are public sector banks, namely Agriculture Development Bank Limited (ADBL), Nepal Bank Limited (NBL), and Rastriya Banijya Bank Limited (RBBL). The senior and middle-level employees of these banks were considered for the study. Based on a judgment sampling method, 450 employees were taken for the sample.
Only 327 (72.70 percent) senior and middle-level employees participated in this study. Source of Data Primary data were obtained using a standardized questionnaire. The questionnaire contains a 5-point Likert scale, ranging from one (strongly disagree) to five (strongly agree). Research Framework In this study, budget planning, budget participation, budgetary sophistication, and budgetary control are the independent variables and organizational performance is the dependent variable. Thus, based on the research of George et al. (2019), Kohzadi and Hafezi (2016), and King and Adetayo (2018), the model was adapted and developed as follows. The following hypotheses were built on this research framework to investigate the effect of strategic profit planning on organizational performance: H1: Budget planning has a substantial influence on organizational performance. H2: Budget participation has a direct effect on organizational performance. H3: Budgetary sophistication has a significant impact on organizational performance. H4: Budgetary control has a significant impact on organizational performance. Data Analysis Tools Descriptive statistics such as the mean and standard deviation (S.D.), and inferential statistics such as correlation analysis and multiple regression analysis, were used as methods of data analysis. Reliability Test Cronbach's Alpha (α) was used to test the reliability of the study. This alpha is also known as the coefficient of reliability (or consistency); a coefficient of 0.70 or higher is considered acceptable. The reliability test is presented in the following table. Nunnally (1978) reported that a Cronbach's Alpha value of at least 0.70 is a good indication of internal consistency. Table 1 shows that the value of Cronbach's Alpha for each variable under study is greater than 0.70, which supports the notion that the study is reliable. Table 2 presents the means and standard deviations for each variable used in the study through descriptive statistics. The mean organizational performance score is 4.37 with an S.D. of 0.47, which means organizational performance is high within the public sector commercial banks. Among the strategic profit planning factors, budget planning has the highest mean of 4.57 with an S.D. of 0.53, whereas budget participation and budgetary control have the lowest mean of 4.14 with an S.D. of 0.68. Relationship between Strategic Profit Planning and Organizational Performance A Pearson correlation was run to establish how the variables were related to each other. Table 3 shows the correlation results for the study variables. The results indicate that budget planning, budget participation, budgetary sophistication, and budgetary control are positively related to organizational performance at 0.92, 0.81, 0.91, and 0.85, respectively, at a 1 percent level of significance. This indicates that none of the strategic profit planning dimensions had a negative correlation with bank performance; strategic profit planning thus had positive associations with organizational performance. Impact of Strategic Profit Planning on Organizational Performance This section presents the regression results examining the impact of the strategic profit planning dimensions on organizational performance.
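For readers who wish to reproduce this style of analysis, the sketch below shows how Cronbach's alpha and a Pearson correlation of the kind reported above are computed. The Likert responses are synthetic, so the outputs will not match the values in Tables 1-3.

```python
# Sketch of the reliability (Cronbach's alpha) and Pearson correlation steps,
# using synthetic 5-point Likert data; the study's raw responses are not reproduced.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
items = rng.integers(1, 6, size=(327, 5)).astype(float)  # 327 respondents x 5 items

def cronbach_alpha(item_scores):
    k = item_scores.shape[1]
    item_var = item_scores.var(axis=0, ddof=1).sum()     # sum of per-item variances
    total_var = item_scores.sum(axis=1).var(ddof=1)      # variance of total scores
    return (k / (k - 1)) * (1 - item_var / total_var)

print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")  # >= 0.70 deemed acceptable

budget_planning = items.mean(axis=1)                     # construct score per respondent
performance = budget_planning + rng.normal(0, 0.3, 327)  # synthetic criterion variable
r, p = pearsonr(budget_planning, performance)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
```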
The regression model indicates a positive impact of budget planning and budget participation on organizational performance, with beta coefficients of 1.086 and 0.524, respectively. However, budgetary sophistication and budgetary control have a negative impact on organizational performance, with beta coefficients of -0.084 and -0.63, respectively. The results imply that budget planning and budget participation are significant predictors of organizational performance, providing support for H1 and H2, whereas budgetary sophistication and budgetary control are not significant predictors, so H3 and H4 are not supported. Hypotheses and decisions: H1: Budget planning has a substantial influence on organizational performance. Accepted. H2: Budget participation has a direct effect on organizational performance. Accepted. H3: Budgetary sophistication has a significant impact on organizational performance. Rejected. H4: Budgetary control has a significant impact on organizational performance. Rejected. Discussion and Conclusion This study aimed to examine strategic profit planning and its effect on the organizational performance of Nepal's public sector commercial banks. Strategic profit planning comprises budget planning, budget participation, budgetary sophistication, and budgetary control, whereas organizational performance comprises return on assets, return on equity, market share growth, total cost reduction, sales growth, and financial liquidity. The study indicated that none of the strategic profit planning dimensions had a negative correlation with bank performance; strategic profit planning thus had a positive and significant relationship with organizational performance. The findings further showed that, in terms of budget planning and budget participation, the dimensions of strategic profit planning had a positive and significant impact on the organizational performance of public commercial banks. In their studies, Drury (2000), Garrison, Noreen, and Seal (2003), and Joshi, Al-Mudhaki, and Bremser (2003) reported that budgeting can serve multiple functions in an organization's financial decision-making and internal activities, which ultimately supports the improvement of organizational performance. In the same way, several scholars have argued that budgetary participation and organizational performance are closely related (e.g., Shields & Shields, 1998; Birnberg & Shields, 1989; Gul et al., 1995; Magner, Welker, & Campbell, 1995; Tsui, 2001; Qi, 2010). They stated that, through budget participation (downward information sharing), subordinates gain information from superiors that helps clarify their organizational roles, including their duties, responsibilities, and expected performance, which in turn enhances organizational performance. However, the other two dimensions of strategic profit planning, budgetary sophistication and budgetary control, had a negative impact on the organizational performance of these banks. Given these realities, banks need to focus on factors beyond the strategic profit planning dimensions that contribute to better performance, such as employee motivation, and invest more in staff development to enhance their organizational performance.
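A sketch of the kind of multiple regression underlying the beta coefficients reported above is shown below. The data are simulated, so the fitted coefficients only illustrate the procedure, not the study's estimates.

```python
# Minimal sketch of a four-predictor OLS regression (planning, participation,
# sophistication, control -> organizational performance) on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 327
X = rng.normal(4.3, 0.5, size=(n, 4))      # four construct scores per respondent
beta_true = np.array([1.0, 0.5, -0.1, -0.6])
y = X @ beta_true + rng.normal(0, 0.4, n)  # synthetic performance scores

X_design = sm.add_constant(X)
model = sm.OLS(y, X_design).fit()
print(model.params)    # intercept plus four beta coefficients
print(model.pvalues)   # significance of each predictor (tests of H1-H4)
```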
Conservation in the mechanism of Nedd8 activation by the human AppBp1-Uba3 heterodimer. Human Nedd8-activating enzyme AppBp1-Uba3 was purified to apparent homogeneity from erythrocytes. In the presence of [2,8-3H]ATP and 125I-Nedd8, the heterodimer rapidly forms a stable stoichiometric ternary complex composed of tightly bound Nedd8 [3H]adenylate and Uba3-125I-Nedd8 thiol ester. Isotope exchange kinetics show that the heterodimer follows a pseudo-ordered mechanism with ATP the leading and Nedd8 the trailing substrate. Human AppBp1-Uba3 follows hyperbolic kinetics for HsUbc12 transthiolation with 125I-Nedd8 (kcat = 3.5 ± 0.2 s⁻¹), yielding Km values for ATP (103 ± 12 μM), 125I-Nedd8 (0.95 ± 0.18 μM), and HsUbc12 (43 ± 13 nM) similar to those for ubiquitin activation by Uba1. Wild-type 125I-ubiquitin fails to support AppBp1-Uba3-catalyzed activation or HsUbc12 transthiolation. However, modest inhibition of 125I-Nedd8 ternary complex formation by unlabeled ubiquitin suggests a Kd > 300 μM for ubiquitin. Alanine 72 of Nedd8 is a critical specificity determinant for AppBp1-Uba3 binding because 125I-UbR72L undergoes heterodimer-catalyzed hyperbolic HsUbc12 transthiolation, yielding Km = 20 ± 9 μM and kcat = 0.9 ± 0.3 s⁻¹. These observations demonstrate remarkable conservation in the mechanism of AppBp1-Uba3 that mirrors its sequence conservation with the Uba1 ubiquitin-activating enzyme. Class I ubiquitin-like proteins exert their biological effects through covalent conjugation to their respective target proteins via distinct ligation pathways that function in parallel to those of ubiquitination. The ubiquitin-like proteins include Sumo (1, 2), Nedd8 (3, 4), Hub1 (5), ISG15 (6), FAT10 (7, 8), and Apg12 (9), among others, reviewed in Ref. 10. Generally, conjugation of ubiquitin-like proteins modulates the protein-ligand interactions of their target rather than committing the target protein to degradation, the role most associated with ubiquitin ligation. Among the ubiquitin-like proteins, Nedd8 is the most closely related to ubiquitin, with 58% identity between human paralogs (10). Nedd8 and its plant ortholog Rub1 are conjugated to Cdc53/Cul1 (11, 12) and other members of the Cullin family of proteins (13). Cullins are essential structural subunits of the Skp1-based (SCF: Skp1, Cul1, F-box) and elongin B/C-based families of ubiquitin protein ligases (E3), reviewed in Refs. 14 and 15. Conjugation of Nedd8 to Cul1 and Cul2 requires the RING finger protein Roc1/Rbx1/Hrt1, which serves as a docking adapter (16, 17). The attachment of Nedd8 to Cullin isoforms is not required for the intrinsic ubiquitin ligase activity of the SCF complex; however, it enhances ubiquitin chain formation through activation of the cognate E2 ubiquitin-conjugating enzyme (12, 18). Because of the central role of SCF and elongin B/C ubiquitin ligases in critical regulatory processes within eukaryotes, Nedd8 conjugation is an essential post-translational modification that has been the subject of considerable recent interest (12, 15). The ATP-coupled activation of Nedd8 that is required for subsequent charging of the Nedd8-specific Ubc12 conjugating enzyme is catalyzed by heterodimeric AppBp1-Uba3 in humans (4, 19). Human Uba3 shows 43% homology to the carboxyl-terminal 500 residues of the human ubiquitin-activating enzyme HsUba1 and encompasses the putative ubiquitin adenylate active site identified by homology to the MoeB subunit of molybdopterin synthase (20, 21).
Deletion of Uba3 is embryonic lethal in mice, arising from a mitotic defect in the G1/G0 transition and the resulting accumulation of cyclin E and β-catenin (22, 23), both targets of SCF ligases (24). However, the Uba3-catalyzed activation of Nedd8 exhibits an absolute requirement for AppBp1, a protein first identified by its interaction with the carboxyl terminus of amyloid precursor protein and that has marked homology to the amino-terminal half of Uba1 (20, 21, 25). Overexpression of AppBp1 rescues the temperature-sensitive ts41 mutation of Chinese hamster lung cells by driving S-M checkpoint progression through Nedd8 conjugation (26). The recent 2.6 Å structure of human AppBp1-Uba3 confirms that AppBp1 is required in part to contribute a short conserved active site segment first identified in the mechanistically related MoeB subunit of molybdopterin synthase (21, 27). In the activation of ubiquitin, Uba1 forms a ternary complex composed of 1 eq each of a tightly bound ubiquitin adenylate and a covalent Uba1-ubiquitin thiol ester at a conserved active site, Cys632 (28, 29), HsUba1a numbering. Early ATP:PPi exchange studies demonstrated that rabbit Uba1 catalyzes an absolutely ordered mechanism in which ATP binding precedes that of ubiquitin prior to the first catalytic step of ubiquitin adenylate formation (29). The activated ubiquitin moiety is subsequently transferred to the thiol ester site comprising Cys632 prior to formation of a second ubiquitin adenylate, generating the final ternary complex (28, 29). Marked conservation among the activating enzymes for ubiquitin and ubiquitin-like proteins argues that they proceed by similar mechanisms. However, the apparently substoichiometric formation of the predicted Nedd8 adenylate intermediate catalyzed by the reconstituted plant ortholog of AppBp1-Uba3 suggests that the catalytic cycle for Nedd8 activation may exhibit some differences from that of ubiquitin (30). The latter observation is significant because the presence of enzyme-bound ubiquitin adenylate is required for ubiquitin transthiolation from the E1 ternary complex to E2 carrier proteins, even though this intermediate is not the immediate donor of activated polypeptide (31). To date, the enzymes involved in the activation of ubiquitin-like proteins have not been mechanistically characterized in sufficient detail to resolve these and related questions. In the present studies, we have found that human erythrocytes represent an excellent source of active AppBp1-Uba3 heterodimer and have utilized covalent affinity-purified AppBp1-Uba3 and recombinant human Ubc12 in the first mechanistic studies of Nedd8 activation. The results demonstrate marked conservation between the mechanisms for Nedd8 and ubiquitin activation and identify a critical specificity determinant for polypeptide recognition by their respective activating enzymes. MATERIALS AND METHODS Bovine ubiquitin was purchased from Sigma and purified to homogeneity as described previously (32). Homogeneous wild-type ubiquitin, the recombinant ubiquitin mutant UbR72L (33), and recombinant human Nedd8 were radiolabeled by the chloramine-T method using carrier-free Na125I obtained from PerkinElmer Life Sciences (34). Cloning, Expression, and Purification of Human Recombinant Nedd8 - Nedd8 was cloned from a HeLa cell cDNA library by PCR amplification using 5′ and 3′ primers flanking the coding sequence that contained NdeI or EcoRI restriction sites, respectively.
The PCR product was ligated directly into pGEM-T (Promega) for amplification and purification. The resulting pGEM-Nedd8 construct was digested with NdeI/EcoRI and ligated into similarly restricted pPLhUb to yield pPLNedd8 (33). The Nedd8 coding sequence was verified by automated sequencing of the entire coding region at The Protein and Nucleic Acid Core Facility at the Medical College of Wisconsin. The pPLNedd8 plasmid was transformed into AR58 Escherichia coli cells constitutively expressing a temperature-sensitive λ repressor protein (35). Protein expression was induced by rapidly shifting the cultures to the non-permissive temperature of 42 °C (32, 33, 35). After 2 h of induction at 42 °C, cells from 2 liters of LB culture containing 100 μg/ml ampicillin were collected by centrifugation at 5,000 × g for 15 min, then lysed by passage through a French press. All subsequent steps were conducted at 4 °C. The lysate was centrifuged for 30 min at 30,000 × g. Recombinant Nedd8 protein occurred nearly quantitatively within bacterial inclusion bodies present in the 30,000 × g pellet but could be extracted and refolded using a modification of a protocol previously described for the isolation of unstable recombinant ubiquitin mutants (33). The pellet containing the Nedd8-associated inclusion bodies was collected and resuspended to the original lysate volume in 50 mM Tris acetate (pH 7.5), then centrifuged again. The resulting pellet was solubilized in 30 ml of Tris acetate buffer (pH 7.5) containing 6 M urea. The mixture was stirred at room temperature for 30 min, then the urea was slowly removed by dialysis (3.5-kDa exclusion limit dialysis tubing) overnight against 2 × 4 liters of 50 mM Tris acetate buffer (pH 7.5). Insoluble protein was removed by centrifugation at 5,000 × g for 15 min. The supernatant was batch adsorbed onto a 250-ml bed volume of DEAE cellulose (Whatman) equilibrated with 50 mM Tris acetate (pH 7.5). The unadsorbed fraction from the DEAE cellulose was collected and titrated to pH 4.5 with acetic acid before being applied to a 2.5 × 10-cm column of CM52 cellulose (Whatman) equilibrated with 25 mM ammonium acetate (pH 4.5) (36). Bound protein was eluted stepwise with 50 mM ammonium acetate (pH 5.5). Fractions containing Nedd8 protein were pooled and dialyzed against distilled water. The pH of the dialyzed solution was adjusted to 4.5 with acetic acid before application to a Pharmacia Mono S 5/5 column equilibrated with 25 mM acetic acid titrated to pH 4.5 with ammonium acetate (36). The bound protein was eluted with a linear 0-0.5 M NaCl gradient (10 mM/min) at a 1 ml/min flow rate. This procedure generally yielded 2-5 mg of protein per liter of E. coli culture. The protein was greater than 99% homogeneous as assessed by Coomassie Brilliant Blue staining following resolution by 14% (w/v) SDS-PAGE. Absolute protein concentration was determined spectrophotometrically using the empirical 280 nm ubiquitin extinction coefficient of 0.16 (mg/ml)⁻¹ (36); the identical aromatic amino acid content of ubiquitin and human Nedd8 allowed use of the empirical extinction coefficient of the former polypeptide in this assay. Cloning and Expression of Human AppBp1 and Uba3 - The full-length coding sequence of Uba3 was cloned by PCR from human fetal brain expressed sequence tag I.M.A.G.E. Consortium Clone 45573 (American Type Culture Collection). Flanking primers containing complementary 5′ and 3′ coding sequences and either SalI or EcoRI restriction sites, respectively, were used to amplify the cDNA.
The resulting PCR product was ligated into pGEM-T (Promega) to yield pGEMT-Uba3 for subsequent amplification, purification, and sequencing. The complete Uba3 coding sequence was then subcloned into pGEX4T-1 (Amersham Biosciences) using SalI and EcoRI restriction sites to yield pGEX-Uba3. AppBp1 was cloned by PCR from a HeLa cDNA library using flanking primers that contained complementary 5′ and 3′ coding sequences and either NdeI or EcoRI restriction sites, respectively. The PCR product was ligated directly into pGEM-T. The resulting pGEMT-AppBp1 clone was digested with NdeI and EcoRI, then subcloned into complementarily digested pGEX4T-1 to yield pGEX-AppBp1. The AppBp1 sequence was verified by sequencing the entire insert. The glutathione S-transferase fusion proteins GST-Uba3 and GST-AppBp1 were expressed in E. coli BL21 cultures and purified from refolded inclusion bodies by glutathione affinity chromatography. Briefly, bacteria were grown to an A600 of 0.6 at 30 °C and induced by the addition of 0.1 mM isopropyl-1-thio-β-D-galactopyranoside. After 2 h of induction, the cells were collected by centrifugation and lysed by passage through a French press. The lysate was centrifuged at 30,000 × g for 30 min. The resulting pellets were washed with buffer containing 50 mM Tris-HCl (pH 7.5), 2 mM EDTA, and 1 mM DTT, then resuspended to the original lysate volume in the same buffer containing 6 M urea. After standing on ice for 30 min, the urea was removed by dialysis (3.5-kDa exclusion limit dialysis tubing) against 2 × 4 liters of 50 mM Tris-HCl (pH 7.5) containing 1 mM DTT. Insoluble protein was removed by centrifugation prior to applying the dialysate to a 5-ml glutathione-agarose column. Unbound protein was removed by washing the column with 5 bed volumes of 50 mM Tris-HCl (pH 7.5). Bound protein was eluted with 5 bed volumes of 50 mM Tris-HCl (pH 7.5) containing 20 mM glutathione, then concentrated with a Millipore Ultrafree BioMax-5K centrifugal filter. The resulting fusion protein was cleaved by digestion with 10 units of thrombin (Amersham Biosciences) per milligram of recombinant protein according to the manufacturer's recommendations. Processed AppBp1 or Uba3 was resolved from GST and thrombin by fast protein liquid chromatography (FPLC) using a Pharmacia Mono Q 5/5 column equilibrated with 50 mM Tris-HCl (pH 7.5) containing 1 mM DTT. Both AppBp1 and Uba3 eluted between 0.31 and 0.35 M within a linear 0-0.5 M NaCl gradient (12.5 mM/min) at a 1 ml/min flow rate. Recombinant AppBp1 and Uba3 proteins were greater than 80% pure, as assessed by Coomassie Brilliant Blue staining of samples resolved by 10% (w/v) SDS-PAGE, and were used without further purification. Protein concentrations for AppBp1 and Uba3 were estimated densitometrically by comparing Coomassie-stained bands to bovine serum albumin standards. Cloning and Expression of HsUbc12 - Human Ubc12 was cloned by PCR from a HeLa cell cDNA library using 5′ and 3′ primers immediately flanking the HsUbc12 coding sequence that contained SalI and EcoRI restriction sites, respectively. The PCR product was subsequently ligated into pGEM-T (Promega) for amplification and sequencing. The resulting construct was digested with SalI and EcoRI, then the HsUbc12 coding sequence was isolated and ligated into similarly restricted pGEX4T-1 (Amersham Biosciences) to yield pGEX-Ubc12. The HsUbc12 coding sequence was verified by sequencing the entire insert. The GST-HsUbc12 fusion protein was expressed in E.
coli BL21 cells and purified by glutathione affinity chromatography. Bacterial cells transformed with pGEX-Ubc12 were grown at 37 °C to an A600 of 0.6, then protein expression was induced by the addition of isopropyl-1-thio-β-D-galactopyranoside to a final concentration of 0.1 mM. After 2 h of induction, cells were collected by centrifugation, resuspended in buffer containing 50 mM Tris-HCl (pH 7.5), 2 mM EDTA, and 1 mM DTT, then lysed by passage through a French press. The lysate was clarified by centrifugation at 30,000 × g for 30 min and the resulting supernatant was applied to a glutathione-agarose column. Unbound protein was removed by washing the column with 5 bed volumes of 50 mM Tris-HCl (pH 7.5), 2 mM EDTA, and 1 mM DTT. Bound protein was eluted with 50 mM Tris-HCl (pH 7.5) containing 20 mM glutathione, then concentrated using a Millipore Ultrafree BioMax-5K centrifugal filter. Following processing of the fusion protein with thrombin (Amersham Biosciences) at 10 units per mg of fusion protein, free HsUbc12 was further purified by FPLC using a Pharmacia Mono Q 5/5 column equilibrated with 50 mM Tris-HCl (pH 7.0) containing 1 mM DTT. The HsUbc12 protein eluted at 0.26 M within a linear 0-0.5 M NaCl gradient (12.5 mM/min) at 1 ml/min. The protein was greater than 95% pure as assessed by Coomassie Brilliant Blue staining. The concentration of active protein was determined by an end point thiol ester assay using 125I-Nedd8 (about 10,000 cpm/pmol) and affinity-purified AppBp1-Uba3 heterodimer (37). Purified protein was flash frozen and stored at -80 °C for several months without loss of activity. Affinity Purification of Human AppBp1-Uba3 Heterodimer from Human Erythrocytes - The AppBp1-Uba3 complex was isolated from human red blood cell Fraction II using Nedd8 affinity chromatography by adapting earlier methods for the isolation of Uba1 (28, 37). Recombinant Nedd8 was coupled to Affi-Gel 10 (Bio-Rad) at ~0.5 mg of Nedd8/ml of resin for a final concentration of 60 μM Nedd8 (37). Five units of outdated whole blood were obtained from the Blood Center of Southeastern Wisconsin and used to prepare Fraction II as described previously (37). Erythrocyte Fraction II was supplemented with final concentrations of 2 mM ATP, 10 mM MgCl2, 10 mM creatine phosphate, and 1 IU/ml creatine phosphokinase, then applied to the Nedd8 affinity column (10 ml bed volume) previously equilibrated with 50 mM Tris-HCl (pH 7.5), 2 mM ATP, and 2 mM MgCl2. The column was washed successively with 2 bed volumes of 50 mM Tris-HCl (pH 7.5), 3 bed volumes of 50 mM Tris-HCl (pH 7.5) containing 0.25 M KCl, and 2 bed volumes of 50 mM Tris-HCl (pH 7.5). Bound protein was eluted with 2 bed volumes of 50 mM Tris-HCl (pH 7.5) containing 2 mM AMP and 2 mM PPi, followed by 2 bed volumes of 0.1 M Tris-HCl (pH 9.0) containing 10 mM DTT. The latter eluate was adjusted to pH 7.5 immediately following elution. The proteins from the two elutions were separately concentrated using a Millipore Ultrafree BioMax-5K centrifugal filter, then dialyzed against 50 mM Tris-HCl (pH 7.5) containing 1 mM DTT and used without further purification. The concentration of active AppBp1-Uba3 heterodimer was determined by measuring the stoichiometric formation of Uba3-125I-Nedd8 thiol ester (below).
The stoichiometric formation of Uba3-125I-Nedd8 thiol ester was determined as previously described for Uba1 using radioiodinated protein (28, 29). Briefly, 25-μl reactions containing 50 mM Tris-HCl (pH 7.5), 10 mM MgCl2, 2 mM ATP, 1 mM DTT, 5 μM 125I-Nedd8 (about 10,000 cpm/pmol), 0.5 IU inorganic pyrophosphatase, and the indicated amounts of AppBp1-Uba3 were incubated at 37 °C for 3 min. The reactions were quenched by the addition of an equal volume of 4% SDS sample buffer and the proteins were resolved by non-reducing SDS-PAGE on 12% (w/v) acrylamide gels. The gels were dried and the thiol esters were visualized by autoradiography, then excised and quantified by γ-counting (28, 29). ATP:PPi Isotope Exchange Kinetic Assays - Initial rates of ATP:32PPi isotope exchange were measured as previously described for the ubiquitin-activating enzyme (29). Fifty-μl reactions contained 50 mM Tris-HCl (pH 7.5), 10 mM MgCl2, 1 mM DTT, 1 mM Na32PPi (25-50 cpm/pmol), 10 nM human erythrocyte AppBp1-Uba3 heterodimer, and the indicated concentrations of ATP, Nedd8, and 32PPi. After starting the reactions by addition of Nedd8, the assays were incubated at 37 °C for 20 min, then quenched by the addition of 0.5 ml of 5% (w/v) trichloroacetic acid containing 4 mM NaPPi followed by 300 μl of a 10% (w/v) charcoal slurry in 2% (w/v) trichloroacetic acid. The assays were centrifuged at 14,000 × g for 5 min, then the supernatant was removed by aspiration. The charcoal pellet was washed three times with 1 ml of 2% (w/v) trichloroacetic acid prior to quantitation of 32P radioactivity incorporated into ATP by Cerenkov radiation (29). HsUbc12 Transthiolation Kinetic Assays - Initial rates of AppBp1-Uba3 heterodimer-catalyzed transthiolation were assayed by monitoring formation of HsUbc12-125I-Nedd8 as described previously for HsUbc2b (38). Reactions of 25 μl contained 50 mM Tris-HCl (pH 7.5), 10 mM MgCl2, 1 mM DTT, 0.5 IU FPLC-purified inorganic pyrophosphatase, 0.5 nM AppBp1-Uba3, and the indicated concentrations of ATP, 125I-Nedd8 (about 10,000 cpm/pmol), and recombinant HsUbc12. The reactions were incubated for 1 min at 37 °C to reach thermal equilibrium prior to the addition of radiolabeled Nedd8. After 30 s, the reactions were quenched by the addition of an equal volume of 4% SDS sample buffer and the proteins were resolved by non-reducing SDS-PAGE on 12% (w/v) acrylamide gels (37, 38). The gels were dried and the 125I-Nedd8 thiol esters were visualized by autoradiography. Absolute amounts of Ubc12-125I-Nedd8 thiol ester formed were determined by γ-counting of excised bands (37, 38). RESULTS Initial Studies with Recombinant AppBp1 and Uba3 - Recombinant AppBp1 and Uba3 were expressed separately and purified as described under "Materials and Methods." The purified proteins were of the predicted molecular weights as assessed by SDS-PAGE and Coomassie staining (data not shown). Neither of the two purified proteins alone at 20 nM demonstrated a detectable level of 125I-Nedd8 thiol ester formation by non-reducing SDS-PAGE, nor was such a thiol ester detected when 20 nM of each of the two proteins were mixed (not shown). Similarly, there was no detectable Nedd8 [3H]adenylate formed by either of the recombinant proteins individually at 20 nM or in equimolar combination. However, equimolar mixtures of AppBp1 and Uba3 (20 nM) catalyzed a low rate of Nedd8-dependent ATP:32PPi exchange that was not observed with either subunit alone (not shown).
Because ATP: 32 PP i exchange must proceed through a Nedd8 adenylate intermediate (29,39), a low level of active heterodimer must be formed on mixing of the recombinant subunits which was below the limit of detection by the stoichiometric Nedd8 [ 3 H]adenylate assay yet detectable by the much more sensitive isotope exchange rate assay. The reconstituted AppBp1-Uba3 heterodimer must also form a Uba3-125 I-Nedd8 thiol ester because a low but measurable rate of 125 I-Nedd8 transthiolation to Ubc12 was also found at 20 nM recombinant AppBp1 and Uba3 (not shown). In refolding experiments, equimolar amounts of the recombinant proteins were combined and urea was added to a final concentration of 6 M. The urea was then removed by dialysis to allow refolding of the proteins. Refolding the combined proteins did not enhance the activity of 125 I-Nedd8 transthiolation to Ubc12, as a measure of functional heterodimer formation, above that found by combining the separately refolded subunits. However, a time-dependent, 10-fold increase in the rate of 125 I-Nedd8 transthiolation to HsUbc12 was noted when equimolar native AppBp1 and Uba3 were combined and then preincubated at 37°C prior to initiating the assay. The timedependent increase in HsUbc12 transthiolation activity followed first-order kinetics with a t1 ⁄2 of 9.8 min. Together, these observations suggest that the in vitro formation of an active AppBp1-Uba3 heterodimer is relatively slow. Affinity Purification of the AppBp1-Uba3 Heterodimer from Human Erythrocytes-Because human erythrocytes are a rich source of the ubiquitin-activating enzyme, we tested human erythrocyte Fraction II for Nedd8 activating activity in an effort to obtain a more practical source of functional AppBp1-Uba3 heterodimer. Fraction II from human erythrocytes shows a pronounced band of 125 I-Nedd8 thiol ester when incubated with the radiolabeled polypeptide in the presence (lane 2) but not the absence (lane 1) of ATP (Fig. 1A). The mobility of the 125 I-Nedd8 thiol ester band on SDS-PAGE gave an apparent molecular weight of 60,000 that was consistent with the predicted molecular weight for Uba3-125 I-Nedd8 thiol ester of 58,000. In addition, the Uba3-125 I-Nedd8 thiol ester band was labile to brief incubation at 100°C in the presence of ␤-mercaptoethanol (not shown), a characteristic feature of 125 I-ubiquitin thiol esters (28). Based on radioactivity present in excised bands of the Uba3-125 I-Nedd8 thiol ester (28), the content of Uba3 was estimated to be 340 pmol/unit of whole blood. A HsUbc12-125 I-Nedd8 thiol ester was not detected in these assays, suggesting that endogenous HsUbc12 in erythrocyte Fraction II must be below the limit of detection for the assay (ϳ0.005 pmol). However, when recombinant HsUbc12 was added to Fraction II, an amount of HsUbc12-125 I-Nedd8 thiol ester was formed based on a separate stoichiometric end point assay for active HsUbc12 using recombinant AppBp1-Uba3 heterodimer (not shown). Either human Ubc12 is lost during terminal differentiation (40) or, more likely, is resolved from erythrocyte Fraction II during DEAE cellulose chromatography because the predicted pI of the polypeptide is 7.7. However, erythrocyte Fraction II must otherwise contain a competent Nedd8 ligation pathway because addition of recombinant HsUbc12 and 125 I-Nedd8 to Fraction II resulted in the appearance of an ATP-dependent conjugate band of 85 kDa corresponding in relative mobility to that predicted for modification of Cul1 and/or Cul2 (not shown). 
When the human erythrocyte Fraction II was passed through an Affi-Gel 10 Nedd8 affinity column, the activity forming Uba3 thiol ester was depleted from the unadsorbed Fraction II (Fig. 1A, lane 3). Approximately 28% of the initial activity forming Uba3 thiol ester activity was recovered when the column was eluted with 2 mM AMP and PP i (Fig. 1A, lane 4) and 58% of the initial activity forming Uba3 thiol ester activity was recovered in the pH 9.0 -10 mM DTT eluate (Fig. 1A, lane 5). Resolution of the AMP-PP i and pH 9-DTT eluted samples by reducing SDS-PAGE followed by silver staining revealed two bands of 62 and 51 kDa that were in good agreement with the expected molecular weights of 63,000 and 49,000 for human AppBp1 and Uba3, respectively (Fig. 1B). Interestingly, at higher sample loads we noted that the AppBp1 band showed a markedly lower color yield following Coomassie staining than did the Uba3 band, leading to the erroneous impression that the subunits are present at a non-stoichiometric ratio (not shown). However, subsequent silver staining of the gel showed approximately identical intensities for the two subunits (Fig. 1B), consistent with a 1:1 ratio for active heterodimer. This conclusion was confirmed by a similar difference in color yield on Coomassie staining of normalized thrombin-processed recombinant GST-AppBp1 and GST-Uba3 (not shown). In addition, a 1:1 stoichiometry for AppBp1-Uba3 is consistent with the crystal structure of the heterodimer (27). The AppBp1-Uba3 Heterodimer Forms a Stoichiometric Nedd8 Ternary Complex-Ubiquitin-activating enzyme forms a ternary complex during the activation of ubiquitin that is composed of 1 eq each of tightly bound ubiquitin adenylate and covalent ubiquitin thiol ester (28,29). To determine whether human erythrocyte AppBp1-Uba3 heterodimer forms a similar Nedd8 ternary complex, the stoichiometry of Nedd8 [ 3 H]adenylate and Uba3-125 I-Nedd8 thiol ester was determined in parallel with a quantity of human erythrocyte Uba1 ubiquitinactivating enzyme that produced a silver-stained band following SDS-PAGE resolution of the same intensity as affinity purified erythrocyte HsUba3. Table I AppBp1-Uba3 Heterodimer Catalyzes a Random Addition Mechanism for Nedd8 Activation-Previously, ATP: 32 PP i exchange kinetics have been used to demonstrate that ubiquitinactivating enzyme proceeds through an ordered addition mechanism for which ATP is the leading and ubiquitin the trailing substrate (29). Human AppBp1-Uba3 heterodimer catalyzes an analogous ATP: 32 PP i exchange reaction that is absolutely dependent on the presence of Nedd8 (not shown). At 1 M Nedd8 and 1 mM 32 PP i the concentration dependence of ATP on the initial rate for human erythrocyte AppBp1-Uba3 heterodimer- That the initial isotope exchange rate tends to a limiting value rather than zero at infinite Nedd8 concentration indicates that the mechanism for the AppBp1-Uba3 heterodimer follows a formally random addition mechanism, although there is a preferential order of ATP binding preceding that of Nedd8 based on their relative affinities (41). Transthiolation Kinetics for AppBp1-Uba3 Heterodimer-Work from our laboratory has recently shown that initial rates for the transfer of 125 I-ubiquitin thiol ester from the E1 ternary complex to various E2 isozymes can be used as a facile kinetic assay for determining the intrinsic K d of substrate binding (38) . Fig. 
3 shows an analogous double reciprocal plot for the dependence of 125 I-Nedd8 concentration on the initial rate of HsUbc12 transthiolation catalyzed by the human AppBp1-Uba3 heterodimer. Linearity of the plot demonstrates that the heterodimer conforms to simple hyperbolic kinetics with respect to radiolabeled Nedd8. In addition, observation of strict hyperbolic kinetics over a Nedd8 concentration range for which substrate inhibition is observed for ATP: 32 PP i exchange (Fig. 2) confirms that the latter behavior is a consequence of pseudoordered substrate addition rather than formation of a nonproductive dead end complex. When fitted by non-linear hyperbolic regression analysis, the data of Fig. 3 (29) and the K m of 0.8 Ϯ 0.2 M recently reported for ubiquitin binding to its HsUba1 ortholog by direct transthiolation kinetics (38). Similar affinities for Nedd8 binding to human AppBp1-Uba3 and ubiquitin binding to HsUba1 probably reflects evolutionary constraints placed on the respective enzymes to satisfy the condition for saturation with respect to polypeptide in order for subsequent conjugation steps to remain rate-limiting, as has been discussed previously (42). The dependence of the initial velocity for 125 I-Nedd8 transthiolation with respect to changes in ATP and HsUbc12 concentrations exhibited similar hyperbolic kinetics based on the linearity of their respective double reciprocal plots (not shown). Values of K m and V max , the latter yielding corresponding estimates for k cat , were calculated from non-linear hyperbolic fitting of the data and are summarized in Table II. The K m of 103 Ϯ 12 M for ATP binding to AppBp1-Uba3 (Table II) was considerably higher than the K m of 7.0 Ϯ 1.1 M recently reported for ATP binding to human Uba1 (38); however, Nedd8 activation must remain saturating with respect to the normal cellular ATP concentration of about 5 mM. In contrast, the K m of 43 Ϯ 13 nM for HsUbc12 binding to AppBp1-Uba3 (Table II) is in the range of the K m values of 123 Ϯ 19 and 102 Ϯ 13 nM found for HsUbc2b binding to human and rabbit Uba1 orthologs, respectively (38). The latter correspondence in affinities for AppBp1-Uba3 and HsUba1 binding to their cognate E2 carrier proteins likely reflects selective constraints imposed by the intracellular concentrations of the Ubc paralogs. Interestingly, the k cat of about 3.5 Ϯ 0.2 s Ϫ1 for HsUbc12 transthiolation from the AppBp1-Uba3 ternary complex (Table II) is remarkably close to the values of 4.5 Ϯ 0.3 s Ϫ1 and 4.8 Ϯ 0.2 s Ϫ1 reported for HsUbc2b transthiolation catalyzed by human and rabbit Uba1 orthologs, respectively (38). Concordance in the k cat values for the ubiquitin-and Nedd8-specific enzyme para- logs presumably reflects similar geometries for the transition states of the respective transthiolation reactions. Ala 72 Is an Important Specificity Determinant for Nedd8 Recognition by AppBp1-Uba3 Heterodimer-Of the known ubiquitin-like proteins, Nedd8 is the most similar (58% identity) in sequence to ubiquitin (10); therefore, we were interested in whether ubiquitin could substitute for Nedd8 in the catalytic cycle of the AppBp1-Uba3 heterodimer. In stoichiometry studies similar to those of Table I, we were unable to detect formation of either heterodimer-bound ubiquitin [ 3 H]adenylate or covalent Uba3-125 I-ubiquitin thiol ester (not shown). 
In addition, the more sensitive turnover assay involving heterodimer-catalyzed HsUbc12 transthiolation described under "Materials and Methods" failed to detect any HsUbc12-125 Iubiquitin thiol ester formation after 3 min incubation in the presence of 66 M 125 I-ubiquitin (5000 cpm/pmol) and 20 nM heterodimer. Therefore, AppBp1-Uba3 heterodimer appears to exhibit marked discrimination against ubiquitin as an alternative substrate. Because wild type ubiquitin is not activated by human Ap-pBp1-Uba3 heterodimer, we tested ubiquitin as a competitive inhibitor of 125 I-Nedd8 activation in a coupled HsUbc12 transthiolation assay under App1Bp1-Uba3 limiting conditions. In assays conducted as described under "Materials and Methods" in the presence of 0.2 nM human AppBp1-Uba3 heterodimer, 1 M HsUbc12, and 1 M 125 I-Nedd8, we observed 14% inhibition in the initial rate of HsUbc12 transthiolation in the presence of 100 M wild type ubiquitin. Reasonably assuming competitive inhibition, the observed 14% inhibition corresponds to a K d (measured as K i ) for wild type ubiquitin of about 300 M representing a ⌬⌬G 0 of about 3.4 kcal/mol. Docking studies of Nedd8 with human AppBp1-Uba3, modeled after the analogous interaction of MoaD with MoeB (21), has prompted Walden et al. (27) recently to suggest that Ala 72 of Nedd8 is a critical specificity determinant allowing the heterodimer to distinguish its cognate polypeptide substrate from other ubiquitin-like paralogs. The ability of wild type Nedd8 to support AppBp1-Uba3 catalyzed activation and HsUbc12 transthiolation is significantly ablated by mutating the Uba3 residues Leu 206 and Tyr 207 predicted to interact with Ala 72 of Nedd8 (27). We have previously used the UbR72L point mutant to show that Arg 72 is an important binding determinant for ubiquitin recognition by rabbit reticulocyte Uba1 (33). Although wild type 125 I-ubiquitin is unable to support measurable AppBp1-Uba3 activation or HsUbc12 transthiolation, the data of Fig. 4 demonstrate that the 125 I-UbR72L supports heterodimer catalyzed charging of HsUbc12. The concentration dependence for HsUbc12-125 I-UbR72L thiol ester formation with respect to [ 125 I-UbR72L] o is hyperbolic, demonstrated by the linearity of the reciprocal plot in Fig. 4, and yields a K m of 20 Ϯ 9 M by nonlinear regression analysis. In addition, there is remarkable concordance between the k cat for 125 I-UbR72L of 0.9 Ϯ 0.3 s Ϫ1 calculated from the V max (Fig. 4) and the k cat of 3.5 Ϯ 0.2 s Ϫ1 for 125 I-Nedd8 (Table II). The good agreement suggests that specificity is principally an affinity effect and that the transthiolation step from the AppBp1-Uba3 ternary complex to HsUbc12 otherwise exhibits little discrimination between the two orthologs compared with the initial step of polypeptide binding. DISCUSSION The conjugation of ubiquitin and related ubiquitin-like polypeptides to specific protein targets represents a fundamental and highly conserved strategy of eukaryotic cell regulation (43)(44)(45). These post-translational modifications require distinct yet evolutionarily related enzyme pathways that share a common mechanism in which the half-reactions of activation and ligation are catalyzed by separate enzymes (42, 43). The Uba1 ubiquitin-activating enzyme catalyzes the first step in the conjugation of ubiquitin to protein targets and serves as the archetype for similar steps in the activation of other ubiquitin paralogs that now include Sumo, Nedd8, Hub1, ISG15, FAT10, and Apg12 (10). 
The marked sequence homology between Uba1 and the AppBp1-Uba3 heterodimer required for Nedd8 activation reveals a divergent evolutionary relationship; however, because no activation step for a ubiquitin-like protein has been examined in detail, it has been uncertain whether the similarity in sequences is mirrored by a shared catalytic mechanism. The present studies demonstrate that the marked sequence homology between Uba1 and AppBp1-Uba3 reflects an overall conservation in mechanism. Quantitative stoichiometry studies show for the first time that human AppBp1-Uba3 heterodimer forms a stable ternary complex comprised of equivalent amounts of Nedd8 adenylate and Uba3-Nedd8 thiol ester (Table I). This complex is analogous to the ternary complex originally observed for Uba1 (28). Earlier detection of only trace Rub1 [ 32 P]adenylate formation by the plant heterodimer ortholog Axr1-Ecr1 (30) presumably reflects the exceedingly low yield of active heterodimer formed when reconstituted from individual recombinant subunits (this study). The latter conclusion is supported by the extensive hydrophobic interface between AppBp1 and Uba3 revealed in the recent crystal structure of the human heterodimer (27). Time-dependent reconstitution of Nedd8 activating activity, when monitored by the initial rate of HsUbc12 transthiolation, upon mixing of the separate subunits (this study) most likely reflects a rapid initial subunit association followed by a slower reorganization to yield the native heterodimer. 4. Dependence of 125 I-UbR72L concentration on AppBp1-Uba3 heterodimer-catalyzed HsUbc12 transthiolation. The initial rates of HsUbc12-125 I-UbR72L thiol ester formation were determined at the indicated concentrations of 125 I-UbR72L in assays conducted as described in the legend to Fig. 3 with the exception that incubations were for 3 min. In addition to the conservation in formation of a stable ternary complex, human AppBp1-Uba3 heterodimer is characterized by a pseudo ordered mechanism of substrate addition (Fig. 2). Wild type ubiquitin-activating enzyme possesses an ordered mechanism for substrate binding with ATP serving as the obligatory leading and ubiquitin the obligatory trailing substrate, based on early kinetic isotope exchange studies (29). In such studies, ordered addition is characterized by substrate inhibition at high concentrations of the trailing ligand and a limiting rate tending to zero velocity at infinite concentration, discussed in Ref. 29. In the present study, the dependence of initial ATP: 32 (Fig. 2). Therefore, human AppBp1-Uba3 heterodimer proceeds through a preferentially ordered binding of ATP followed by Nedd8 that resembles that of Uba1. However, because the limiting initial rate at high Nedd8 concentrations tends to a value of 2.2 pmol/min, representing about 4% of the extrapolated V max of 54 Ϯ 1.4 pmol/min, the mechanism of human AppBp1-Uba3 is formally random. Burch and Haas (33) have previously shown that Uba1-dependent activation of a UbR72L point mutant occurs through a random substrate addition mechanism. More recent alanine scanning mutagenesis of ubiquitin identified several additional surface residues that result in purely random addition or pseudo ordered addition, as shown here for the AppBp1-Uba3 catalyzed activation of Nedd8. 
3 These observations indicate that ordered addition is not a structural requisite for the catalytic competence of ubiquitin-activating enzyme but reflects differential binding affinity for ATP and ubiquitin as leading versus trailing substrates. The pseudo ordered mechanism of human AppBp1-Uba3 suggests the relative affinities for ATP versus Nedd8 as leading versus trailing substrate are less constrained than for Uba1 and wild type ubiquitin. Because formation of the AppBp1-Uba3 ternary complex is rapid, direct kinetic studies of substrate binding is technically challenging. However, by exploiting the HsUbc12 transthiolation reaction as a coupled reporter assay, we have been able for the first time to quantitate the affinity of substrate binding to human AppBp1-Uba3 heterodimer. As noted earlier, the affinity of Nedd8 for human AppBp1-Uba3 heterodimer (K m ϭ 0.95 Ϯ 0.18 M) is remarkably similar to the K d of 0.58 M found for ubiquitin binding to rabbit Uba1 in equilibrium studies (29) and the K m of 0.8 Ϯ 0.2 M recently determined from analogous HsUba1-catalyzed HsUbc2b transthiolation kinetics (38). Likewise, the K m of 43 Ϯ 13 nM for HsUbc12 binding to AppBp1-Uba3 heterodimer is in the range of the K m of 123 Ϯ 19 nM for HsUbc2b binding to human ubiquitin-activating enzyme (38). The marked concordance in these substrate affinities probably reflects selective constraints imposed by the steady state concentrations of these ligands within the cell to prevent ubiquitin or Nedd8 activation from becoming rate-limiting in their respective ligation pathways, discussed in Ref. 42. In contrast, we consistently found that the K m for ATP binding to human AppBp1-Uba3 heterodimer (103 Ϯ 12 M, Table II) was considerably larger than the K m of 7.0 Ϯ 1.1 M for binding of the nucleotide to HsUba1 (38). The potential functional consequence of this difference is obscure because cellular ATP concentrations generally range near 5 mM. The crystal structure for the E. coli MoeB-ATP-MoaD ternary complex (21) and ATP docking studies to human AppBp1-Uba3 that was based on the MoeB-ATP-MoaD coordinates (27) identify 12 residues within the conserved nucleotide binding pocket that potentially interact with ATP. In a closer examination of the MoeB-ATP-MoaD structure we find an additional residue, Asp 506 , that is well positioned to hydrogen bond to the ribose ring of ATP or engage in a charge interaction with the ATPchelated Mg 2ϩ . Of these 13 residues, 10 are absolutely conserved among MoeB, human Uba3, the corresponding human Sumo-activating enzyme subunit Uba2, and the human ubiquitin-activating Uba1 (27). Among the remaining three variant positions, all of which interact with the adenine ring (21,27), Uba3 contains Ile 54 in place of the invariant valine present within the other three activating enzyme paralogs; the Ile 127 that is conserved among Uba3, Uba2, and MoeB is replaced by Val 552 in Uba1; and Ser 147 of Uba3 replaces a conserved asparagine in the other three enzymes. Presumably one or more of these substitutions account in part for the observed difference in ATP binding affinity (⌬⌬G 0 ϭ 0.7 kcal/mol) between Ap-pBp1-Uba3 and Uba1. The fidelity of signaling by conjugation of Class I ubiquitinlike proteins requires absolute specificity with respect to the polypeptide, as initially suggested for ISG15 (46). 
Discrimination among the ubiquitin-like proteins must occur at their respective activation steps because the polypeptides are committed to a specific ligation pathway once charge onto the cognate Ubc carrier protein. The present data demonstrates quantitatively that AppBp1-Uba3 exhibits absolute specificity for Nedd8 over ubiquitin. Catalytic specificity is expressed as k cat /K m , the effective second order rate constant. The kinetics for AppBp1-Uba3-catalyzed Nedd8 transthiolation of HsUbc12 yields a k cat /K m ϭ 3.5 ϫ 10 5 M Ϫ1 s Ϫ1 (Table II). Although 125 I-ubiquitin fails to support HsUbc12 thiol ester formation catalyzed by AppBp1-Uba3 heterodimer, the lower limit of detection from the kinetic study (about 10 cpm) allows us to estimate a k second order Յ 700 M Ϫ1 s Ϫ1 for wild type ubiquitin. Therefore, AppBp1-Uba3 heterodimer exhibits a catalytic specificity Ն500-fold for Nedd8 versus wild type ubiquitin. The data of Fig. 4 requires a k cat /K m ϭ 4.2 ϫ 10 3 M Ϫ1 s Ϫ1 for 125 I-UbR72L, reducing the difference in specificity to 83-fold through the contribution at residue 72. Whitby et al. (47) has shown that Arg 72 is also critical in allowing the ubiquitinactivating enzyme to discriminate between ubiquitin and Nedd8. Wild type Nedd8 exhibits low affinity binding to rabbit reticulocyte Uba1 (apparent K d ϭ 182 Ϯ 47 M); however, a Nedd8A72R point mutant binds the activating enzyme in competitive Uba1-125 I-ubiquitin thiol ester assays with an apparent K d of 2.8 Ϯ 0.2 M that is nearly identical to the apparent K d of 2.0 Ϯ 0.2 M for unlabeled ubiquitin (47). The present studies are the first comprehensive examination of the enzymology for Nedd8 activation catalyzed by human AppBp1-Uba3. The data demonstrate quantitatively that the marked sequence conservation between the Nedd8-specific heterodimer and the Uba1 ubiquitin-activating enzyme is mirrored by a conservation in mechanism that includes ternary complex stoichiometry and substrate affinities. In addition, the studies show that human erythrocytes represent a practical source for the facile isolation of this important enzyme reagent.
9,328.2
2003-07-18T00:00:00.000
[ "Biology", "Chemistry" ]
Adaptive Neuron-Like Control of Time-Delay Systems Enhanced with Feedforward and Supervisory Strategies Tracking control of nonlinear systems with significant delay effects has been the focus of intensive research. In this paper, we propose an effective supervised adaptive control scheme to tackle the problem.The scheme is composed of an adaptive control part of two neuron-like models with delay effects and a supervisory control part to enhance robustness against disturbance and model uncertainties. A design methodology based on the Lyapunov analysis is presented. Experimental results obtained from a practical temperature control system show that not only is the design procedure conceptually simple but also the control performance is also excellent when compared with the traditional PD controller. Also, the feedforward term is able to provide extra improvement in the regulation performance. Introduction The study of stability and stabilization for time-delay systems has received considerable attention in recent years [1][2][3][4] since delay is a major source of instability in many important engineering systems [5,6].For instance, Hopf bifurcation caused by time delay is extensively investigated in [7][8][9][10]. The applicability of neural-network-based techniques in nonlinear control systems has been successively demonstrated in [11][12][13] because of their unique modeling capability and adaptability [14,15].However, delay effects are not effectively considered in most of the proposed schemes and modeling error is ignored, which may be a potential source of instability [16].As neural networks have superior capability in the construction of models of complex nonlinear systems, [17][18][19][20] use a feed-forward neural network for model-based predictive control.However, only simulation results are demonstrated in most of these researches.Reference [21] also applied an indirectly derived feedforward term in a simulation study, but the approach is based on predictability of disturbances. In this paper, a particular class of adaptive neural controller is proposed based on a time-delay neural model. Inspired by [22], the model-based adaptation law has two auto-tuning neurons in which both delay effects and feedforward terms are explicitly included, which are not considered in the original contribution.Robustness and stability conditions are derived in the sense of Lyapunov for the design of the proposed adaptation scheme, and performance of the proposed scheme is demonstrated by experimental results of a temperature control system. Controller Design Firstly, we define the desired output as and tracking error as = −.Then we may define = [ , ẏ , . . ., (−1) ] and = [, ė , . . ., (−1) ] .Suppose that we choose a gain vector = [ 0 , 1 , . . 
., −1 ] such that all roots of + −1 −1 + ⋅ ⋅ ⋅ + 1 + 0 = 0 are in the open left-half complex plane.The proposed control law () is given by where ( , ) is an adaptive control law and is a supervisory control law which enhances the robustness of the closed-loop system and improves transient performance by keeping system states stay in some prespecified region.The adaptive control law is defined as where f( ) and ĝ( ) are two neuron-like models: where where The adaptation law, ( 4) and ( 5), is designed to ensure the boundedness of and .Substituting (2) into (1), we have This implies that Let = [ ] and = [0 1×(−1) , 1] be a companion form pair; we may rewrite (10) as Now consider a Lyapunov function candidate where > 0 which satisfies the Lyapunov equation with Q being a positive definite symmetric matrix.In the subsequent derivation, we will choose Q such that min () > 1 with min () being the minimum eigenvalue of .Define where is a positive constant and Hence, if < , we have that ‖‖ < .Moreover, the derivative of along the trajectories of the closed-loop system (11) satisfies where , , are boundary functions for and such that 0 ≤ || ≤ and 0 < ≤ ≤ , then we can guarantee that For the following deviation, we define the modeling error where * and * are the optimal parameters.Then ( 11) can be rewritten using Taylor series expansions as We have with and being the approximation errors of higher order terms.Now consider another Lyapunov function candidate V given by Using ( 20)-( 22), we have Furthermore, as (0), mod (0), (0), 0 , and are bounded, and the projection method of the adaptation laws This guarantees that V < 0, if ‖‖ > ( √ ẽ/√2 −1 min ()).From the two adaptation laws ( 6) and (7), we obtain that if (24) is satisfied, the system (1) is uniformly ultimately bounded (UUB) stable. Experimental Study Temperature control systems are among the nonlinear systems with significant delay effects. The proposed control scheme has been implemented on a prototype temperature control system.The system includes a water tank, a water pump, a resistor heater which serves as disturbance, and four thermal couples, as shown in Figure 1.The 40 cm diameter tank is filled with water to a depth of 60 cm, the pump is driven by a 370 W frequency inverter, and the heater is driven by a SSR power IC.The control objective is to maintain the water temperature around the desired value = 30 ∘ C. Experimental results, shown in Figure 2, demonstrate that, in the face of disturbances, the output fluctuation was within 30 ± 0.42 ∘ C for (1), within 30 ± 0.25 ∘ C for (2), and within 30 ± 0.15 ∘ C using the proposed control scheme.It is clear that the proposed scheme was able to achieve accurate tracking performance in the face of measurable or predictable disturbance.Furthermore, under the condition of immeasurable disturbance, temperature of the adaptively controlled system, the control scheme of (2), suffered from larger fluctuation but is still better than that of the PD controlled system, demonstrating effectiveness of the adaptation for the nonlinear and delayed temperature control system. Conclusion We proposed a simple yet effective adaptive neural control scheme for delayed nonlinear systems.Experimental results validate its effectiveness and show that the feed-forward of disturbance, if available, can achieve further improvements.It is clear that the proposed scheme has an excellent regulation performance when compared with PD control law, and the feedforward term can achieve further improvements. 
Figure 1 :Figure 2 : Figure 1: (a) Structure of the experimental setup.(b) A close view of the installation of heater, cooling pipe, and thermocouples in the water tank (with the cover being opened for observation).A 60 mm thick LDPE (low density polyethylene) foam insulates the tank and forms its cover.
1,505
2013-07-10T00:00:00.000
[ "Engineering", "Computer Science" ]
A Photonic 1 × 4 Power Splitter Based on Multimode Interference in Silicon–Gallium-Nitride Slot Waveguide Structures In this paper, a design for a 1 × 4 optical power splitter based on the multimode interference (MMI) coupler in a silicon (Si)–gallium nitride (GaN) slot waveguide structure is presented—to our knowledge, for the first time. Si and GaN were found as suitable materials for the slot waveguide structure. Numerical optimizations were carried out on the device parameters using the full vectorial-beam propagation method (FV-BPM). Simulation results show that the proposed device can be useful to divide optical signal energy uniformly in the C-band range (1530–1565 nm) into four output ports with low insertion losses (0.07 dB). Introduction Optical power splitters play a crucial role in optical communication systems [1]. These components are significant for bringing the optical fiber to end-users [2]. Multimode interference (MMI)-based devices are important building blocks for photonic integrated circuits due to their simple structure, low excess loss, large optical bandwidth, and low polarization dependence [3,4]. The operation of photonic MMI devices is based on the self-imaging principle [3]. In a multimode waveguide, an input field profile is reproduced in single and multiple images at periodic intervals along the propagation axis of the waveguide, which practically produces self-images at different locations [5]. Slot waveguide structures are based on a combination of low-index material and high-index material [11]. The low-index layer (slot area) is surrounded by two high-index layers that enable the total internal reflection (TIR) effect in order to guide the light into the slot waveguide structure. There are no confinement losses in slot waveguide structures due the strong high power confinement inside the slot area (low-index). Therefore, there is a significant interest in designing a photonic device based on a slot waveguide structure that integrates semiconductor materials. Several studies [12][13][14][15][16][17][18] have been conducted to show the great potential of using a slot waveguide structure to design a photonic device. In addition, the fabrication of slot waveguide structures can be done with CMOS technologies for realizing silicon (Si) photonic chips [19]. Gallium nitride (GaN) has been widely used for integrating optically active nanoscale components with non-photonic devices [20]. For example, GaN components can be grown on epitaxial substrates or can be grown directly on silicon substrates [21]. GaN is well-known for its superior electrical properties, its resistance to temperature, and its potential to cover a wide spectral range. The benefits of using GaN based on conventional waveguides to design optical splitters [22,23] and couplers [24,25] have been demonstrated. Recently, researchers have demonstrated the potential of using GaN based on a slot waveguide for transmitting light in the visible range (400-800 nm) with little transmission loss (0.4-1.0 dB/cm) [26]. A MMI-device-based slot waveguide are very sensitive to the variation of the effective refractive index, which can influence the performance. Therefore, it is better to use a Si-GaN slot waveguide that has a low index difference compared with other materials (e.g., Si-alumina [15] and Si-silica (SiO 2 ) [27]) that were found to be suitable for a MMI-device-based slot waveguide. Thus, it is clear that using the GaN as the slot material can lead to better performances. 
In this paper, we introduce a unique design of a 1ˆ4 power splitter based on an MMI coupler in a Si-GaN slot waveguide structure. Tapered waveguides were integrated into the input/output of the MMI coupler to reduce the excess loss. Numerical investigations were carried out on the geometrical parameters of the device in order to obtain a self-imaging effect, strong power confinements inside the slot area, and uniform splitting of the optical signal at the output ports. The simulations were done using the full vectorial-beam propagation method (FV-BPM) [19]. The device operates at a wavelength of around 1550 nm with an 81-nm full width at half maximum (FWHM). Therefore, this device can be used in an optical networking system to split the energy in the C-band range. Figure 1a shows a schematic sketch of our proposed 1ˆ4 power splitter at the x-y plane. In Figure 1a, the green areas represent pure Si, blue area represents GaN, and the white area represents pure SiO 2 . The refraction index values for the operated wavelength (1550 nm) are n si = 3.48 (Si), n GaN = 2.305 (GaN), and n clad = 1.444 (SiO 2 ). Materials 2016, 9, 516 2 of 8 superior electrical properties, its resistance to temperature, and its potential to cover a wide spectral range. The benefits of using GaN based on conventional waveguides to design optical splitters [22,23] and couplers [24,25] have been demonstrated. Recently, researchers have demonstrated the potential of using GaN based on a slot waveguide for transmitting light in the visible range (400-800 nm) with little transmission loss (0.4-1.0 dB/cm) [26]. A MMI-device-based slot waveguide are very sensitive to the variation of the effective refractive index, which can influence the performance. Therefore, it is better to use a Si-GaN slot waveguide that has a low index difference compared with other materials (e.g., Si-alumina [15] and Si-silica (SiO2) [27]) that were found to be suitable for a MMI-device-based slot waveguide. Thus, it is clear that using the GaN as the slot material can lead to better performances. The 1ˆ4 Power Splitter Structure and the Theoretical Aspect In this paper, we introduce a unique design of a 1 × 4 power splitter based on an MMI coupler in a Si-GaN slot waveguide structure. Tapered waveguides were integrated into the input/output of the MMI coupler to reduce the excess loss. Numerical investigations were carried out on the geometrical parameters of the device in order to obtain a self-imaging effect, strong power confinements inside the slot area, and uniform splitting of the optical signal at the output ports. The simulations were done using the full vectorial-beam propagation method (FV-BPM) [19]. The device operates at a wavelength of around 1550 nm with an 81-nm full width at half maximum (FWHM). Therefore, this device can be used in an optical networking system to split the energy in the C-band range. The optimal geometric values (see Figure 2a,b) that have been found suitable to the Si-GaN slot waveguide structure are HSi = 300 nm (height of Si layer), HGaN = 100 nm (height of GaN layer), and WS = 400 nm (width of layer Si/GaN). These values have been chosen in order to enable a strong confinement of the electric field inside the slot area (see Figure 3). Figure 1b shows a cross-sectional view of the x-z plane. The 1 × 4 power splitter is based on one input taper, four output tapers, one MMI coupler, and four waveguide segments. 
The length of the input/output taper is 2 µm/5 µm, and the width of the input/output taper varies from 0.4-0.6 µm/0.6-0.4 µm, respectively. The optimal geometric values (see Figure 2a,b) that have been found suitable to the Si-GaN slot waveguide structure are H Si = 300 nm (height of Si layer), H GaN = 100 nm (height of GaN layer), and W S = 400 nm (width of layer Si/GaN). These values have been chosen in order to enable a strong confinement of the electric field inside the slot area (see Figure 3). Figure 1b shows a cross-sectional view of the x-z plane. The 1ˆ4 power splitter is based on one input taper, four output tapers, one MMI coupler, and four waveguide segments. The length of the input/output taper is 2 µm/5 µm, and the width of the input/output taper varies from 0.4-0.6 µm/0.6-0.4 µm, respectively. The 1 × 4 Power Splitter Structure and the Theoretical Aspect The width of the segment waveguide is 0.4 µm with a length of 5 µm. The gap width between the two ports at the output is 0.7 µm. The width of the segment waveguide is 0.4 µm with a length of 5 µm. The gap width between the two ports at the output is 0.7 µm. The operation principle of the 1 × 4 MMI power splitter in the Si-GaN slot waveguide structure is based on the self-imaging effect [3] and the TIR effect. We will denote Lπ by the beat length of the first two modes propagating through the MMI. This length can be approximated with [3] where neff is the effective refractive index of the core in the slot waveguide (GaN and Si), which is solved by the FV-BPM. We is the effective width of the MMI coupler, and λ is the operating wavelength. The We for the case of transverse magnetic (TM) mode is given by [3]   The operation principle of the 1 × 4 MMI power splitter in the Si-GaN slot waveguide structure is based on the self-imaging effect [3] and the TIR effect. We will denote Lπ by the beat length of the first two modes propagating through the MMI. This length can be approximated with [3] where neff is the effective refractive index of the core in the slot waveguide (GaN and Si), which is solved by the FV-BPM. We is the effective width of the MMI coupler, and λ is the operating wavelength. The We for the case of transverse magnetic (TM) mode is given by [3]   The operation principle of the 1ˆ4 MMI power splitter in the Si-GaN slot waveguide structure is based on the self-imaging effect [3] and the TIR effect. We will denote L π by the beat length of the first two modes propagating through the MMI. This length can be approximated with [3] where n eff is the effective refractive index of the core in the slot waveguide (GaN and Si), which is solved by the FV-BPM. W e is the effective width of the MMI coupler, and λ is the operating wavelength. The W e for the case of transverse magnetic (TM) mode is given by [3] W e " W MMI`λ πˆn clad n eff˙2 where the width of the MMI coupler is W MMI , and its size is 5 µm. This size has been chosen in order to reduce the L π value. Secondly, this size extremely depends on the number of MMI ports at the output. The length of the MMI coupler is given by [3] where p is a positive integer, and N is the number of ports at the output of the MMI coupler. In our case, p = 1 and N = 4. The value of L MMI can be further optimized by choosing certain geometrical parameters of the MMI waveguide and by optimizing the operation wavelength. 
Simulation Results The simulations of the 1ˆ4 power splitter structure were performed using the FV-BPM based on the RSoft Photonics CAD Suite software (San Jose, CA, USA). Figure 3a,b show the profile mode of the electric fields E y and E x for a wavelength of 1550 nm. Figure 3a shows a strong power confinement (red color) inside the slot area. Therefore, the light can be guided in the proposed design without confinement losses. The normalized power in the slot area is shown in Figure 3a,b as a function of the geometrical parameters H Si and H GaN . The optimal tolerance values of the parameters H Si and H GaN were set between 90% and 100% of the normalized power (black dashed line). Figure 3a,b show that the tolerance values of H Si and H GaN are around 5/6 nm and 4/5 nm, respectively. By solving the major mode E y , we have found the value of n eff that is suitable for the operated wavelength, and its value is 2.89. This value is used to calculate the L π and the W e . Solving Equations (1) and (2) has shown that the values are 64.16 µm and 5.08 µm, respectively. Solving Equation (3) has shown that the value of L MMI is 12.03 µm. This value has been optimized by applying FV-BPM simulations. The optimization of the L MMI and W MMI is shown in Figure 4a,b. It can be noticed from Figure 4a,b that the optimal values of the L MMI and W MMI are 12.3 µm and 5 µm, respectively. These values lead to the best performance of the designed structure. The optimal tolerance values of the parameters L MMI and W MMI were set as in Figure 2. where the width of the MMI coupler is WMMI, and its size is 5 µm. This size has been chosen in order to reduce the Lπ value. Secondly, this size extremely depends on the number of MMI ports at the output. The length of the MMI coupler is given by [3] where p is a positive integer, and N is the number of ports at the output of the MMI coupler. In our case, p = 1 and N = 4. The value of LMMI can be further optimized by choosing certain geometrical parameters of the MMI waveguide and by optimizing the operation wavelength. Simulation Results The simulations of the 1 × 4 power splitter structure were performed using the FV-BPM based on the RSoft Photonics CAD Suite software (San Jose, CA, USA). Figure 3a,b show the profile mode of the electric fields Ey and Ex for a wavelength of 1550 nm. Figure 3a shows a strong power confinement (red color) inside the slot area. Therefore, the light can be guided in the proposed design without confinement losses. The normalized power in the slot area is shown in Figure 3a,b as a function of the geometrical parameters HSi and HGaN. The optimal tolerance values of the parameters HSi and HGaN were set between 90% and 100% of the normalized power (black dashed line). Figure 3a,b show that the tolerance values of HSi and HGaN are around 5/6 nm and 4/5 nm, respectively. By solving the major mode Ey, we have found the value of neff that is suitable for the operated wavelength, and its value is 2.89. This value is used to calculate the Lπ and the We. Solving Equations (1) and (2) has shown that the values are 64.16 µm and 5.08 µm, respectively. Solving Equation (3) has shown that the value of LMMI is 12.03 µm. This value has been optimized by applying FV-BPM simulations. The optimization of the LMMI and WMMI is shown in Figure 4a,b. It can be noticed from Figure 4a,b that the optimal values of the LMMI and WMMI are 12.3 µm and 5 µm, respectively. These values lead to the best performance of the designed structure. 
The optimal tolerance values of the parameters LMMI and WMMI were set as in Figure 2. The absolute value of the propagating electric field inside the 1ˆ4 MMI power splitter at x-z plane is shown in Figure 5a,b. Figure 5a shows that indeed the intensity of the optical signal is equally split at z = 15 µm into four beams guided to the output ports of the MMI coupler. For more clarity, this result is also represented in three dimensions in Figure 5b. The absolute value of the propagating electric field inside the 1 × 4 MMI power splitter at x-z plane is shown in Figure 5a,b. Figure 5a shows that indeed the intensity of the optical signal is equally split at z = 15 µm into four beams guided to the output ports of the MMI coupler. For more clarity, this result is also represented in three dimensions in Figure 5b. In addition, the normalized power of the optical signal (1550 nm) along the propagation axis (z-axis) was analyzed via FV-BPM simulations. Figure 6 shows the distribution of the normalized power value along the propagation of the optical signal in the proposed design. It can be noticed from Figure 6 that the power value in each port is exactly 24.6% of the total power. To examine the performance of the proposed 1 × 4 power splitter, we have calculated the insertion losses, which are given by where Pin is the power in the input waveguide taper, and Pout is the power in the output port. The insertion losses for all four ports are 0.07 dB. Therefore, the transfer energy is almost 100% from the input waveguide taper into all the four output ports. Thus, the proposed 1 × 4 power splitter is energetically efficient. In addition, the normalized power of the optical signal (1550 nm) along the propagation axis (z-axis) was analyzed via FV-BPM simulations. Figure 6 shows the distribution of the normalized power value along the propagation of the optical signal in the proposed design. It can be noticed from Figure 6 that the power value in each port is exactly 24.6% of the total power. The absolute value of the propagating electric field inside the 1 × 4 MMI power splitter at x-z plane is shown in Figure 5a,b. Figure 5a shows that indeed the intensity of the optical signal is equally split at z = 15 µm into four beams guided to the output ports of the MMI coupler. For more clarity, this result is also represented in three dimensions in Figure 5b. In addition, the normalized power of the optical signal (1550 nm) along the propagation axis (z-axis) was analyzed via FV-BPM simulations. Figure 6 shows the distribution of the normalized power value along the propagation of the optical signal in the proposed design. It can be noticed from Figure 6 that the power value in each port is exactly 24.6% of the total power. To examine the performance of the proposed 1 × 4 power splitter, we have calculated the insertion losses, which are given by where Pin is the power in the input waveguide taper, and Pout is the power in the output port. The insertion losses for all four ports are 0.07 dB. Therefore, the transfer energy is almost 100% from the input waveguide taper into all the four output ports. Thus, the proposed 1 × 4 power splitter is energetically efficient. To examine the performance of the proposed 1ˆ4 power splitter, we have calculated the insertion losses, which are given by where P in is the power in the input waveguide taper, and P out is the power in the output port. The insertion losses for all four ports are 0.07 dB. 
Therefore, the transfer energy is almost 100% from the input waveguide taper into all the four output ports. Thus, the proposed 1ˆ4 power splitter is energetically efficient. In addition, a Matlab code combined with FV-BPM simulations was performed to determine the power splitter properties of the designed structure. Figure 7 shows the spectral transmission results for different wavelengths close to the wavelength of the optical signal (1550 nm). In addition, a Matlab code combined with FV-BPM simulations was performed to determine the power splitter properties of the designed structure. Figure 7 shows the spectral transmission results for different wavelengths close to the wavelength of the optical signal (1550 nm). The FWHM of the power spectrum is given by where P is the normalized power. Figure 7 shows that the FWHM for each port is around 81 nm (1510-1591 nm). It can be seen in Figure 7 that the FWHM of the power splitter is suitable for the C-band (1530-1565 nm) range of the optics communication field. Preliminary Fabrication Results We first spin-coated a sensitive E-beam resist (poly-methyl-methaclyrate A4) layer and baked it for 120 s on a hotplate at 180 °C. The desired pattern was created by exposing the resist layer to a high-resolution E-beam (CRESTEC CABLE-9000C lithography system) (Shizuoka, Japan). The exposure level was done with a fine alignment using an accurate marking technique. Then, a thermal evaporation system (Nano 36 vacuum by K.J. Lesker) was used to deposit 50 nm of GaN, as shown in Figure 8a. The sample was immersed in acetone for 3 h in order to remain with the wanted pattern (this technique known in the literature as "lift-off"). The sample was masked again by maintaining the previous alignment step as shown in Figure 8b. Finally, we deposited another 50 nm of intrinsic Si and repeated the "lift-off" etching in order to attain the final structure of the slot waveguide. The FWHM of the power spectrum is given by FWHM " λ 2 pP " 0.5q´λ 1 pP " 0.5q (5) where P is the normalized power. Figure 7 shows that the FWHM for each port is around 81 nm (1510-1591 nm). It can be seen in Figure 7 that the FWHM of the power splitter is suitable for the C-band (1530-1565 nm) range of the optics communication field. Preliminary Fabrication Results We first spin-coated a sensitive E-beam resist (poly-methyl-methaclyrate A4) layer and baked it for 120 s on a hotplate at 180˝C. The desired pattern was created by exposing the resist layer to a high-resolution E-beam (CRESTEC CABLE-9000C lithography system) (Shizuoka, Japan). The exposure level was done with a fine alignment using an accurate marking technique. Then, a thermal evaporation system (Nano 36 vacuum by K.J. Lesker) was used to deposit 50 nm of GaN, as shown in Figure 8a. The sample was immersed in acetone for 3 h in order to remain with the wanted pattern (this technique known in the literature as "lift-off"). The sample was masked again by maintaining the previous alignment step as shown in Figure 8b. Finally, we deposited another 50 nm of intrinsic Si and repeated the "lift-off" etching in order to attain the final structure of the slot waveguide. In addition, a Matlab code combined with FV-BPM simulations was performed to determine the power splitter properties of the designed structure. Figure 7 shows the spectral transmission results for different wavelengths close to the wavelength of the optical signal (1550 nm). The FWHM of the power spectrum is given by where P is the normalized power. 
Figure 7 shows that the FWHM for each port is around 81 nm (1510-1591 nm). It can be seen in Figure 7 that the FWHM of the power splitter is suitable for the C-band (1530-1565 nm) range of the optics communication field. Preliminary Fabrication Results We first spin-coated a sensitive E-beam resist (poly-methyl-methaclyrate A4) layer and baked it for 120 s on a hotplate at 180 °C. The desired pattern was created by exposing the resist layer to a high-resolution E-beam (CRESTEC CABLE-9000C lithography system) (Shizuoka, Japan). The exposure level was done with a fine alignment using an accurate marking technique. Then, a thermal evaporation system (Nano 36 vacuum by K.J. Lesker) was used to deposit 50 nm of GaN, as shown in Figure 8a. The sample was immersed in acetone for 3 h in order to remain with the wanted pattern (this technique known in the literature as "lift-off"). The sample was masked again by maintaining the previous alignment step as shown in Figure 8b. Finally, we deposited another 50 nm of intrinsic Si and repeated the "lift-off" etching in order to attain the final structure of the slot waveguide. Conclusions To conclude, we have shown that a 1ˆ4 power splitter can be implemented on MMI in a Si-GaN slot waveguide structure. Through simulation results, it was shown here that the energy of the optical signal at a wavelength of 1550 nm can be split after a propagation length of about 15 µm, with equal intensity in each output port. The FWHM is 81 nm for each port, and the insertion losses of the proposed device are below 0.07 dB for all four ports. Therefore, this splitter can be used in an optical networking system that works on the entire C-band range. Although only the splitter configuration is considered in this work, the proposed device can also operate as a combiner (coupler) by reversing the direction of the light wave propagation. This design has the potential to integrate with techniques of CMOS technology for realizing a photonic chip due to the use of semiconductor materials (i.e., Si-GaN).
5,555.2
2016-06-25T00:00:00.000
[ "Engineering", "Physics" ]
Comptes Rendus Mathématique . We first interpret Pell’s equation satisfied by Chebyshev polynomials for each degree t , as a certain Positivstellensatz, which then yields for each integer t , what we call a generalized Pell’s equation, satisfied by reciprocals of Christo ff el functions of “degree” 2 t , associated with the equilibrium measure µ of the interval [ − 1,1] and the measure (1 − x 2 )d µ . We next extend this point of view to arbitrary compact basic semi-algebraic set S ⊂ R n and obtain a generalized Pell’s equation (by analogy with the interval [ − 1,1]). Under some conditions, for each t the equation is satisfied by reciprocals of Christo ff el functions of “degree” 2 t associated with (i) the equilibrium measure µ of S and (ii), measures g d µ for an appropriate set of generators g of S . These equations depend on the particular choice of generators that define the set S . In addition to the interval [ − 1,1], we show that for t = 1,2,3, the equations are indeed also satisfied for the equilibrium measures of the 2 D -simplex, the 2 D -Euclidean unit ball and unit box. Interestingly, this view point connects orthogonal polynomials, Christo ff el functions and equilibrium measures on one side, with sum-of-squares, convex optimization and certificates of positivity in real algebraic geometry on another side. Résumé. Nous fournissons d’abord une interprétation particulière de l’équation polynomiale de Pell satis-faite par les polynômes de Chebyshev. Introduction One goal of this paper is to introduce what we call a generalized Pell's equation which, under certains conditions, is satisfied by reciprocals of Christoffel functions associated with (i) the equilibrium measure λ S of a compact basic semi-algebraic set S ⊂ R n , and (ii) associated measures g dλ S , g ∈ G, for an appropriate set G of generators of S. Moreover, checking whether a chosen set G of generators is appropriate, can be done by solving a sequence of convex optimization problems. Another goal is to reveal via the path to obtain the result, strong links between orthogonal polynomials, Christoffel functions and equilibrium measures on one side, and certificates of positivity in real algebraic geometry, optimization and sum-of-squares, as well as a duality result on convex cones by Nesterov, on the other side. The measure µ is called the equilibrium measure associated with the interval [−1, 1]. Next, it turns out that (4) is in fact a particular case of [8,Theorem 17.7] which, rephrased later in the polynomial context by the author in [5,Lemma 4], states that every polynomial p ∈ R[x] (here the constant polynomial p = 2t + 1) in the interior of a certain convex cone, has a distinguished representation in terms of certain SOS. Namely, such SOS are reciprocals of Christoffel functions associated with some rather "intriguing" linear functional φ p ∈ R[x] * associated with p (see [5,Equation (10)]). However in [5,Lemma 4] we did not provide any clue on what is the link between p and φ p . So when S = [−1, 1], (4) tells us that this intriguing linear functional φ p associated with constant polynomials p, is in fact proportional to the (Chebyshev) equilibrium measure dx/π 1 − x 2 of the interval [−1, 1]. So the message of this introductory example is that we can view the polynomial Pell's equation (1) as well as its generalization (4), as algebraic Putinar certificates of increasing degree t = 1, 2, . . . , that the constant polynomials (p = 1 for (1) and p = 2t + 1 for (4)) are positive on the interval [−1, 1]. 
Contribution The goal of this paper is (i) to define a framework that extends the above point of view to the broader context of compact basic semi-algebraic sets, (ii) to provide conditions under which a multivariate analogue of (4) holds, and (iii) to show that indeed (4) holds for t = 1, 2, 3 for the 2D-Euclidean ball, the 2D-unit box, and the 2D-simplex. As we next see, Equation (4) is particularly interesting as it links statistics, orthogonal polynomials and equilibrium measures on one side, with convex optimization and duality, sum-of-squares and algebraic certificates of positivity, on another side. More precisely, with g j ∈ R[x], j = 1, . . . , m, let be compact with nonempty interior. Our contribution is to investigate an appropriate multivariate analogue for S in (5) and its equilibrium measure, of the SOS characterization (4) for the Chebyshev measure dx/π 1 − x 2 on [−1, 1]. Given g ∈ R[x], let t g := ⌈deg(g )/2⌉, and let s(t ) := n+t n . With g 0 = 1, introduce G := {g 0 , g 1 , . . . , g m } and for every t ∈ N, let G t := {g ∈ G : t g ≤ t } (when g ∈ R[x] 2 for all g ∈ G then G t = G for all t ≥ 1). For two polynomials g , h ∈ R[x], we sometimes use the notation g · h for their usual product, when needed to avoid ambiguity. Given a Borel measure φ on S, denote by g · φ, g ∈ G, the measure g dφ on S. Then define the sets respectively called the quadratic module and 2t -truncated quadratic module associated with is convex cone of sum-of-squares polynomials (SOS in short).) (i). We first show that if a Borel probability measure φ on S (with well-defined Christoffel functions Λ g ·φ t , g ∈ G, t ∈ N) satisfies 1 for some t 0 ∈ N, and (S, g · φ) satisfies the Bernstein-Markov property for every g ∈ G, then necessarily φ is the equilibrium measure λ S of S (as defined in e.g. [1]). Notice that (8) is the perfect multivariate analogue of the univariate (4) for S = [−1, 1] and its equilibrium measure φ = dx/π 1 − x 2 ; therefore we propose to name (8) a generalized Pell's equation as it is the analogue of (4) for several polynomials g , and the solutions (1/Λ g ·φ t ) g ∈G are sums-of-squares (and not a single square as in the multivariate Pell's equation [7].) So in this case, for every t ≥ t 0 , as an element of int(Q t (G) * ), the vector of degree-2t moments of the equilibrium measure λ S , is strongly related to the constant polynomial "1" in int(Q t (G)) (which can be viewed as the density of λ S w.r.t. λ S ). Such a situation is likely to hold only for specify cases of sets S (with S = [−1, 1] and λ S = dx/π (1 − x 2 ) being the prototype example). However we also show that in the general case, the vector of degree-2t moments of the equilibrium measure λ S , is still related to the constant polynomial "1" but in a weaker fashion. Namely, let µ t = p * t λ S be the probability measure whose density p * t w.r.t. λ S is the polynomial in the left-hand-side of (8) (with φ = λ S ). Then lim t →∞ µ t = λ S for the weak convergence of probability measures. That is, asymptotically as t grows, and as a density w.r.t. λ S , p * t behaves like the constant density "1" when integrating continuous functions against p * t λ S . (ii). We next provide an if and only if condition on S and its representation (5) so that indeed, for every t ≥ t 0 , there exists a distinguished linear functional φ * 2t ∈ R[x] * 2t , positive on Q t (G), which an analogue of (8) with Christoffel functions Λ g ·φ * 2t t associated with φ * 2t . 
Interestingly, this condition which states that is a question of real algebraic geometry related to a (degree-2t truncated) quadratic module associated with a set G of generators of S. Among all possible sets of generators for a given compact semi-algebraic set S, those G for which (10) holds, deserve to be distinguished. (iii). Next, if condition (10) is satisfied then for every fixed t , the moment vector φ * 2t associated with the linear functional φ * 2t in (ii), is the unique optimal solution of a convex optimization problem (with a "log det" criterion) which can be solved efficiently via off-the-shelf softwares like e.g. CVX [3] or Julia [2]. In fact, (9) is an algebraic "certificate" that condition (10) holds, and even more, (9) and (10) are equivalent. Of course, the larger t is, the larger is the size of the resulting convex optimization problem to solve. Moreover, every (infinite sequence) accumulation point φ * = (φ * α ) α∈N n of the sequence of finite moment-vectors (φ * 2t ) t ∈N associated with the linear functional φ * 2t , is represented by a Borel measure φ on S. Then φ satisfies (8) if and only if the whole sequence (φ * 2t ) t ∈N converges to φ * and the convergence is finite. That is, there exists t 0 ∈ N such that for every t ≥ t 0 , φ is a representing measure of φ * 2t . Equivalently, for every t ≥ t 0 , φ * 2(t +1) is an extension of φ * 2t . In addition, if the measure φ is such that (S, g · φ) satisfies the Bernstein-Markov property for all g ∈ G, then necessarily φ is the equilibrium measure λ S of S (by (i)). Interestingly, this hierarchy of convex optimization problems provides a practical numerical scheme (at least for moderate values of t ) to check whether the (unique) optimal solution φ * 2(t +1) is an extension of φ * 2t , for an arbitrary fixed t ∈ N, which should eventually happen if (8) has ever to hold for the limit measure φ associated with the sequence (φ * 2t ) t ∈N . If φ * 2(t +1) is an extension of φ * 2t for some t , then it is a good indication that indeed (8) may hold with φ * 2t being moments of φ (up to degree 2t ). On the other hand, if φ * 2(t +1) is not an extension of φ * 2t then it may be because (i) there is no limit measure φ that satisfies (8), or (ii) one must wait for a larger t (t ≥ t 0 ) to see a possible "extension", or (iii) perhaps G is not an appropriate set of generators of S. However, in that case it remains to check whether the limit measure φ is still the equilibrium measure of S, and if not, to detect its distinguishing features. Notation and definitions Let R[x] denote the ring of real polynomials in the variables x = (x 1 , . . . , 2t be the convex cone of polynomials of total degree at most 2t which are sum-ofsquares (in short SOS). For a real symmetric matrix A = A T the notation A ⪰ 0 (resp. A ≻ 0) stands for A is positive semidefinite (p.s.d.) (resp. positive definite (p.d.)). The support of a Borel measure µ on R n is the smallest closed set A such that µ(R n \ A) = 0, and such a set A is unique. Denote by C (S) the space of real continuous functions on S. Riesz functional, moment and localizing matrix. With a real sequence , and the moment matrix M t (φ) with rows and columns indexed by N n t (hence of size s(t ) := n+t t ), and with entries , and the localizing matrix associated with φ and g , is the moment matrix associated with the new sequence g · φ. 
The Riesz linear functional g · φ associated with the sequence g · φ satisfies In particular, for any real symmetric A real sequence φ = (φ α ) α∈N n has a representing mesure if its associated linear functional φ is a Borel measure on R n . In this case M t (φ) ⪰ 0 for all t ; the converse is not true in general. In addition, if φ is supported on the set { x ∈ R n : g (x) ≥ 0 } then necessarily M t (g · φ) ⪰ 0 for all t . Christoffel function. Let φ ∈ R[x] * be a Riesz functional (not necessarily with a representing measure) such that M t (φ) ≻ 0. As for Borel measures, we may also define the (degree-t ) Christoffel function is a family of polynomials which are orthonormal with respect to φ, then Similarly, if M t (g · φ) ≻ 0, we may also define the (degree-t ) Christoffel function associated with the Riesz functional g · φ. All the above definitions also hold for finite sequences φ 2t = (φ α ) α∈N n 2t and associated Riesz to all moments up to degree 2t . A compact set S is said to be regular if its associated Siciak's function is continuous everywhere in R n (the same definition also extends to C n ; see [6, Definition 4.4.2, p. 53]). If S is regular and (S, µ) satisfies the Bernstein-Markov property, then uniformly on compact subsets of R n : lim Equilibrium measure. The notion of equilibrium measure associated to a given set, originates from logarithmic potential theory (working in C in the univariate case) to minimize some energy functional. For instance, the equilibrium (Chebsyshev) measure dφ := dx/π 1 − x 2 minimizes the Riesz s-energy functional 1 |x − y| s dµ(x) dµ(y) with s = 2, among all measure µ equivalent to φ. Some generalizations have been obtained in the multivariate case via pluripotential theory in C n . In particular if S ⊂ R n ⊂ C n is compact then the equilibrium measure (let us denote it by λ S ) is equivalent to Lebesgue measure on compact subsets of int(S). It has an even explicit expression if S is convex and symmetric about the origin; see e.g. Bedford , t ∈ N, converges to λ S for the weak-⋆ topology and therefore in particular: (see e.g. [6,Theorem 4.4.4]). In addition, if a compact S ⊂ R n is regular then (S, λ S ) has the Bernstein-Markov property; see [6, p. 59]. For a brief account on equilibrium mesures see the discussion in [6, Section 4-5, pp. 56-60] while for more detailed expositions see some of the references indicated there. Brief summary of main results In Section 2.3, Theorem 2 shows that if a linear functional φ ∈ R[x] * satisfies the multivariate analogue (8) of (4) for S in (5), then under a certain technical assumption, φ is necessarily the equilibrium measure λ S of S. Corollary 4 shows that (8) is also a strong property of orthonormal polynomials associated with λ S , the perfect analogue of (2) for Chebyshev polynomials on S = [−1, 1]. As this strong property is not expected to hold for general sets S in (5), we next show in Theorem 3 that in general, the polynomial associated with λ S (now not necessarily constant (equal to 1) as in Theorem 2) has still a strong property related to the constant polynomial "1". Namely asymptotically, the sequence of probability measures That is, informally, the polynomial density p * t "behaves" asymptotically like the constant (equal to 1) density when integrating continuous functions against p * t λ S . Hence somehow, the vector of degree-2t moments of λ S in the convex cone (Q t (G)) * are still intimately related to the constant polynomial 1 in Q t (G) (but not as directly as in Theorem 2). 
Next, in Section 2.4 we still consider again the constant polynomial 1 and in Theorem 6 we show that under a simple condition, indeed 1 ∈ int(Q t (G) for all t , and therefore there exists a sequence of linear functional (φ 2t ) t ∈N that satisfies (9) for all t . For each t , the linear functional φ 2t is the unique optimal solution of a simple convex optimization problem with log det criterion to maximize. (In addition, in the case when S = {x : g (x) ≥ 0} for some g ∈ R[x], Lemma 9 relates solutions to Pell's equation with φ 2t and S.) In Section 2.5 one is concerned with the asymptotic behavior of the linear functionals (φ 2t ) t ∈N as t grows, and Theorem 10 shows that there exists a limit moment sequence φ which has a representing probability measure φ on S. Moreover φ satisfies (8) and is the equilibrium measure λ S , if and only if finite convergence takes place, that is, for every t ≥ t 0 , φ 2t is the vector of degree-2t moments of φ. So an interesting issue (not treated here) is to relate φ and λ S when the convergence is only asymptotic and not finite. Finally in Section 3 we provide numerical examples of sets S where (8) holds at least for t = 1, 2, 3. Two preliminary results For simplicity of exposition, we will consider sets S in (5) for which the quadratic polynomial x → R − ∥x∥ 2 belongs to Q 1 (G); in particular, S is contained in the Euclidean ball of radius R for some R > 0, and the quadratic module Q(G) is Archimedean; see e.g. [4]. Let λ S be the equilibrium measure of S (as described in e.g. [1]) and recall that g 0 = 1 (so that g 0 · λ S = λ S ). Let C (S) be the space of continuous functions on S. (5) is compact with nonempty interior. Moreover, there exists R > 0 such that the quadratic polynomial x → θ(x) := R − ∥x∥ 2 is an element of Q 1 (G). In other words, h ∈ Q 1 (G) is an "algebraic certificate" that S in (5) is compact. (5), let Assumption 1 hold. Let φ = (φ α ) α∈N n (with φ 0 = 1) be such that M t (g · φ) ≻ 0 for all t ∈ N and all g ∈ G, so that the Christoffel functions Λ g ·φ t are all well defined (recall that φ ∈ R[x] * is the Riesz linear functional associated with the moment sequence φ). In addition, suppose that there exists t 0 ∈ N such that Theorem 2. With S as in Then φ is a Borel measure on S and the unique representing measure of φ. Moreover, if (S, g · φ) satisfies the Bernstein-Markov property for every g ∈ G, then φ = λ S and therefore the Christoffel polynomials (Λ g ·λ S t ) −1 g ∈G t satisfy the generalized Pell's equations: Proof. In view of Assumption 1, the quadratic module Q(G) is Archimedean. Next, as M t (g ·φ) ≻ 0 for all t ∈ N and all g ∈ G, then by Putinar's Positivstellensatz [10], φ has a unique representing measure on S; that is, the Riesz linear functional φ associated with φ is a Borel measure on S. Next, write (15) as and let α ∈ N n be fixed arbitrary. As (S, g · φ) satisfies the Bernstein-Markov property for every g ∈ G, then by [6,Theorem 4.4.4], where λ S is the equilibrium measure of S; see [1,6]. Hence multiplying (17) by x α and integrating w.r.t. φ yields Each term of the product in the above sum of the right-hand-side has a limit as t grows. Moreover G t = G for t sufficiently large. Therefore taking limit as t increases yields As α ∈ N n was arbitrary and S is compact, then necessarily φ = λ S . and is likely to hold only in some specific cases. 
The prototype example is Then indeed (4) is exactly (16), and by analogy with the Chebyshev univariate case, we propose to call Equation (16) a generalized Pell's (polynomial) equation of degree 2t . It is satisfied by the polynomials (g · (Λ g ·λ S t −t g ) −1 ) g ∈G t , all of degree less than 2t . If true for all t , then (S, λ S ) satisfies the generalized Pell's equations for all degrees. Of course, to be valid (16) requires conditions on S and its representation (5) by the polynomials g ∈ G. For instance, as shown in Section 3 below, if S is the 2D-Euclidean unit ball with g = 1 − ∥x∥ 2 , (in which case G t = G 1 for all t ≥ 1), then λ S = dx/(π 1 − ∥x∥ 2 ) and we can show that (16) holds for t = 1, 2, 3. Similarly, if S is the 2D-simplex {x : and we can show that (16) holds for t = 1, 2, 3, for the quadratic generators in G = {g 0 , g 1 , g 2 , g 3 However in the general case we have the following weaker result, still related to Theorem 2. Theorem 3. Let λ S be the equilibrium measure of S and assume that for every g ∈ G, (S, g · λ S ) satisfies the Bernstein-Markov property. For every t , define the polynomial Then the sequence of probability measures (µ t := p * t λ S ) t ≥t 0 converges to λ S for the weak-⋆ topology Proof. The polynomial p * t in (18) is well defined because the matrices M t −t g (g · λ S ) are non singular. Each µ t is a probability measure on S because As (S, g · λ S ) satisfies the Bernstein-Markov property for every g ∈ G, then by [6,Theorem 4.4.4], Hence multiplying (18) by f ∈ C (S) and integrating w.r.t. λ S , yields Each term of the product in the above sum of the right-hand-side has a limit as t grows. Moreover G t = G for t sufficiently large. Therefore taking limit as t increases, yields As S is compact it implies that the sequence of probability measures (µ t ) t ∈N ⊂ M (S) + converges to λ S for the weak-⋆ topology σ(M (S), C (S)) of M (S). □ In other words (and in an informal language), when integrating continuous functions against µ t , the density p * t of µ t w.r.t. λ S behaves asymptotically like the constant (equal to 1) density. That is, Theorem 3 is a more general (but weaker) version of Theorem 2. Corollary 4. Let φ be the Borel measure on S in Theorem 2, and for each g ∈ G, let (P g ·φ α ) α∈N n be a family of polynomials, orthonormal with respect to the measure g · φ. Then for every t ≥ t 0 + 1: Proof. Recalling (12), for each g ∈ G t with t ≥ t 0 + 1: which combined with (15) yields (20). Remark 5. Observe that (20) which states a property satisfied by orthonormal polynomials associated with g · φ, g ∈ G t , is a multivariate and multi-generator analogue of (2), the polynomial Pell's equation satisfied by normalized Chebyshev polynomials. However there are several differences between (20) and (2). In (2), where G = {g } with g = (1−x 2 ) (and so with t g = 1), the triplet ( T t , −g , U t −t g ) is a solution to the polynomial Pell equation C 2 − F H 2 = 1 which involves single squares C 2 and H 2 and a single generator F . On the other hand, (20) addresses the multivariate case with possibly several generators g ∈ G t and in compact form reads g ∈G t g C g = 1 which now involves SOS polynomials (C g ) g ∈G t and several generators g ∈ G t . This is why we think that it is fair to call (20) (as well as (8) A convex optimization problem and its dual In Theorem 2 we have taken for granted existence of a linear functional φ such that its moment sequence φ satisfies (15). 
The next issue is: Given a compact set S as in (5), can we provide such a moment sequence φ? At least, can we define a numerical scheme which provides finite sequences (φ 2t ) t ∈N which "converge" to such a desirable φ as t grows? As we next see, this issue essentially translates to the following simple issue in real algebraic geometry. Do we have 1 ∈ int(Q t (G)) for every t ∈ N? If the answer is yes then indeed such a φ exists. But then the associated linear functional φ will satisfy (15) only if the convergence is finite. Moreover the conditions can be checked by solving a sequence of convex optimization problems described in the next section. Theorem 6. With t ∈ N fixed, Problems (21) and (22) have same finite optimal value ρ t = ρ * t if and only if 1 ∈ int(Q t (G)). Then both have a unique optimal solution φ * 2t ∈ R s(2t ) and (Q * g ) g ∈G t respectively, which satisfy Q * g = M t −t g (g · φ * 2t ) −1 for all g ∈ G t . Therefore Proof. For every fixed t , the convex cone Q t (G) is a particular case of the convex cone K (q) investigated in Nesterov [8, p. 415, Section 2.2] when the functional system {v(x)} in [8] is the set of monomials (x α ) α∈N n 2t and the functions (q 1 , . . . ,q l ) are our polynomials g in G t . Then By [8,Theorem 17.7] p ∈ int(K (q 1 , . . . ,q l )) if and only if p = for some unique φ p ∈ K (q 1 , . . . ,q l ) * . In addition, letting Q g := M t −t g (g · φ p ) −1 , g ∈ G t , the sequence (Q g ) g ∈G t is the unique solution of (22), with p instead of g ∈G t s(t −t g ) in the left-handside of the constraint. Therefore, by [8,Theorem 17.7] for the constant polynomial p = 1, for some distinguished φ ∈ Q t (G) * . Then as 1 ∈ int(Q t (G)) for every t , letting p be the constant polynomial g ∈G t s(t − t g ), one obtains for some unique φ * 2t ∈ Q t (G) * , and Q * g := M t −t g (g · φ * 2t ) −1 , g ∈ G t , is the unique optimal solution of (22). Next, φ * 2t is a feasible solution of (21), and We next prove weak duality, i.e., ρ * t ≤ ρ t , so that φ * 2t (resp. (Q * g ) g ∈G t ) is the unique optimal solution of (21) (resp. (22)) and ρ t = ρ * t . So let φ 2t (resp. (Q g ) g ∈G t ) be an arbitrary feasible solution of (21) (resp. (22)). Then by Lemma 12, for every g ∈ G t , s(t − t g ) + log det(M t −t g (g · φ 2t )) + log det(Q g ) ≤ 〈M t −t g (g · φ 2t ), Q g 〉 . In addition, as φ 2t (1) = 1 from which we deduce weak duality, that is, □ So as one can see, (23) is a multivariate analogue of (4). Crucial in Theorem 6 is the condition 1 ∈ int(Q t (G)) for all t . Below is a simple sufficient condition. (5) with G = {g 0 , g 1 , . . . , g m }, and let Assumption 1 hold. Then 1 ∈ int(Q t (G)) for every t . Lemma 7. Let S be as in For clarity of exposition the proof is postponed to Section 3.1. Lemma 9. Let g ∈ R[x] of even degree be fixed, G := {g }, and suppose that there are two polynomials of even degree p ∈ int(Σ t ), and q ∈ int(Σ t −t g ) such that p + g q = 1. Then there exists a linear In particular with g ∈ R[x] fixed: If there exist polynomials , then (24) holds for some φ ∈ int(Q t (G) * ). Proof. Let G := {g } and let Q t (G) be as in (7). As p ∈ int(Σ t ), and q ∈ int(Σ t −t g ), 1 = p + g q ∈ int(Q t (G)) and by [5,Lemma 4], (24) holds. The second statement is a direct consequence by taking p = i ∈I C 2 i and q = i ∈I H 2 i . □ So Lemma 9 states that if the triple (p, g , q) solve the generalized Pell's equation p + g q = 1, with p ∈ int(Σ t ) and q ∈ int(Σ t −t g ), then p (resp. 
q) is the Christoffel polynomial ( An asymptotic result We now consider asymptotics for the sequence (φ * 2t ) t ∈N obtained in Theorem 6, as t grows. Theorem 10. Under Assumption 1, let φ * 2t be an optimal solution of (21), t ∈ N, guaranteed to exist by Theorem 6. Then: (i) The sequence (φ * 2t ) t ∈N has accumulation points, and for each converging subsequence (t k ) k∈N , (φ * 2t k ) k∈N converges pointwise to the vector φ = (φ α ) α∈N n of moments of some probability measure φ on S, that is, (ii) A limit probability measure φ as in (i) satisfies (15) if and only if the whole sequence (φ * 2t ) t ∈N converges to φ and finite convergence takes place. That is, there exists t 0 such that for all t ≥ t 0 , and so φ is a representing measure of φ * 2t for all t ≥ t 0 . In addition, under the condition of Theorem 2, φ is the equilibrium measure λ S of S. Proof. By completing with zeros, the finite sequence φ * 2t is viewed as an infinite sequence indexed by N n . Then by a standard argument involving scaling and the σ(ℓ ∞ , ℓ 1 ) weak-⋆ topology, the sequence (φ * 2t ) t ∈N has accumulation points and for each subsequence (t k ) k∈N converging to some φ ∈ N n , one obtains the pointwise convergence lim k→∞ (φ * 2t k ) α = φ α , for every α ∈ N n . Next, let d ∈ N and g ∈ G be fixed, arbitrary. Observe that ⪰ 0 as k increases. As Q(G) is Archimedean, then by Putinar's Positivstellensatz [10], φ is a Borel probability measure on S (as φ * 2t k (1) = 1 for all k). (ii). Let φ be as in (i) and suppose that φ satisfies (15). Then for each t ≥ t 0 , the vector φ 2t = (φ α ) α∈N n 2t is an optimal solution of (21), and by uniqueness, φ 2t = φ * 2t . That is, φ is a representing measure for φ * 2t for all t ≥ t 0 . But this implies that φ * 2(t +1) is an extension of φ * 2t for all t ≥ t 0 , and therefore the whole sequence converges to φ, and the convergence is finite. Conversely, if finite convergence takes place, that is, if φ * 2(t +1) is an extension of φ * 2t for all t ≥ t 0 , then φ in (i) is the unique accumulation point and its associated measure φ satisfies (15). Finally, if (S, g · φ) satisfies the Bernstein-Markov property for all g ∈ G, then by Theorem 2, φ = λ S , which concludes the proof. □ Remark 11. Theorem 10 provides a simple test to detect whether the set G of generators of S is a good one, and if so, a numerical scheme to compute moments of the equilibrium measure λ S of S. Indeed if (26) has to hold for the equilibrium measure λ S , then necessarily, the unique optimal solution φ * 2(t +1) of (21) for t + 1 must be an extension of the unique optimal solution φ * 2t of (21) for t , whenever t is sufficiently large. So for instance, if one observes that φ * 2 is an extension of φ * 1 after solving (21) for t = 1 and t = 2, then it already provides a good indication that finite convergence may indeed take place. Discussion There are several issues that are worth investigating. The first one is to completely validate our result for t > 3, for the cases where S is the unit box, the Euclidean unit ball, and the simplex. One possibility is to use Corollary 4 for each degree t , which only requires to show (20) (a property of orthonormal polynomials associated with the measures (g · λ S ) g ∈G ) as we did on some of the above examples. Another issue is to investigate what is a distinguishing feature of the limit measure φ in Theorem 10 when φ does not satisfy the generalized Pell's equation (8). Could φ still be the equilibrium measure of S? 
with a g ≥ 0 and A 0 ⪰ 0, to obtain , and therefore (R + 1) t ∈ int(Q t (G)). □
8,083.6
2002-01-01T00:00:00.000
[ "Mathematics" ]
Firing salts method for the synthesis of orthorhombic Gd2TiO5: experimental characterization supported by DFT first principles calculations This work presents the synthesis by the new ‘firing salts method’ (FSM) of orthorhombic Gd 2 TiO 5 which requires only two hours at 1200 °C. X Ray Diffraction, High Resolution Transmission Electron Microscopy, Fourier Transform Infrared Spectroscopy and Raman spectroscopy characterized it. Electron microscopy shows particle size distribution between 50–500 nm with orthorhombic structure according to Rietveld analysis. Raman—infrared spectroscopies and first principles calculations indicate that the second order contribution to the spectra comes from the Ti-O5 interactions. First principles calculations (Density Functional Theory) were used as an aid for the interpretation of the experimental results to assign the normal modes to the bands on the Raman and IR spectra; it also provided an insight of the chemical reactivity of the synthesis. Introduction Rare Earth titanates are a wide family of compounds that depending on its chemical composition and Rare Earth (RE) elements, synthesize mainly in two stable different crystal structures: cubic pyrochlore RE 2 Ti 2 O 7 , or orthorhombic RE 2 TiO 5 .The Ti oxygen coordination has the most important contribution to the physical properties since the RE 3+ sublattice is almost the same in both systems.The pyrochlore structure occurs when the ionic radius ratio (R A /R Ti 1.2); it can be understood in terms of interpenetrated networks of TiO6 octahedra and RE2O chains of distorted cubes, a high ordered system with a cubic Fd-3m symmetry that belongs to the No. 227 space group.It has very interesting magnetic properties -frustrated magnetism [1]; and transport properties like ferroelectricity and fast ionic conductivity [2].When the chemical composition changes to a less ordered system like that of the RE 2 TiO 5 , it crystallizes in a stable orthorhombic structure with Pnma symmetry belonging to the No. 62 space group; where the RE 3+ cations occupy two 7-fold sites, forming a distorted cube because of one oxygen is missing in the polyhedral while the remaining oxygen atoms are slightly rearranged to compensate from their ideal cubic position [3].On the other hand, there are four Ti 4+ cations with five-fold oxygen coordination, giving rise to an offset square based pyramidal polyhedral.The orthorhombic RE 2 TiO 5 compound have no mixed occupancy in the different 4c Wyckoff symmetry sites [3][4][5].These changes allow them to have a wide range of important applications from high permeability dielectrics used in memory devices [6], biomedical applications [7], luminescence [8,9], nuclear reactor control rod materials or as a neutron absorber [3,10,11]. 
There are several methods for the synthesis of RE titanates.From the solid state reaction of the RE 2 O 3 and TiO 2 reagent materials, with thermal treatments between 1200 °C to 1500 °C and time intervals from 24 to 72 h with more than two intermediate annealing processes [3,4,12,13], or the wet chemistry methods besides the use of RE 2 O 3 as reagent material it is also employed RE(NO 3 ) 3 6H 2 O, citric acid as chelating agent, nitric acid (HNO 3 ) and different materials as titanium ions precursors as for example titanium metal (Ti°), titanium isopropoxide or tetrabutyl titanate.The thermal treatments are focused in two different steps: first, the gel formation with temperatures between 70 °C to 400 °C with time intervals between 1 to 18 h.The second step consists in the annealing of the solid obtained from the gel calcination with thermal treatments between 1100 °C to 1400 °Cwith time intervals between 6 to 24 h [8,9,13].As it could be seen, both synthesis methods use long thermal -high temperature treatments; additionally, the wet chemistry methodologies have water polluted with chemical reagents as a byproduct.In recent years, the need of cleaner and sustainable methodologies or the socalled green chemistry methods aim to avoid or diminish the waste reagents production after chemical reactions.The Molten Salts Method (MSM) is an alternative route for the synthesis of complex oxide materials which has been successfully tested for the formation of perovskite, sillenite and pyrochlore compounds [14][15][16][17][18][19] which also offers a high scalable material production.The synthesis process uses a mixture of convenient salts as a reaction media for the constituent metal oxide precursors that enhance time-saving formation process; therefore, the products achieved after the reaction are the complex oxide material and saline water [14].This method is quite effective for the RE titanates with pyrochlore structure [20], but for the orthorhombic RE 2 TiO 5 it is only after repeated grinding and heating processes that a pure orthorhombic phase is obtained; as in all the syntheses methods, the samples have the presence of the RE 2 Ti 2 O 7 (pyrochlore) impurity which affects the transport properties of the samples.This behavior can be explained in terms of the chemical reactivity for both materials: The rate of formation of the orthorhombic phase is twice the rate of TiO 2 consumption, so the pyrochlore phase is easily produced.Thus, for the solid-state reaction, for example, the synthesis of RE 2 TiO 5 goes thru several stages of heating and grinding of the form: The result usually is: In this work, the synthesis of Gd 2 TiO 5 is presented by a novel method where a mixture of salts (NaCl-KCl) and reactive oxides (Gd 2 O 3 and TiO 2 ) are heated way beyond the melting-sublimation point of the salts, and its experimental and theoretical characterization, to provide useful information to their plausible potential applications and for comparison with previous reported parameters assessed with different synthesis methodologies.The mixture of salts must be such that the difference between melting-sublimation and the reaction temperatures allows a broad thermalization that permits the completeness of the reaction before all salts get evaporated. 
This work also explores the high temperature firing of the mixture salts by assessing the reaction kinetics.First principles calculations for the orthorhombic Gd 2 TiO 5 , the pyrochlore Gd 2 Ti 2 O 7 , the precursor oxides cubic Gd 2 O 3 , anatase TiO 2 and the individual crystalline materials (Gd hexagonal (hcp), Ti hexagonal (hcp) and the O 2 molecular crystal (tetragonal)) were performed to obtain the vibrational spectra of the titanates and precursor oxides, and their formation energies and reaction enthalpies.X-ray powder diffraction (XRD), Raman and Infrared (IR) spectroscopy, High Resolution Electron Microscopy (HREM) were performed to define the conditions of the orthorhombic formation. Materials and methods The firing salts synthesis process exploits the heath of sublimation of a mixture of salts to promote the formation of a higher temperature phase of rare earth titanates (RETiO 5 ) against the favored by the chemical reactivity one (RE 2 Ti 2 O 7 ).Reactive oxides, Gd 2 O 3 (Sigma-Aldrich > 99%) and TiO 2 (Sigma-Aldrich 99.99%) were mixed into stoichiometric proportion with an equimolar mixture of salts NaCl-KCl (Sigma-Aldrich 99.9%) with a total molar proportion of 7:1 between the salts and oxide powder reactants.The salts and precursors were grounded in an agate mortar into a fine homogeneous powder and heated at 1200 °C for 2 h, without a ramping heat rate in a something like a shocking thermal treatment and then quenched in air.The obtained product was washed and stirred in deionized water to dissolve the remaining unevaporated salts and filtered with a 0.22 μm pore nitrocellulose filter, to finally dry in air. The x ray Diffraction (XRD) were measured at room temperature with a Bruker D8 diffractometer (Cu Kα radiation and a Ni filter) from 10°to 90°with steps of 0.02°in 2θ.Rietveld analysis of XRD pattern was carried out with the implementation of the MAUD software [21].SEM images were acquired with an ultra-high-resolution electron microscope JEOL JSM-7800F at 5 kV acceleration voltage with magnification of ×12,000 and ×50,000.High Resolution Transmission Electron Microscopy (HRTEM) images were acquired with a JEOL TEM-2010 FEG electron microscope with a 200 keV accelerating voltage and a point resolution of 0.19 nm.Measurements of the interplanar distance were performed in HRTEM images using the Fast Fourier Transform (FFT) in Digital Micrograph software from GATAN.Raman spectroscopy was measured with an Aseqinstruments Rm1 confocal Raman spectrometer.Fourier Transform Infrared Spectroscopy (FTIR) was measured in the range from 370 to 4000 cm −1 with a Bruker VERTEX 70 v FTIR spectrometer. 
Computational details Computational calculations were performed within the Density Functional Theory (DFT) framework [22,23], as implemented in Quantum Espresso [24][25][26], under the generalized gradient approximation (GGA) with the Perdew-Burke-Ernserhof (PBE) exchange-correlation functional [27,28].The wave functions were expanded in a basis set through the Projector Augmented Wave pseudopotential method with cut-off energy of 1224 eV and a k-point sampling inside the first Brillouin zone constructed using the Monkhorst-Pack scheme with 8 × 8 × 8 grids [29], considering as valence electrons 4f 7 5d 1 6s 2 for gadolinium, 3s 2 4s 2 3p 6 3d 2 for titanium and 2s 2 2p 4 for oxygen.The equilibrium properties were obtained via the geometry optimization process in the Broyden-Fletcher-Goldfarb-Shanno (BFGS) minimization scheme, where system total energy and hydrostatic pressure were used as equilibrium parameters.The Gd 2 TiO 5 material were modeled with Pnma space group No. 62 symmetry with 2 gadolinium atoms in two independent Wyckoff sites (4c), 4 titanium atoms are in one independent Wyckoff position (4c) and 5 oxygen atoms in one independent Wyckoff site (4c), figure 1.For the Gd 2 TiO 5 geometry relaxation, the initial lattice parameters were measured experimentally from the Rietveld refinement.Although Gd and Ti have f and d electrons so the on-site and inter-site interaction should be considered, we only needed the energy differences to stablish energy trends between them.In this cases there was no need to consider the use of a Hubbard model or other approximations. Results and discussion The Rietveld refinement, figure 2, confirms the formation of the almost pure orthorhombic Gd 2 TiO 5 phase with lattice parameters a = 10.489Å, b = 3.761 Å and c = 11.321Å; in concordance with those previous reported values for the orthorhombic Gd 2 TiO 5 phase (table 1) [4,12,30], and with those calculated from the geometric optimization.According to the Rietveld results, the most distorted cation coordination is around the Gd1 site, because of the short interatomic Gd-O1 distance (1.794 Å), with a 30% difference between the 7-fold different oxygen ions while the Ti cation, tetrahedrally coordinated, have a maximal 10% difference between Ti-O2 compared against the different oxygen interatomic distances, as shown in table 1. The 4% contribution of the Gd 2 Ti 2 O 7 pyrochlore phase on the x-ray diffractogram is an indication of the chemical reactivity of this phase over the Gd 2 TiO 5 one.This trend has been observed in a wide range of synthesis methods and thermodynamic conditions.Table 2 shows the energies DU, the reactive energy of the oxides ( ) DH 0 of both phases and the constituent atoms to calculate the cohesive energy ( ) E c and enthalpies of formation ( ) D H , r 0 in order to elucidate the role of the 'firing salts method' in the promotion of the orthorhombic Gd 2 TiO 5 phase compared against Gd 2 Ti 2 O 7 pyrochlore. 
In terms of the cohesive energy, defined as the heat of sublimation of a solid into its constituents, given by: Where DU Gd TiO 2 5 and DU Gd Ti O 2 2 7 are the energies per atom of the crystals at equilibrium while U , Gd U Ti and U O2 are the energies of the isolated constituent atoms; the related results, shown in table 2, indicates that both phases are very stable with the pyrochlore one being a little more stable than the orthorombic one.However, a completely different trend happens with the heat of formation, calculated at equilibrium, given by the where 2 3 and DH , TiO 0 2 are the formation enthalpies of the orthorhombic, pyrochlore, gadolinium and titanium oxides at DFT equilibrium.Although the computed results in table 2 show that the heat of formation of the pyrochlore phase is 75% larger than the orthorhombic phase, the former requires less time and temperature than the later.This behavior can be explained in terms of the chemical reactivity for both materials: As already mentioned in the introduction, the rate of formation of the orthorhombic phase is twice the rate of TiO 2 consumption, so it has a rate of formation larger than the pyrochlore. Therefore, it could be stated that solid state reaction and low temperature synthesis conditions promotes the formation of the pyrochlore phase, however, if the system is out of the thermodynamic equilibrium, the more stable structure is the Gd 2 TiO 5 .This is in concordance with the FSM that it is something like a shocking thermal treatment, where the reaction takes place at temperatures far away from the 660 °C melting temperature of the eutectic NaCl -KCl mixture.Therefore, the sublimation of the salt media increases drastically the heat of the reaction, promoting the twice formation rate of the Gd 2 TiO 5 , as it is confirmed experimentally, where the remaining Gd 2 Ti 2 O 7 phase comes from the highly formation stability of the compound. According to the Rietveld analysis, the mean size of polycrystalline aggregates is 386 nm while scanning electron microscopy (figure 3) together with transmission electron microscopy, figure 4, suggests the presence of a dispersed size distribution of particles comprised between 50-500 nm of a well faceted morphology with a microstructure of the particles corresponding with orthorhombic crystal symmetry.The Fast Fourier transform (FFT) analysis of the HRTEM micrograph, figure 5, evidences the presence of three plane contributions ( ̅ ) 320 , ( ) 230 and ( ̅ ) 1 50 linked to the inter-planar distances 0.308 nm, 0.298 nm and 0.021 nm, respectively: all of them in good correlation with the orthorhombic No. 62 space group with zone axis [ ] 001 .This is an indication that the crystal growth and that the nucleation process occurs in the three principal lattice directions. 
The profile of the particles allows to elucidate the characteristics related with the synthesis reaction.The 'firing salts method' requires high temperature (1200 °C) with a relative short-time calcination treatment, for the formation of the Gd 2 TiO 5 compound, with particle growth starting in the nanoscale regime.This behavior could be understood in terms of low solubility of Gd 2 O 3 in the molten chloride salts; this is assumed by the long thermal treatments necessary to the formation of Gd-based complexes synthesized by molten salt method [15,31].Therefore, the first reaction step involves the solubilization of the reagents to posteriorly initiate a slow nucleation process (at least 2 h), allowing the formation of faceted particles with orthorhombic geometry. The experimental vibrational Raman and IR spectra together with the most representative calculated Raman -IR vibrational energies are shown at figure 6, and in table 3; it is noticed the good correlation between experimental (solid line) and computed DFT data (dashed lines).As it was established before for the A 2 TiO 5 ( A = Y, Dy, Tb, Gd, Sm, Nd, Pr) compounds, the main contribution bands are according to the irreducible 3 [32,33]; at which the strong Raman (B 2g −796 cm −1 ) and IR (B 1u −847 cm −1 ) bands are assigned to Ti-O2 stretching vibrational modes which is, according to the Rietveld refinement, the most asymmetric interatomic distance in the 5-fold Ti oxygen coordination.The 660 cm −1 (Raman-B2g) and 582 cm −1 (IR-B 3u ) are linked to the tetrahedral base Ti-O asymmetric stretching, while Raman (A g -B 3g ) and IR (A u -B u ) bands included from 300 to 500 cm −1 correspond to antisymmetric bending modes of the square planar Ti-O bonds.Below the 300 cm −1 regions, there are five Raman bands and one IR band; all of them related to the trivalent Gd cations with the narrowed B 3g (Raman − 247 cm −1 ) and B 3u (IR − 392 cm −1 ) being related to bending modes of the Gd1-O1 interatomic distance in the 7-fold coordination [32]. The width and intensity of the band at ∼596 cm −1 , not present in the calculated vibrational modes, figure 7, is a combination band from the Ti-O5 coordinated group as suggested by the correlation of the Ti-O5: C 4v point group to the D 2h factor group: ; since the direct product of the D 2h factor group is such that g xg = u xu = g, we have assigned it as a symmetric stretching Ti-O vibration with A g symmetry as the result of the most likely combination of two stretching Ti-O vibrations. Conclusion Gd 2 TiO 5 was successfully synthesized with a minimum remnant pyrochlore phase by the firing salts method which requires less energy and time than typical solid-state reaction.The reduced reaction times produces growth grain sizes starting in the nanoscale region which favors high strength components, suitable for applications in nuclear reactor control rod materials or as neutron absorbers.DFT calculations supports the experimental results and clarify the origin of the second order Raman bands that indicates the phosphor behavior for luminescence centers.Also, the first principles calculations provide an insight of the formation process.The high scalable FSM of Gd 2 TiO 5 together with its characterization opens the research to RE-titanates and doped systems with many possible future applications. Figure 1 . 
Figure 1.Crystal structure of the Pnma Gd 2 TiO 5 material with the different oxygen labeled assigned coordinating the Gd and Ti cations.The structure was obtained with the Diamond Software from Crystal Impact version 3.0 T.M. . Figure 4 . Figure 4. Transmission electron micrograph from different regions of the Gd 2 TiO 5 synthesized compound. Figure 6 . Figure 6.Raman and FTIR spectra of the Gd 2 TiO 5 compound.The dotted lines are the calculated DFT vibrational contributions. Figure 7 . Figure 7.Comparison between DFT calculated and experimental IR and Raman active modes. Table 2 . Internal energy per atom ( ) DU and its constituents' atoms, cohesive energy (E c ) and enthalpy of formation (D H
4,205.6
2024-05-20T00:00:00.000
[ "Materials Science", "Physics" ]
Artificial Neural Network Model for Prediction of Tool Tip Temperature and Analysis Technological improvements put computer systems in the center of our life and various scientific disciplines. These can range from controlling a device in our home to public institutions and the industry. One of these disciplines is a sub-area in mechanical engineering called machining is concerned with not only mechanical systems but also computer aided systems. Artificial Neural Networks -an area of artificial intelligencewhich is concerned with learning and decision making of computers is a field that scientists are very interested in. In this study, an Artificial Neural Network system was designed for predicting the temperature at the tool tip in the machining process. In the metal cutting process, tool tip temperature is one of the conditions that must be identified, analyzed and monitored. For this purpose, an ANN model was developed to determine the tool tip temperature in the turning process. In the designed ANN model, parameters consisting of three inputs and one output were used. The three input variables were rake angle (γ-o), approaching angle (χ-o), feedrate (fmm/rev) respectively. The output parameter was the tool tip temperature (T-0C). The most appropriate model was determined according to Mean Squared Error ratio. In the test phase of the Artificial Neural Network, the smallest Mean Squared Error was obtained with the Artificial Neural Network topology formed as 3-4-1. In this Artificial Neural Network model, calculations were Mean Squared Error=0.00144, R2=0.9956 (absolute fraction of variance) in the training phase and Mean Squared Error=0.00231, R2=0.9954 in the test phase. The results show that the designed Artificial Neural Network model can be used for predicting and analyzing tool tip temperature. Introduction Computer-aided applications have found usage in one of the common disciplines of technology that is the mechanical engineering and it's sub-field the machining. A workpiece in machining is produced by removing parts of various sizes using cutting and machine tools suitable for the process. In this process, with the help of appropriate parameters, the most important objective is producing parts that are undamaged, smooth and high quality. In the manufacturing technology, the main factor affecting the usability and cost of the material is the metal cutting process. Proper cutting conditions are created with different situations in the tool geometry and the process yield is aimed to be increased at the maximum level with the help of the quality of the produced workpiece. It is very difficult to develop a comprehensive model that includes all cutting parameters and tool geometry [1,2,3,4]. Computers are initially developed to transfer electronic data or to perform complex calculations and now they can gather information about events, make decisions and even learn about the relationships between events. Complex problems that are very hard or even impossible to solve can be solved via approaches that are heuristic and considered in artificial intelligence. These solutions could be developed into a model. Along with computer technology, artificial intelligence is constantly growing, and new approaches are emerging in every area. Artificial Neural Networks -an area of artificial intelligence-which is concerned with learning and decision making of computers is a field that scientists continue to be very interested in [2,3]. 
As technology develops, there are positive improvements in machining industry about production cost, productive and quality. It is important that produced workpiece is of good quality, smooth and undamaged during the machining process that is shaping the workpiece on cutting tool. Cutting parameters and tool geometry must be determined for the production cost. Tool geometry variables (rake angle, approaching angle etc.) and cutting parameters (feedrate, cutting speed, depth of cut) affect the temperature and cutting force on the tool during the process. The changes in these parameters cause the tool tip temperature (TTT) values on the tool surface to change. It is therefore important that the TTT value is known and monitored. The sliding of sawdust on the surface of the cutting tool occurs under high pressures and therefore high temperatures occur on the tool surface due to the magnitude of the cutting force and the friction. Factors such as gradual deformations in the cutting tool geometry, fracture of the cutting edge due to the instantaneous high forces, plastic deformation in the cutting tool due to high temperature and stresses are the reasons for the cutting tool to lose its cutting ability. The cutting temperatures, especially the maximum temperature at the surface between tool and sawdust are also important in terms of tool life. Cost of production, productivity and the quality are also related with the monitoring of TTT. TTT can be measured by sensors (experimentally) during the lifting process and can be transferred to the computer via cards. However, these measurements may not be at the desired accuracy due to some negativity during the process. There can also be heat dissipation. In other words, TTT detection can be difficult to determine even after measurements. Therefore, different mathematical approaches can be used to calculate the surface temperature correctly [1, 4, 6,7,8,9,10]. In metal cutting, creating a model for determining TTT that includes all the cutting parameters and tool geometry is very difficult for such a non-linear and complex field. In such difficulties, it would be a wise solution to develop models that use artificial intelligence techniques such as artificial neural networks or fuzzy expert systems etc. Inspired by human brain functions, Artificial Neural Network (ANN) can learn and generalize through testing. One of the important areas where ANN is used is estimation. ANN may reveal unknown or hard to perceive relationships between data. Many studies show that ANN is as widely used as conventional methods and gives even better results in estimation studies. The success of ANN, especially in non-linear situations makes it favorable as an estimation appliance. For this purpose, a model was designed to enable the network to learn the pattern between input and output data [2,3,11]. In literature reviews; studies like various software that performs analysis based of the finite element technique [19], experimental, mathematical [4,5,6,18], statistical [20,15] and analytical models [21,12], three-dimensional model [13], optimization (using Taguchi method etc.) [14,17] and artificial intelligence methods [20,7,16,11] are used for modeling the temperature. The purpose of this study is to estimate and analyze TTT (T-0 C) in the turning process of metal cutting. For this purpose, an ANN approach based on variable tool geometry and cutting parameters. 
The developed ANN model has three inputs (rake angle, approaching angle, feedrate), one hidden layer (hl) and one output (tool tip temperature). It has been observed that there is a strong correlation between the temperature values estimated by ANN and the measured experimental [4] data. Analysis and comparisons made based on the consistency between the values obtained this way. Artificial Neural Network Model for Tool Tip Temperature ANN has a network with input, output and hidden layers. This computer system can derive and create new information by using relations between these layers through the learning process like the human brain. Each layer receives input with connected weights from the other neurons, passes thorough the neurons and produces and output signal that can also be produced by other neurons. In this way the process proceeds along the neurons and layers back and forth. When the process reaches the specified error value, the network training process stops and model is created. There are numerous ANN network struct ures and architectures in the literature. These includes the feed-forward back-propagation network which is commonly used for engineering and estimation operations [2,7,11]. ANN architecture for TTT estimation is "3-hl-1" (Figure 1). This model was designed with the help of the data obtained by Saglam et al [4]. System modelling with ANN approach was done based on the cutting conditions given in Table 1. In the working conditions, the depth of cut is 1.5 (d-mm) and cutting speed is 133 (v-m/min). ANN for estimating TTT has a structure with 3 inputs, 1 hl and 1 output (Figure 1). The input parameters are rake angle (γ-o ), approaching angle (χ-o ) and feedrate (f-mm/rev). The output parameter is the TTT (T-0 C). The dataset [4] includes 256 values, 192 for training and 64 for testing. A computer with Intel i7-4720HQ 2.6Ghz processor, 16GB RAM and Matlab software used for modeling an ANN. After designing the ANN network structure, the input and output values obtained via experimental study were normalized (to improve the training character) between 0-1 using equation (eq.) 1 [22]. Here, V denotes experimental real values, VN denotes normalized values by equation 1 and Vmax, Vmin denote the minimum and maximum values in V. Table 2 gives the descriptive statistical results that summarize the numerical values of the data set and convert them to descriptive indexes. In the training and testing processes, different training algorithms (trainlm-traingd) and transfer functions (purelin, tansig, logsig vb.) in the hidden and the output layers were examined using feedforward back-propagation algorithm. With these experiments, the most suitable network model was tried to be found in the ANN network structure. For this purpose, the network was trained and tested by changing the number of neurons, epochs and the training and transfer functions in the hidden layer. Thus, it is aimed to find the best network. The results are obtained with the aid of a software developed using Matlab. In the ANN procedure, the statistical comparisons between experimental and estimated values are done using mean squared error (MSE-eq. 2) and absolute fraction of variance (R 2 -eq. 3) [23,24]. With these statistical results, the most suitable network model was determined. These equations are; Here, di denotes the target or real value, Oi denotes the output or the estimated value and n denotes the number of outputs. 
Results and Discussion First step in the study is determining the training algorithm that gives the best results. For this, training and testing processes of ANN software were run in Matlab. In this way, it was determined which training algorithm gives more close results to experimental TTT measurement values. Levenberg-Marquardt (trainlm) and Gradient descent backpropagation (traingd) algorithms were run on the 256 data set (4 fold) respectively and results were obtained. While training algorithms trainlm and traingd were running, the transfer functions Hypberbolic Tangent Sigmoid (tansig), Logistic Sigmoid (logsig) and Linear (purelin) were tested in the developed software respectively and results were obtained. The best results of the training and the test processes were determined by MSE and R2 error rate. The number of epoch and neurons in hidden layer are also considered when determining the best algorithm. The training algorithm were chosen with the smallest error rate. As a result, the most suitable transfer function was logsig (Log-Sigmoid) and training algorithm was trainlm (Levenberg-Marquadt). In this case, the neuron counts in the hidden layer for all training and test algorithms in the single layer network structure were changed to 2, 4, 7, 10, 20, 50 respectively. As the number of neurons in the hidden layer was changed, the number of epochs was also changed to 2, 5, 10, 15, 20, 25, 100 respectively and MSE, R 2 results were observed (Table 3). When the MSE and R 2 results are examined, the models with the smallest MSE error rate and the highest R 2 value were analyzed. These analyzes are done to find the best performance. Among these network models, the network that gives the best result in test phase that is best represents the experimental values is the most suitable network. In this study, 3-4-1 (5 Epochs) structured model-9 has the smallest MSE (0.00231) and the highest R 2 (0.9954). In this case, as can be seen from Table 3, there are very suitable models that can be used for training. After the training phase, even if there are models with low MSE error rate, it can be said that the same model doesn't give successful results in the test phase according to MSE and R 2 statistical rates. Therefore, in the testing process, 3-4-1 model with the lowest MSE error rate and the highest R 2 ratio were used. When the Table 3 values compared, this model has been used for both training and test phases. trainlm for training algorithm and logsig for activation function were determined. The MSE result and comparative experimental measurement-ANN graphic obtained by the training (logsig-trainlm) for the chosen model of 3-4-1 can be seen in Figure 2. The MSE result and comparative experimental measurement-ANN graphic obtained by the testing (logsig-trainlm) for the chosen model of 3-4-1 can be seen in Figure 3. When the comparative graphics in Fig.2 and Fig.3 are examined, it can be seen that the estimated TTT values with ANN model are similar to those obtained with experimental studies and measurements. The statistical results in Table 3 can make the comparisons clearer. In the later stage, all estimated data after training and testing phases (model 9) in 3-4-1 (5 Epoch) ANN model were combined. All results estimated with ANN and all TTT results measured in experimental [4] studies were compared. Comparative graphic can be seen in Figure 4. Furthermore, the correlation coefficient was calculated statistically as R=0.99. 
When the experimental TTT values and estimated ANN values are compared, it can be seen that they are close to each other, similar and compatible. The developed ANN results and the measured values were evaluated by using regression analysis. The graph shown in Fig. 5 indicates that the correlation coefficient was 0.99. In the case presented in this study, the correlation coefficient obtained were very close to 1, which indicates a perfect match between ANN estimation values and measurement of temperature values. There were no meaningful differences between the measurements of TTT and ANN results. Conclusion In this study, a three input one output ANN study was performed to determine the tool tip temperature during the turning (metal cutting) process. If the data set, the parameters, difficulty of repeating the experiment and creating the mathematical formulas are considered in a non-linear situation, it can be seen that the use of ANN is a useful. Under these circumstances, TTT values were estimated with a developed ANN system. Numerical results obtained from ANN model were compared with experimental results. The comparisons show that there is a correspondence between the two groups of data. It is seen that ANN has provided successful results and can be modeled for estimating this kind of systems. The training functions offered by MATLAB were tried and it was observed that the trainlm (Levenberg-Marquardt) training algorithm provided the best solution. It was observed in the test results and graphs that the logsig (Logistic Sigmoid) function yielded more successful outcomes. Accuracy rates that were obtained during the training and testing stages and MSE show that the model created in the study can be used for predicting TTT. It is thought that when the number of cutting parameters and values are changed, the success rate can be further increased. Also, instead of the ANN used in study, when another intelligent system, algorithm, mathematical or statistical approaches are used alone or in combination with ANN, it can affect the success rates. The ANN model can turn the disadvantages of experimental studies to advantages. Furthermore, this developed model has the ability to estimate the outcomes of parameters that couldn't done in experiments.
3,551.8
2018-03-29T00:00:00.000
[ "Computer Science", "Materials Science" ]
GAUFRE: a tool for an automated determination of atmospheric parameters from spectroscopy We present an automated tool for measuring atmospheric parameters (T_eff, log(g), [Fe/H]) for F-G-K dwarf and giant stars. The tool, called GAUFRE, is written in C++ and composed of several routines: GAUFRE-RV measures radial velocity from spectra via cross-correlation against a synthetic template, GAUFRE-EW measures atmospheric parameters through the classic line-by-line technique and GAUFRE-CHI2 performs a chi^2 fitting to a library of synthetic spectra. A set of F-G-K stars extensively studied in the literature were used as a benchmark for the program: their high signal-to-noise and high resolution spectra were analysed by using GAUFRE and results were compared with those present in literature. The tool is also implemented in order to perform the spectral analysis after fixing the surface gravity (log(g)) to the accurate value provided by asteroseismology. A set of CoRoT stars, belonging to LRc01 and LRa01 fields was used for first testing the performances and the behaviour of the program when using the seismic log(g). Introduction Spectroscopy is one of the most powerful tool that astronomy possesses in order to derive atmospheric parameters and abundances of stars. From the stellar spectrum it is possible to measure the effective temperature (T eff ), the surface gravity (log g) and the abundance of iron ([Fe/H]) and of several other elements. The classic method consists in measuring the equivalent widths (EW) of a species in two different ionization states, usually FeI and FeII. By imposing excitation and ionization equilibrium through stellar atmosphere models, it is possible to derive T eff , log g and to infer elemental abundances from the curves of growth. This method is precise and it is widely adopted [1]. In spite of the precision, the line-by-line classical analysis is time consuming: usually EWs are measured by hand using the IRAF "splot" routine (where the choice of the continuum and the position of the line is completely manual) and the procedure for finding the best T eff and log g requires a large number of iterations. The growing amount of stellar data due to large surveys (i.e. RAVE, SEGUE, LAMOST, HERMES and Gaia ESO) and the availability of dedicated telescopes and multi-object spectrograph, requires the development of automated pipelines, methods and programs that fasten the analysis process. For example, DAOSPEC [2] and ARES [3] are useful and free codes that performs an automated EW measurement; the MOOG code [4] is a valid collection of routines useful for determining atmospheric parameters and abundances. In particular, MOOG abfind and synth tasks are widely used in literature [5] [1]. Another method for the estimation of atmospheric parameters and elemental abundances is to compute a e-mail<EMAIL_ADDRESS>arXiv:1301.7256v1 [astro-ph.SR] 30 Jan 2013 a set of synthetic spectra and to find the best match between the synthetic and the observed spectrum. This method is adopted by surveys as RAVE [6] and HERMES [7] as well as in small surveys as ARCS [8] [9]. In this paper we present a new automatic code, GAUFRE, that can perform both type of analysis. In section 2 we give a short description of the idea behind the program and the two types of analysis that it performs. In subsection 2.4 we present the first results of tests using spectra of objects well known in literature. 
In Section 3 we present an additional feature of GAUFRE that consists in using the asteroseismic gravity as a fixed value for log(g) in order to refine the measurement of T_eff, the microturbulence velocity (ξ_mic) and the elemental abundances. Section 4 discusses future perspectives for the code.

The GAUFRE code

GAUFRE is a collection of several C++ routines. It is written to measure, quickly and precisely, the radial velocity (V_rad) and the atmospheric parameters (T_eff, log g, [Fe/H]) of a star, starting from a one-dimensional normalized spectrum. The radial velocity is measured with a cross-correlation technique (routine GAUFRE-RV); the atmospheric parameters can be measured by adopting a χ2 fitting over a library of synthetic spectra (GAUFRE-CHI2 routine) or with the classic FeI-FeII lines technique (GAUFRE-EW routine). GAUFRE is an updated and extended version of the code written for the Asiago Red Clump Spectroscopic survey (ARCS) [8]. GAUFRE was created out of the need to introduce new libraries of synthetic spectra, to adapt the code to different resolutions and to implement the classic line-by-line technique. The code is written in C++ and does not depend on any particular library, in order to avoid software license problems and to be executable on different platforms. The only additional program needed for running GAUFRE is MOOG, which can easily be downloaded and installed from its homepage (http://www.as.utexas.edu/∼chris/moog.html). The user is also expected to download the libraries of synthetic spectra and the model atmospheres required by GAUFRE. So far GAUFRE has been tested on F-G-K stars (both giants and dwarfs) and works in the 3500-9000 Å spectral range.

Radial Velocities: GAUFRE-RV

The radial velocity subroutine (GAUFRE-RV) measures the radial velocity V_rad by cross-correlating the observed spectrum with a synthetic spectral library [10]. The procedure is the same as described in [8]. It starts from the continuum-normalized spectrum in a 2-column ASCII format. First, the synthetic normalized spectra of the selected library are renormalized, following the same parameters as adopted for the normalization of the observed spectrum (same function, same order and same high and low rejection values). To lower the impact of the different noise levels in the observed and synthetic spectra, they are scaled to match their geometric mean. This procedure is needed to improve the accuracy of the cross-correlation and of the χ2 fitting, and it is performed by the GAUFRE-LIB routine (see also Section 2.2). The result of the GAUFRE-RV subroutine is a file containing the value of V_rad and an ASCII file containing the continuum-normalized spectrum corrected for the radial velocity. To validate the GAUFRE-RV routine we measured the radial velocity of a set of red giant stars that are IAU radial velocity standards. These stars were observed with the Asiago Echelle Spectrograph (INAF-OAPd); we then downloaded, when available, the spectrum from the ESO Archive. For this test we adopted the synthetic spectral library provided by L. Fossati (cf. Section 2.2). Results are summarized in Table 1. The mean difference between our values and those present in the literature is ∆RV = 0.10 km s−1, with an rms of 0.3 km s−1.
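A minimal sketch of the cross-correlation idea behind GAUFRE-RV (not the actual C++ implementation): resample observed and template spectra onto a common logarithmic wavelength grid, where a Doppler shift becomes a constant pixel shift, cross-correlate, and convert the best-fitting lag into a velocity. The grid parameters and the single Gaussian line standing in for a real template are assumptions made for the example.

```python
# Toy cross-correlation radial-velocity estimate; spectra are synthetic stand-ins.
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

# On a grid with constant step in ln(lambda), one pixel = one fixed velocity step.
loglam = np.linspace(np.log(5000.0), np.log(5100.0), 4000)
dv = (loglam[1] - loglam[0]) * C_KMS  # km/s per pixel

def fake_spectrum(v_kms):
    """Continuum-normalized spectrum with one Gaussian line, Doppler-shifted."""
    lam0 = 5050.0 * (1.0 + v_kms / C_KMS)
    lam = np.exp(loglam)
    return 1.0 - 0.6 * np.exp(-0.5 * ((lam - lam0) / 0.4) ** 2)

template = fake_spectrum(0.0)
observed = fake_spectrum(23.0)  # "true" RV of 23 km/s

# Cross-correlate the mean-subtracted spectra and locate the peak lag.
t = template - template.mean()
o = observed - observed.mean()
cc = np.correlate(o, t, mode="full")
lag = np.argmax(cc) - (len(t) - 1)
print("RV estimate: %.1f km/s" % (lag * dv))  # ~23 km/s, up to one pixel
```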
χ2 on synthetic spectral libraries: GAUFRE-CHI2

Atmospheric parameters (T_eff, log g, [Fe/H], [α/Fe], V_rot sin i) are obtained by GAUFRE-CHI2 via χ2 fitting of the continuum-normalized spectrum against a synthetic spectral library. The choice of the spectral library is up to the user; the available libraries are: a library based on Kurucz model atmospheres [11], the library provided by L. Fossati (built with the spectral synthesis code Synth3 described in [12]) and the AMBRE library [13]. The characteristics of the different libraries are summarized in Table 3. The libraries are provided at different resolutions and cover a wide spectral range (usually 3,500-10,000 Å). Before starting the χ2 analysis, the desired library must be cut, normalized and degraded to the same wavelength interval, normalization function and resolution as the real spectra. For this purpose the routine GAUFRE-LIB has been created. The degradation of the spectra to the desired resolution is performed through convolution with a Gaussian profile.

EW and MOOG: GAUFRE-EW

GAUFRE-EW automatically performs the classical line-by-line analysis for deriving atmospheric parameters and abundances. The procedure starts from a continuum-normalized spectrum (ASCII, 2-column format), a file containing the list of lines to measure and their parameters (as requested by MOOG) and a file containing the parameters of the spectrum, such as the wavelength coverage, the resolution and, if any, guessed values for T_eff and log g. The EW of every line present in the input file is measured (except when it is not detectable). The program selects an area of 3-4 Å around the wavelength of the line (this parameter is selected by the user). The spectrum is then fitted with a polynomial function in order to determine the continuum and the point of lowest intensity. To test the automated EW measurement we compared the EW values obtained by measuring features by hand (using IRAF-splot) with the corresponding EWs measured by GAUFRE. The test was performed on the spectrum of Arcturus, taken with the ESO-FLAMES-UVES instrument with the U580 setup (5770-6825 Å). The agreement is quite good and the rms is ∼3 mÅ (see Figure 1). Atmospheric parameters were computed using the MOOG abfind driver (the program uses its non-interactive version, MOOGSILENT), the measured EWs of the FeI and FeII lines and a family of model atmospheres (MARCS [14] or Kurucz [15]). T_eff is calculated by assuming excitation equilibrium and minimizing the trend of the Fe abundance versus the excitation potential. The surface gravity log g is derived by assuming ionization equilibrium: log n(FeI) = log n(FeII). The procedure is iterative and the program converges to the T_eff and log g that satisfy both the ionization and excitation equilibria. The value of the microturbulence ξ_mic is derived by minimizing the trend of the FeI abundance versus the FeI line EWs.

Atmospheric parameters validation

We retrieved spectra of 7 F-G-K stars taken with FLAMES-UVES U580 or FEROS from the ESO archive (http://archive.eso.org). We selected spectra of targets very well known in the literature: α Cen A, µ Cas A, β Vir, Arcturus, µ Leo, ξ Hya and γ Sge. As a reference we used an average of the most recent entries of the PASTEL catalog [3]. We analyzed the spectra with both GAUFRE-EW (Kurucz model atmospheres) and GAUFRE-CHI2 (Fossati library of synthetic spectra) and we compared our results with those present in the literature. The agreement is quite good, as shown in Table 3.

Table 3. Comparison of the atmospheric parameters obtained by GAUFRE-CHI2 (G-CHI2) and GAUFRE-EW (G-EW) with those present in the literature for a set of 7 stars.
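A schematic Python version of the EW measurement step described above (the real GAUFRE-EW is written in C++, and its exact fitting choices are not spelled out here): select a window around the line, estimate the continuum with a low-order polynomial fitted on the window edges, and integrate the normalized absorption. The window size and polynomial order are assumptions for the example.

```python
# Sketch of an automated equivalent-width (EW) measurement for one line.
import numpy as np

def equivalent_width(lam, flux, line_center, half_window=1.5, cont_order=1):
    """EW in mA of the line at line_center, from a roughly normalized spectrum."""
    sel = np.abs(lam - line_center) < half_window
    w, f = lam[sel], flux[sel]

    # Fit the continuum on the outer 25% of the window on each side,
    # where the line itself should not contribute.
    edge = np.abs(w - line_center) > 0.75 * half_window
    cont = np.polyval(np.polyfit(w[edge], f[edge], cont_order), w)

    # EW = integral of (1 - F/F_cont) d(lambda); convert A -> mA.
    return np.trapz(1.0 - f / cont, w) * 1000.0

# Toy test: Gaussian line of depth 0.5 and sigma 0.1 A at 6000 A
lam = np.linspace(5998.0, 6002.0, 2000)
flux = 1.0 - 0.5 * np.exp(-0.5 * ((lam - 6000.0) / 0.1) ** 2)
print("EW = %.1f mA" % equivalent_width(lam, flux, 6000.0))
# Analytic EW = depth * sigma * sqrt(2*pi) ~ 125 mA
```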
Asteroseismic constraints

As extensively discussed, for example, in [16], the frequency of maximum power ν_max can be used for deriving a precise estimate of the surface gravity through the standard scaling relation

g / g_⊙ ≃ (ν_max / ν_max,⊙) (T_eff / T_eff,⊙)^{1/2}.   (1)

The precision of the gravity derived from Eq. 1 is generally expected to be below 0.05 dex, thanks to the weak sensitivity of the scaling relation to the assumed T_eff and the high precision usually achieved in the measurement of ν_max. The scaling relation has been demonstrated to be reliable [17] [18] [19] [20] and it can be used for refining atmospheric parameters and abundances. Indeed, one can improve the spectroscopic analysis by fixing log g to the seismic value: this largely increases the accuracy of the derived T_eff and [Fe/H], and hence of the chemical abundances. A set of spectra of 111 RG stars belonging to the LRc01 and LRa01 fields of CoRoT has been used as a benchmark for testing the log g values derived by GAUFRE. The spectra were taken with the ESO-FLAMES GIRAFFE 9 setup, centered on the MgI triplet (5278 Å), and had already been analyzed [21] using the MATISSE tool [22]. First, we compared the log g values obtained with GAUFRE-CHI2 (using the synthetic library provided by L. Fossati, see Table 3) with the seismic gravities. In addition, the spectral range covered by the spectra is affected by the presence of MgH molecular bands. This complicates the continuum normalization even more, leading to several systematics in the determination of the atmospheric parameters. As shown in Fig. 2, the seismic gravity can be used to enhance the accuracy of the atmospheric parameters. In the case of the Gazzano et al. dataset [21], with its poor SNR, the seismic gravity helped in providing a more precise determination of T_eff and [M/H], reducing the average errors from 120 K and 0.20 dex to 75 K and 0.11 dex respectively.

Conclusions

We present the GAUFRE program, a versatile tool developed for measuring radial velocities and atmospheric parameters from optical spectra. The program is composed of different routines: GAUFRE-RV, GAUFRE-CHI2 and GAUFRE-EW. We performed some preliminary tests in order to check the performance of the tool. These tests show that the program is reliable and that it can be used to process spectra of F-G-K dwarfs and giants with an SNR above 40. At the moment it is adopted by the Liège node within the Gaia-ESO Survey (PI: G. Gilmore and S. Randich) and further applications are planned. We also showed an interesting and useful extension of GAUFRE that uses, when available, the seismic log g as a fixed value for the surface gravity: the precise and likely also more accurate values given by asteroseismology allow us to greatly refine the values of T_eff and [Fe/H]. The GAUFRE tool is continuously in development: we plan to implement new libraries of synthetic spectra and to extend the analysis to the infrared region.

[Fig. 2 caption: differences between the atmospheric parameters measured with different techniques (GAUFRE-CHI2, this paper; MATISSE by [21]; photometry (J − K); and asteroseismology) as a function of the SNR (left panel) and as a function of the T_eff, log g and [M/H] measured by GAUFRE-CHI2 with log g fixed to the seismic value (right panel). Green crosses are data from [21], blue diamonds are data obtained using GAUFRE-CHI2 and red circles are values measured by GAUFRE-CHI2 with log g fixed to the seismic value. It is worth taking into account the poor quality of the spectra: more than 50% of them have SNR < 40.]
New and detailed tests are planned as well, in order to better investigate the performance of GAUFRE at different SNRs. In the near future we plan to make the GAUFRE code available through the web and to create a user-friendly graphical interface.
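In code, the seismic gravity of Eq. 1 is essentially a one-liner. The sketch below uses commonly adopted solar reference values (ν_max,⊙ ≈ 3090 µHz, T_eff,⊙ = 5777 K, log g_⊙ ≈ 4.44 dex); these reference values vary slightly between authors.

```python
# Surface gravity from the nu_max scaling relation (Eq. 1).
import math

NU_MAX_SUN = 3090.0   # muHz; a commonly used reference value
TEFF_SUN = 5777.0     # K
LOGG_SUN = 4.438      # dex (cgs)

def seismic_logg(nu_max_muhz, teff_k):
    """log g (dex, cgs) from nu_max and Teff via g ~ nu_max * sqrt(Teff)."""
    return (LOGG_SUN
            + math.log10(nu_max_muhz / NU_MAX_SUN)
            + 0.5 * math.log10(teff_k / TEFF_SUN))

# Typical CoRoT red giant: nu_max ~ 30 muHz, Teff ~ 4800 K
print("log g = %.2f" % seismic_logg(30.0, 4800.0))  # ~2.4, as expected for an RG
```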
3,220
2013-01-30T00:00:00.000
[ "Physics" ]
Strong Approximate Consensus Halving and the Borsuk-Ulam Theorem

Introduction

Many computational problems, e.g. linear and semidefinite programming, are most naturally expressed using real numbers. When the model of computation is discrete, these problems must be recast as discrete problems. In the case of linear programming this causes no problems. Namely, when the input is given as rational numbers and an optimal solution exists, a rational-valued optimal solution exists and may be computed in polynomial time. For semidefinite programming, however, it may be the case that all optimal solutions are irrational. For dealing with such cases we may instead consider the weak optimization problem as defined by Grötschel, Lovász and Schrijver [GLS88]: given ε > 0, the task is to compute a rational-valued vector x that is ε-close to the set of feasible solutions and has objective value ε-close to optimal. Assuming we are also given, as an additional input, a strictly feasible solution and a bound on the magnitude of the coordinates of an optimal solution, the weak optimization problem may be solved in polynomial time using the ellipsoid algorithm [GLS88]. Let us note however that without additional assumptions, even the complexity of the basic existence problem of semidefinite feasibility is unknown. In fact, the problem is likely to be computationally very hard [TV08]. More precisely, it is hard for the problem PosSLP, which is the fundamental problem of deciding whether an integer given by a division-free arithmetic circuit is positive [ABKM09].

In this paper we consider real-valued search problems where the existence of a solution is guaranteed by topological existence theorems such as the Brouwer fixed point theorem and the Borsuk-Ulam theorem. This means that the search problems are total, thereby fundamentally differentiating them from general search problems where, as described above, even the existence problem may be computationally hard. We are mainly interested in the approximation problem: given ε > 0, the task is to compute a rational-valued vector x that is ε-close to the set of solutions.

Recall that the Brouwer fixed point theorem states that every continuous function f : B^n → B^n, where B^n is the unit n-ball, has a fixed point, i.e. there is x ∈ B^n such that f(x) = x [Bro11]. The Borsuk-Ulam theorem states that every continuous function f : S^n → R^n, where S^n is the unit n-sphere in R^{n+1}, maps a pair of antipodal points of S^n to the same point in R^n, i.e. there is x ∈ S^n such that f(x) = f(−x) [Bor33]. The Brouwer fixed point theorem is of course not restricted to the domain B^n, but applies to any domain that is homeomorphic to B^n. Similarly, the Borsuk-Ulam theorem applies to any domain homeomorphic to S^n by an antipode-preserving homeomorphism. It is well known that the Borsuk-Ulam theorem generalizes the Brouwer fixed point theorem, in the sense that the Brouwer fixed point theorem is easy to prove using the Borsuk-Ulam theorem [Su97; Vol08].

The Brouwer fixed point theorem and the Borsuk-Ulam theorem naturally define corresponding real-valued search problems, and thereby also corresponding approximation problems. In addition, the statements of the theorems naturally lead to another notion of approximation. For the case of the Brouwer fixed point theorem we may look for an almost fixed point, i.e. x ∈ B^n such that f(x) is ε-close to x, and for the case of the Borsuk-Ulam theorem we look for a pair of antipodal points that almost map to the same point, i.e.
x ∈ S^n such that f(x) and f(−x) are ε-close. Following [EY10], we shall refer to this notion of approximation as weak approximation, and to make the distinction clear we refer to the former (and general) notion of approximation as strong approximation. In the setting of weak approximation in relation to the Borsuk-Ulam theorem we assume that f has co-domain B^n.

In their seminal work, Etessami and Yannakakis [EY10] introduced the complexity class FIXP to capture the computational complexity of the real-valued search problems associated with the Brouwer fixed point theorem, and proved that the problem of finding a Nash equilibrium in a given 3-player game in strategic form is FIXP-complete. In order to have a notion of completeness, the class FIXP is defined to be closed under reductions. The type of reductions chosen by Etessami and Yannakakis, SL-reductions, consists of mapping between sets of solutions by the composition of a projection reduction with an individual affine transformation applied to each coordinate.

Etessami and Yannakakis consider different ways to cast real-valued search problems as discrete search problems. In addition to the approximation problem, these are the partial computation problem, where the task is to compute a solution to a given number of bits of precision, and decision problems, where the task is to evaluate a sign condition on the set of solutions given the promise that either all solutions satisfy the condition or none of them do. Of these we shall only consider the approximation problem. The class FIXP_a denotes the class of discrete search problems corresponding to strong approximation of Brouwer fixed points and is defined to be closed under polynomial time reductions. Etessami and Yannakakis also prove that the problem PosSLP reduces to the problem of approximating a Nash equilibrium, thereby showing that FIXP_a likely contains search problems that are computationally very hard.

While the notion of SL-reductions is very restricted, it is sufficient for proving completeness of the problem of finding a Nash equilibrium. Likewise, SL-reductions are sufficient for showing that FIXP is robust with respect to the choice of domain for the Brouwer function.

Another important reason for using SL-reductions is that they immediately imply polynomial time reductions between the corresponding decision and approximation problems (the partial computation problem is more fragile and requires additional assumptions, cf. [EY10]). As we are mainly interested in the approximation problem, more expressive notions of reducibility can be considered, while maintaining the property that reducibility implies polynomial time reducibility between the corresponding approximation problems. A sufficient condition for this is that the mapping of solutions is polynomially continuous and polynomial time computable.

The Borsuk-Ulam Theorem

Deligkas, Fearnley, Melissourgos, and Spirakis [DFMS21] recently introduced a complexity class BU to capture, in analogy to FIXP, the computational complexity of the real-valued search problems associated with the Borsuk-Ulam theorem.
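Before turning to the formal development, a one-dimensional illustration of the Borsuk-Ulam theorem may help: for f : S^1 → R, the function g(θ) = f(θ) − f(θ + π) satisfies g(θ + π) = −g(θ), so g changes sign on [0, π] and bisection finds a point with f(x) ≈ f(−x). The specific f below is an arbitrary example.

```python
# Bisection for a Borsuk-Ulam point of f : S^1 -> R, parametrized by angle theta.
import math

def f(theta):
    # Arbitrary continuous function on the circle (example choice).
    return math.sin(theta) + 0.5 * math.cos(3.0 * theta)

def g(theta):
    # g(theta + pi) = -g(theta), so g changes sign somewhere on [0, pi].
    return f(theta) - f(theta + math.pi)

lo, hi = 0.0, math.pi  # here g(0) = 1 > 0 and g(pi) = -1 < 0
while hi - lo > 1e-12:
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0.0:  # keep the half on which g still changes sign
        hi = mid
    else:
        lo = mid
theta = 0.5 * (lo + hi)
print("f(theta) = %.12f  f(theta+pi) = %.12f" % (f(theta), f(theta + math.pi)))
```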
The Borsuk-Ulam theorem has a number of equivalent statements that are easy to derive from each other. A function f defined on the unit sphere S^n is odd if f(x) = −f(−x) for all x ∈ S^n. Note that the boundary ∂B^n of the unit n-ball B^n is identical to S^{n−1}. We thus say that a function f defined on B^n is odd on ∂B^n if f is odd when restricted to S^{n−1}. We present the simple proof of the known fact that the different formulations can be derived from each other, for the purpose of discussing their equivalence from a computational point of view.

Theorem 1 (Borsuk-Ulam). The following statements hold:
(1) If f : S^n → R^n is continuous there exists x ∈ S^n such that f(x) = f(−x).
(2) If g : S^n → R^n is continuous and odd there exists x ∈ S^n such that g(x) = 0.
(3) If h : B^n → R^n is continuous and odd on ∂B^n there exists x ∈ B^n such that h(x) = 0.

We may view S^n as two hemispheres, each homeomorphic to B^n, which are glued together along their equator. Let π : S^n → B^n be the orthogonal projection defined by π(x_1, …, x_{n+1}) = (x_1, …, x_n). Then given h we may define g : S^n → R^n by g(y) = h(π(y)) for y_{n+1} ≥ 0 and g(y) = −h(π(−y)) for y_{n+1} ≤ 0. The assumption that h is odd on ∂B^n makes g a well-defined continuous odd function. We have g(y) = 0 if and only if h(π(y)) = 0 or h(π(−y)) = 0, which shows that (2) implies (3). Conversely, given g we define h by h(x) = g(x_1, …, x_n, √(1 − ∥x∥²)); when h(x) = 0 we may define y = (x, √(1 − ∥x∥²)) and have g(y) = 0. On the other hand, when g(y) = 0 we may define x = (y_1, …, y_n) if y_{n+1} ≥ 0 and x = (−y_1, …, −y_n) if y_{n+1} < 0, and we have h(x) = 0. Together this shows that (3) implies (2).

The class BU defined in [DFMS21] corresponds to the first formulation of the above theorem. We may clearly consider the second formulation equivalent to the first also from a computational point of view. In particular, when translating between these formulations, the set of solutions is unchanged. Note that this set of solutions has the property that all solutions come in pairs: when x is a solution then −x is a solution as well. For the third formulation of the theorem this property only holds for solutions on the boundary ∂B^n.

In contrast, while the mapping of solutions of the third formulation to the second (and first) formulation given above is continuous, this is not the case in the other direction. More precisely, consider y ∈ S^n such that g(y) = 0. For a solution strictly contained in the upper hemisphere, the orthogonal projection to the first n coordinates produces x ∈ B^n such that h(x) = 0. For a solution y strictly contained in the lower hemisphere, the projection is instead applied to the antipodal solution −y.

To clarify this issue from a computational point of view we introduce a new class BBU of real-valued search problems corresponding to the third formulation of Theorem 1, and it will follow from the definitions that BU ⊆ BBU. In the context of strong approximation, however, the corresponding classes of discrete search problems BU_a and BBU_a will be shown to coincide. The idea is that given an approximation to y ∈ S^n, where g(y) = 0, that is sufficiently close to the equator of S^n, there is no harm in incorrectly deciding to which hemisphere y belongs, since solutions x ∈ ∂B^n for which h(x) = 0 also come in pairs.
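The translation from formulation (3) to formulation (2) is mechanical enough to write down directly; the following sketch (with an arbitrary example h that is odd on the boundary) mirrors the gluing construction above.

```python
# Building an odd g : S^n -> R^n from h : B^n -> R^n that is odd on the boundary.
import numpy as np

def h(x):
    # Example h on B^2 (n = 2); this particular h is odd everywhere,
    # hence certainly odd on the boundary.
    return np.array([x[0] ** 3 - x[1], x[0] + x[1] ** 3])

def g(y):
    """y is a point of S^n in R^{n+1}; returns the glued odd function."""
    if y[-1] >= 0.0:
        return h(y[:-1])      # upper hemisphere: h after projection
    return -h(-y[:-1])        # lower hemisphere: forced by oddness
    # Continuity at the equator relies on h being odd on the boundary.

# Sanity check of oddness at random points of S^2.
rng = np.random.default_rng(0)
for _ in range(3):
    y = rng.normal(size=3)
    y /= np.linalg.norm(y)
    assert np.allclose(g(-y), -g(y))
print("g is odd at the sampled points")
```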
For the class BU, the notion of SL-reductions is clearly too restrictive to allow a reasonable comparison to FIXP. Closing the class BU under SL-reductions, the solutions would still come in pairs, thereby imposing strong conditions on the set of solutions. On the other hand, the reductions should also not be too strong. In particular, it would be desirable that FIXP remain closed under the chosen notion of reductions. This issue is not discussed in [DFMS21]. We shall therefore propose a suitable notion of reductions for both BU and BBU.

Consensus Halving

The consensus halving problem is a classical problem of fair division [SS03]. We are given a set of n bounded and continuous measures µ_1, …, µ_n defined on the interval A = [0, 1]. The goal is to partition the interval A into at most n + 1 intervals, i.e. by placing at most n cuts, such that unions of these intervals form another partition A = A^+ ∪ A^− of A satisfying µ_i(A^+) = µ_i(A^−) for every i. We may think of the intervals as being assigned a label from the set {+, −}, and A^+ is precisely the union of the intervals labeled +. Such a partition is also known as a consensus halving. Using the Borsuk-Ulam theorem, Simmons and Su [SS03] proved that a consensus halving using at most n cuts always exists. Simmons and Su represent a division of A as a point x on the unit n-sphere S^n_1 with respect to the ℓ_1-norm. The point x is viewed as representing a division into precisely n + 1 intervals, where some intervals are possibly empty. More precisely, the i-th interval has length |x_i|, and intervals of length 0 may simply be discarded. The intervals of positive length are then labeled according to sgn(x_i). Note that for any x, the antipode −x represents the division where the sets A^+ and A^− are exchanged. This naturally leads to a formulation using the Borsuk-Ulam theorem [SS03]. Namely, we may consider the function F : S^n_1 → R^n given by F(x)_i = µ_i(A^+), and note that any x ∈ S^n_1 for which F(x) = F(−x) represents a consensus halving.

We are interested in the simple setting of additive measures, where we have corresponding density functions f_1, …, f_n such that µ_i(B) = ∫_B f_i(x) dx. To cast the consensus halving problem as a real-valued search problem we follow [DFMS21] and assume that the measures µ_1, …, µ_n are given by the distribution functions F_1, …, F_n defined by F_i(x) = ∫_0^x f_i(t) dt. An instance of the consensus halving problem is then given as a list of algebraic circuits computing these distribution functions.

Corresponding to the different formulations of the Borsuk-Ulam theorem as a real-valued search problem with domain S^n or B^n, we get two different formulations of the consensus halving problem. We denote these by CH and BCH respectively. Deligkas et al. proved membership of CH in BU following the proof of Simmons and Su, and proved hardness of CH for FIXP. Combining these, it follows that FIXP ⊆ BU.
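The Simmons-Su encoding is easy to make concrete: given a point x on S^n_1 and the distribution functions F_i, the value µ_i(A^+) is a signed sum of increments of F_i over the induced intervals. A small sketch (the uniform and triangular example measures are arbitrary choices):

```python
# Evaluating F(x)_i = mu_i(A+) for the Simmons-Su representation of a division.
import numpy as np

def mu_plus(x, F):
    """x: point on the l1 unit sphere (signed interval lengths); F: distribution fn."""
    t = np.concatenate(([0.0], np.cumsum(np.abs(x))))  # cut points t_0, ..., t_{n+1}
    total = 0.0
    for j in range(len(x)):
        if x[j] > 0:                   # interval j carries label +
            total += F(t[j + 1]) - F(t[j])
    return total

F_uniform = lambda t: t        # Lebesgue measure on [0, 1]
F_triangle = lambda t: t * t   # density 2t on [0, 1]

# One cut at 1/2, left piece labeled +, right piece labeled -:
x = np.array([0.5, -0.5])
print(mu_plus(x, F_uniform))   # 0.5  -> uniform measure is halved
print(mu_plus(x, F_triangle))  # 0.25 -> triangular measure is not halved here

# For the triangular measure alone, the halving cut satisfies F(t) = 1/2:
x2 = np.array([np.sqrt(0.5), -(1.0 - np.sqrt(0.5))])
print(mu_plus(x2, F_triangle))  # 0.5
```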
Strong versus Weak Approximation

The difference between weak and strong approximation was studied in detail in the general context of the Brouwer fixed point theorem by Etessami and Yannakakis. A central example is the problem of finding a Nash equilibrium (NE). An important notion of approximation of an NE is the notion of an ε-NE. Computing an ε-NE of a given strategic form game Γ is polynomial time equivalent to computing a weak ε′-approximation to a fixed point of Nash's Brouwer function F_Γ associated with Γ [EY10, Proposition 2.3]. In turn, computing a weak ε′-approximation to a fixed point of F_Γ polynomial time reduces to computing a strong ε″-approximation to a fixed point of F_Γ [EY10, Proposition 2.2], since the function F_Γ is polynomially continuous and polynomial time computable. In general, however, an ε-NE might be far from any actual NE, unless ε is inverse doubly exponentially small as a function of the size of the game [EY10, Corollary 3.8].

For the problem of consensus halving we can illustrate the difference between weak and strong approximation by a simple example. We shall refer to a weak ε-approximation of a consensus halving simply as an ε-consensus halving. Consider a single agent whose measure µ on the interval [0, 1] is given by a step-function density. We have µ([0, 1]) = 1 and, since the density is a step function, the corresponding distribution function F is piecewise linear. The unique consensus halving is obtained by placing a cut at the point ε/2 − ε²/(2 + 2ε). Placing a cut at a point far from this one may nevertheless yield an ε-consensus halving. Thus an ε-consensus halving might be very far from an actual consensus halving. Note also that placing a cut at any point t ∈ [0, 3ε/2 − ε²/(2 + 2ε)] is a strong ε-approximation, which illustrates that a strong approximation is not necessarily a weak approximation. On the other hand, a strong (ε²/2)-approximation is also an ε-consensus halving.

The Brouwer fixed point theorem and the Borsuk-Ulam theorem can both be proved starting from combinatorial analogues of the two theorems, namely from Sperner's lemma and Tucker's lemma, respectively. The proofs of these two lemmas are constructive, but using them to derive the Brouwer fixed point theorem and the Borsuk-Ulam theorem involves a nonconstructive limit argument. Let us note in passing that while Sperner's lemma, like the Borsuk-Ulam theorem, has several different formulations, it is usually formulated as the combinatorial analogue of the third formulation of Theorem 1. Sperner's and Tucker's lemmas give rise to total NP search problems. These turn out to be complete for the complexity classes PPAD and PPA introduced in seminal work by Papadimitriou [Pap94]. Papadimitriou proved PPAD-completeness of the problem given by Sperner's lemma as well as membership in PPA of the problem given by Tucker's lemma, while PPA-completeness of the latter problem was proved recently by Aisenberg, Bonet, and Buss [ABB20]. These results also imply that the classes PPAD and PPA correspond to the problems of computing weak approximations to Brouwer fixed points and to Borsuk-Ulam points.

The computational complexity of the problems of computing an ε-NE and of computing an ε-consensus halving was settled in breakthroughs of two lines of research. Computing an ε-NE was shown to be PPAD-complete by Daskalakis, Goldberg and Papadimitriou [DGP09] and Cheng and Deng [CD06]. Computing an ε-consensus halving was shown to be PPA-complete by Filos-Ratsikas and Goldberg [FG18; FG19].
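For a single agent with a known distribution function F, an exact consensus halving with one cut is just the median of the measure, and bisection finds it to any accuracy; this is a convenient baseline when experimenting with examples like the one above. The specific F below is an arbitrary piecewise-linear stand-in, not the paper's construction.

```python
# One-agent consensus halving: find t with F(t) = 1/2 by bisection.

def halving_cut(F, tol=1e-12):
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(mid) < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A piecewise-linear distribution function: density 4 on [0, 1/8], density 4/7 after.
def F(t):
    return 4.0 * t if t <= 0.125 else 0.5 + (4.0 / 7.0) * (t - 0.125)

t_star = halving_cut(F)
print("cut at t = %.6f, F(t) = %.6f" % (t_star, F(t_star)))  # t = 0.125
```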
Our Results

Our main result is that the problem of strong approximation of consensus halving is equivalent to strong approximation of the Borsuk-Ulam problem.

Theorem 2. The strong approximation problem for CH is BU_a-complete.

As described, we view the consensus halving problem as the real-valued search problem with its domain being either the unit sphere or the unit ball with respect to the ℓ_1-norm. The theorem is proved by reduction from the real-valued search problem associated with the Borsuk-Ulam theorem on the unit ball with respect to the ℓ_∞-norm, i.e. from a defining problem of the class BBU.

It is of general interest to study the relationship between search problems given by the Borsuk-Ulam theorem on different domains from a computational point of view. The reduction establishing the proof of Theorem 2 gives additional motivation for this. The domains we consider are unit spheres S^n_p and unit balls B^n_p with respect to the ℓ_p-norm for p ≥ 1 or p = ∞. It is of course straightforward to construct homeomorphisms between unit spheres or unit balls with respect to different norms, and these could be used to define reductions between the different problems. We would, however, like the mapping of solutions to be simple, and in particular we would like to avoid divisions and root operations. We prove that one may in fact reduce between domains using SL-reductions.

Deligkas et al. gave a reduction from the FIXP-complete problem of finding a Nash equilibrium to CH. Combined with membership of CH in BU, this gives the inclusion FIXP ⊆ BU. We observe that a proof due to Volovikov [Vol08] of the Brouwer fixed point theorem from the Borsuk-Ulam theorem may be adapted to give a simple proof of the inclusion FIXP ⊆ BU.

For the class FIXP we prove two interesting structural properties that do not appear to have been observed earlier. While FIXP is defined using SL-reductions, we show that FIXP is closed under polynomial time reductions where the mapping of solutions is expressed by general algebraic circuits. This in particular supports that one may reasonably define the classes BU and BBU using less restrictive notions of reductions than SL-reductions. We propose to have the mapping of solutions be computed by algebraic circuits involving the operations of addition, multiplication by scalars, as well as maximization. This means that the mapping of solutions is a piecewise linear function, and we refer to these as PL-reductions. The second structural result for FIXP is a characterization of the class by very simple Brouwer functions. These are defined on the unit-hypercube domain [0, 1]^n and each coordinate function is simply one of the operations {+, −, *, max, min}, modified to have the output truncated to the interval [0, 1].

For the classes BU and BBU we prove that they are also closed under reductions where the mapping of solutions is computed by general algebraic circuits, but with the additional requirement that this mapping must be odd.
For the class FIXP, an interesting consequence of the proof that finding a Nash equilibrium is complete is that the class may be characterized by Brouwer functions computed by algebraic circuits without the division operation. The proof also shows that the class FIXP is unchanged even when allowing root operations as basic operations. We prove by a simple transformation that the classes BU and BBU may be characterized using algebraic circuits without the division operation. Furthermore, as a consequence of Theorem 2, the class of strong approximation problems BU_a = BBU_a is unchanged even when allowing root operations as basic operations.

Comparison to previous work

As a precursor to the proof of PPA-completeness of computing an ε-consensus halving, Filos-Ratsikas, Frederiksen, Goldberg and Zhang [FFGZ18] proved the problem to be PPAD-hard. Deligkas et al. [DFMS21] use ideas from this proof together with additional new ideas to obtain their proof of FIXP-hardness for computing an exact consensus halving.

While PPAD ⊆ PPA, the PPAD-hardness result of [FFGZ18] is not implied by the recent proofs of PPA-completeness. In particular, the work [FFGZ18] proves PPAD-hardness even for constant ε, while the work [FG19] only proves PPA-hardness for ε inverse polynomially small. In the same way, while FIXP ⊆ BU, FIXP-hardness of computing an exact consensus halving is not implied by our reduction, since Theorem 2 establishes BU_a-hardness rather than BU-hardness. Recently a considerably simpler proof of PPA-hardness of computing an ε-consensus halving was given by Filos-Ratsikas, Hollender, Sotiraki and Zampetakis [FHSZ20], and our reduction is inspired by this work.

All the reductions described above are similar in the sense that one or more evaluations of a circuit are expressed in the consensus halving instance. The full interval A is partitioned into subintervals, cuts within these subintervals encode values in various ways, and agents implement the gates of the circuit by placing cuts. A main difference between the reductions establishing PPAD-hardness and FIXP-hardness and those establishing PPA-hardness is that in the former reductions, all cuts are constrained to be placed in distinct subintervals. The reason this is possible is that the objective is to find a fixed point of the circuit, which means that inputs and outputs may be identified.

In the setting of PPA and BBU the objective is to find a "zero" of the circuit. More precisely, in the setting of PPA the objective is to find two adjacent points of a given Tucker labeling that receive complementary labels, i.e. labels of different sign but the same absolute value. In the setting of BBU the objective is to find an actual zero point of the circuit. All the reductions establishing PPA-hardness of computing an ε-consensus halving have the property that the cuts encoding the input of the circuit are free cuts, meaning that they can in principle be placed anywhere, and as a result they also interfere with the evaluations of the circuit. This is also the case for our reduction, and this invariably limits its applicability to the approximation problem.
In the reduction of [FHSZ20], the interval A is structured into different regions: a coordinate-encoding region, a constant-creation region, several circuit-simulation regions, and finally a feedback region. Our reduction also has a coordinate-encoding region and several circuit-simulation regions, but the functions that the constant-creation and feedback regions perform in [FHSZ20] are, in our reduction, integrated into the individual circuit-simulation regions and realized differently.

One way to encode a value in a subinterval is what we will call position encoding. Here it is required that there is exactly one cut in the subinterval, and the value encoded is determined by the distance between the cut position and the left endpoint of the interval. In [FHSZ20] values are instead encoded by what we will call label encoding. Here there is no requirement on the number of cuts in the subinterval, and the value encoded is simply the difference between the Lebesgue measures of the subsets of the interval receiving label + and label −. We shall employ a hybrid approach where the coordinate-encoding region uses label encoding while the circuit-simulation regions use position encoding. The first step performed in a circuit-simulation region is thus to copy the input from the coordinate-encoding region. Switching to position encoding allows us in particular to implement a multiplication gate, similarly to [DFMS21]. Here the multiplication xy is computed via the identity xy = ((x + y)² − x² − y²)/2. In [DFMS21], where values range over [0, 1], the squaring operation may be implemented directly by agents. In our case values range over the interval [−1, 1], and the squaring operation is decomposed further, having agents compute it separately over the intervals [−1, 0] and [0, 1].

In analogy to [FHSZ20] we have feedback agents that ensure that the circuit evaluates to 0 on the encoded input. The criterion that the agents check is however different, and for our purposes it is crucial that we have the same sign pattern in the position encoding of the output of the circuit as in the copy of the input made by the circuit-simulation region. The actual detection of an output of 0 is performed by using approximations of the Dirac delta function. For computing the distribution functions of the feedback agents, we make use of the fact that these are computed by algebraic circuits, which enables us to make a strong approximation of the Dirac delta function via repeated squaring.

Organization of Paper

In Section 2 we introduce the necessary terminology and give a detailed account of real-valued search problems and reducibility between them. Our structural results for FIXP are given in Section 3 and our structural results for BU and BBU are given in Section 4. Section 4 also includes the simple proof of the inclusion FIXP ⊆ BU. We present our main result, Theorem 2, in Section 6.

Algebraic Circuits

Let B be a finite set of real-valued functions, for example B = {+, −, *, ÷, max, min}. An algebraic circuit C with n inputs and m outputs over the basis B is given by an acyclic graph G = (V, A) as follows. The size of C is equal to the number of nodes of G, which are also referred to as gates. The depth of C is equal to the length of the longest path of G. Every node of indegree 0 is either an input gate labeled by a variable from the set {x_1, …, x_n} or a constant gate labeled by a real-valued constant.
Every other node is labeled by an element of B called the gate function. If a node v is labeled by a gate function g : A → R with A ⊆ R^k we require that v has exactly k ingoing arcs, with a linear order specifying the order of the arguments to g. The output of C is specified by an ordered list of m (not necessarily distinct) nodes of G. The computation of C on a given input x ∈ R^n is defined in the natural way. The computation may fail in case a gate of C labeled by a function g : A → R receives an input outside A, and in this case the output of C is undefined. Otherwise we say that the output is well defined and denote its value by C(x).

We shall in this paper only consider algebraic circuits whose basis consists of continuous functions. This means in particular that any algebraic circuit computes a continuous function as well. We shall also only consider constant gates labeled with rational numbers. In this case we are also interested in the bitsize of the encoding of the constants, which is the maximum bitsize of the numerator or the denominator. An important special class of algebraic circuits are those over the basis {+, −, *, ÷} using just the constant 1. We refer to these as arithmetic circuits. An arithmetic circuit with no division gates is called division-free. Note that any integer of bitsize τ may be computed by a division-free arithmetic circuit of size O(τ).

By using multiplication with the constant −1, the functions − and min may be simulated using + and max, respectively. In this way we may convert a circuit over the full basis {+, −, *, ÷, max, min} into an equivalent {+, *, ÷, max}-circuit. We shall also consider circuits where the use of the multiplication operator * is restricted to having one of the arguments be a constant gate. We denote this by the symbol *_ζ and use it in particular for defining {+, *_ζ, max}-circuits. At times it will be convenient to consider gate functions with their output range truncated to stay within a given interval.

While we shall not consider circuits with the discontinuous sign function sgn, in the context of approximating functions it is sometimes sufficient to use an approximation of sgn instead. A typical use of sgn(z) is to perform a selection between two values x and y. We define the δ-approximate selection function to be the function that based on sgn(z) outputs the value x or y, except in the interval of length δ centered around 0, where it instead linearly interpolates between x and y.

Definition 1. For given δ > 0, the (two-sided) δ-approximate selection function Sel_δ is defined by Sel_δ(x, y, z) = x for z ≤ −δ/2, Sel_δ(x, y, z) = y for z ≥ δ/2, and Sel_δ(x, y, z) = ((δ/2 − z)x + (δ/2 + z)y)/δ for −δ/2 < z < δ/2.

We note that Sel_δ may be computed as Sel_δ(x, y, z) = (1 − t)/2 · x + (1 + t)/2 · y, where t defined by t = max(min(z, δ/2), −δ/2)/(δ/2) is the δ-approximation of sgn(z). In particular, Sel_δ(x, y, z) is computed by a {+, *, max}-circuit (or a {+, *, ÷, max}-circuit if we also view δ as a variable).
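Definition 1 translates directly into code; the sketch below checks the {+, *, max}-style formula from the text against the piecewise definition.

```python
# The delta-approximate selection function Sel_delta(x, y, z) of Definition 1.

def sel(x, y, z, delta):
    # t is the delta-approximation of sgn(z), clamped to [-1, 1].
    t = max(min(z, delta / 2.0), -delta / 2.0) / (delta / 2.0)
    return (1.0 - t) / 2.0 * x + (1.0 + t) / 2.0 * y

delta = 0.1
assert sel(3.0, 7.0, -1.0, delta) == 3.0   # z <= -delta/2 selects x
assert sel(3.0, 7.0, 1.0, delta) == 7.0    # z >= delta/2 selects y
assert sel(3.0, 7.0, 0.0, delta) == 5.0    # midpoint interpolates evenly
assert abs(sel(3.0, 7.0, 0.025, delta) - 6.0) < 1e-12  # t = 1/2
print("Sel_delta agrees with its piecewise definition")
```

Similarly, the multiplication-from-squaring identity used in the circuit-simulation regions of our reduction (cf. the comparison to previous work above) can be verified directly; the split of the square at 0 below is one natural way to realize the decomposition over [−1, 0] and [0, 1].

```python
# Multiplication over [-1, 1] from squaring, via xy = ((x+y)^2 - x^2 - y^2) / 2.

def square_split(v):
    """Square of v in [-1, 1] as the sum of two one-sided squares."""
    pos = max(v, 0.0)   # contribution from [0, 1]
    neg = min(v, 0.0)   # contribution from [-1, 0]
    return pos * pos + neg * neg   # equals v*v, since one term is always 0

def mul(x, y):
    # (x + y) may lie in [-2, 2]; rescale so that each square stays in [-1, 1].
    s = (x + y) / 2.0
    return (4.0 * square_split(s) - square_split(x) - square_split(y)) / 2.0

for x, y in [(0.5, -0.75), (-1.0, 1.0), (0.3, 0.3)]:
    assert abs(mul(x, y) - x * y) < 1e-12
print("polarization identity verified")
```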
Search problems

A general search problem Π is defined by specifying for each input instance I a search space (or domain) D_I and a set Sol(I) ⊆ D_I of solutions. We distinguish between discrete and real-valued search problems. For discrete search problems we assume that D_I ⊆ {0, 1}^{d_I} for an integer d_I depending on I. Analogously, for real-valued search problems we assume that D_I ⊆ R^{d_I} for an integer d_I depending on I. One could likewise distinguish between search problems with discrete input and with real-valued input. We are however mostly interested in problems where the input is discrete, that is, we assume that instances I are encoded as strings over a given finite alphabet Σ (e.g. Σ = {0, 1}).

A very important class of discrete search problems arises from decision problems given as languages in NP, thereby forming the class of NP search problems. More precisely, these are the discrete search problems for which there are polynomial time algorithms that (i) given I compute d_I, whose magnitude is polynomial in |I|, (ii) given I and x ∈ {0, 1}^{d_I} check whether x ∈ D_I, and lastly, (iii) given I and x ∈ D_I check whether x ∈ Sol(I). The corresponding language in NP is then L = {I | Sol(I) ≠ ∅}. The class of all NP search problems is denoted by FNP. The subclass TFNP of FNP consists of the NP search problems for which Sol(I) ≠ ∅ for every input I. An NP search problem Π is said to be solvable in polynomial time if there is a Turing machine running in polynomial time that on input I outputs some member y of Sol(I) in case Sol(I) ≠ ∅ and rejects otherwise. The subclass of FNP consisting of the search problems solvable in polynomial time is denoted by FP, and it holds that FP = FNP if and only if P = NP.

Many natural search problems are however defined with a continuous search space. Not all of these may adequately be recast as discrete search problems; they are more naturally viewed as real-valued search problems. One approach for studying such problems would be to switch to the Blum-Shub-Smale model of computation [BSS89]. A BSS machine resembles a Turing machine, but operates with real numbers instead of symbols from a finite alphabet. In particular, the input is real-valued, and input instances are therefore encoded as real-valued vectors. All basic arithmetic operations and comparisons are unit-cost operations. One may then define real-valued analogues of Turing machine based classes. In particular, Blum, Shub and Smale defined and studied the real-valued analogues P_R and NP_R of P and NP. A BSS machine may in general make use of real-valued machine constants. If a BSS machine only uses rational-valued machine constants we shall call it constant-free. Real-valued analogues of the classes FP, FNP, and TFNP for the BSS machine model do not appear to be defined in the literature, but can be defined in a straightforward way. Let us just note that the proof that P = NP implies FP = FNP does not generalize to the setting of BSS machines, since it crucially depends on the search space being discrete.

For the classes P_R and NP_R, if we simply restrict the input to be discrete and consider only constant-free BSS machines, this results in complexity classes, denoted by BP(P^0_R) and BP(NP^0_R), that may directly be compared to Turing machine based complexity classes. Indeed, it was proved by Allender, Bürgisser, Kjeldgaard-Pedersen and Miltersen [ABKM09, Proposition 1.1] that BP(P^0_R) = P^PosSLP, where PosSLP is the problem of deciding whether an integer given by a division-free arithmetic circuit is positive. While the precise complexity of PosSLP is not known, Allender et al. proved that it is contained in the counting hierarchy CH (not to be confused with the consensus halving problem whose abbreviation coincides).
The class BP(NP^0_R) is equal to the class ∃R that was defined by Schaefer and Štefankovič [SŠ17] to capture the complexity of the existential theory of the reals ETR. It is known that NP ⊆ ∃R ⊆ PSPACE, where the latter inclusion follows from the decision procedure for ETR due to Canny [Can88]. Schaefer and Štefankovič also prove ∃R-completeness of deciding the existence of a probability-constrained Nash equilibrium in a given 3-player game in strategic form; later works have extended this to ∃R-completeness of many other decision problems about the existence of Nash equilibria satisfying different properties in 3-player games in strategic form [GMVY18; BM16; BM17; BH19]. The proofs of ∃R-hardness make critical use of the fact that the input is discrete, and it is not known whether these problems are also complete for NP_R.

We define the class of ∃R search problems as the following subclass of all real-valued search problems. Instances I are encoded as strings over a given finite alphabet Σ and we assume there is a polynomial time algorithm that given I computes d_I, where D_I ⊆ R^{d_I}. We next assume that there are polynomial time constant-free BSS machines that given I and x ∈ R^{d_I} check whether x ∈ D_I, and given I and x ∈ D_I check whether x ∈ Sol(I). The corresponding language in ∃R is then L = {I | Sol(I) ≠ ∅}.

Solving real-valued search problems

Let Π be a ∃R search problem. In analogy with the case of NP search problems, one could consider the task of solving Π to be that of giving as output some member y of Sol(I) in case Sol(I) ≠ ∅. In general each member of Sol(I) may be irrational-valued, which precludes a Turing machine from computing a solution explicitly. This is in general also the case for a BSS machine, even when allowing machine constants. Regardless, we shall restrict our attention to Turing machines below.

On the other hand, when Sol(I) ≠ ∅ a solution is guaranteed to exist with coordinates being algebraic numbers, since a member of Sol(I) may be defined by an existential first-order formula over the reals with only rational-valued coefficients. This means that one could instead compute an indirect description of the coordinates of a solution, for instance by describing isolated roots of univariate polynomials. If such a description could be computed in time polynomial in |I| we could consider that to be a polynomial time solution of Π.

Etessami and Yannakakis [EY10] suggest several other computational problems one may alternatively consider in place of solving a search problem Π explicitly or exactly. Our main interest is in the problem of approximation. We shall assume for simplicity that D_I ⊆ [−1, 1]^{d_I}. Together with an instance I of Π we are now given as auxiliary input a rational number ε > 0, and the task is to compute x ∈ Q^{d_I} such that there exists x* ∈ Sol(I) with ∥x* − x∥_∞ ≤ ε. We shall turn this into a discrete search problem by encoding the coordinates of x as binary strings. More precisely, to Π we shall associate a discrete search problem Π_a whose instances are of the form (I, k), where I is an instance of Π and k is a positive integer. We define ε = 2^{−k} and let the domain of (I, k) be D_{I,k} = {0, 1}^{d_I(k+3)}, thereby allowing the specification of a point x ∈ D_I with coordinates of the form x_i = a_i 2^{−(k+1)}, where a_i ∈ {−2^{k+1}, …, 2^{k+1}}.
The solution set Sol(I, k) is defined from Sol(I) by approximating each coordinate. That is, we define Sol(I, k) = {x ∈ D_{I,k} | ∃x* ∈ Sol(I) : ∥x* − x∥_∞ ≤ ε}. Note that if we had instead defined Sol(I, k) by truncating the coordinates of solutions x* ∈ Sol(I) to k bits of precision, we would have obtained the possibly harder problem of partial computation, which was also considered by Etessami and Yannakakis [EY10]. We say that Π can be approximated in polynomial time if the approximation problem Π_a can be solved in time polynomial in |I| and k.

Reductions between search problems

Let Π and Γ be search problems. A many-one reduction from Π to Γ consists of a pair of functions (f, g). The function f is called the instance mapping and the function g the solution mapping. The instance mapping f maps any instance I of Π to an instance f(I) of Γ, and for any solution y ∈ Sol(f(I)) of Γ the solution mapping g maps the pair (I, y) to a solution x = g(I, y) ∈ Sol(I) of Π. It is required that Sol(f(I)) ≠ ∅ whenever Sol(I) ≠ ∅. We will only consider many-one reductions, and will refer to these simply as reductions.

If Π_1 and Π_2 are discrete search problems, a reduction (f, g) between Π_1 and Π_2 is a polynomial time reduction if both functions f and g are computable in polynomial time. If Π_1 and Π_2 are real-valued search problems it is less obvious which notion of reduction is most appropriate, and we shall consider several different types of reductions. For all of these we assume that f is computable in polynomial time. The reduction (f, g) is a real polynomial time reduction if g is computable in polynomial time by a constant-free BSS machine. We shall generally consider this notion of reduction too powerful. In particular, the definition does not guarantee that the function g is continuous in its second argument y. For this reason we instead consider reductions defined by algebraic circuits over a given basis B of real-valued basis functions.

We say that the reduction (f, g) is a polynomial time B-circuit reduction if there is a polynomial time computable function that maps an instance I to a B-circuit C_I in such a way that C_I computes a function C_I : D_{f(I)} → D_I where g(I, y) = C_I(y) for all y ∈ Sol(f(I)). Note in particular that the size of C_I and the bitsize of all constant gates are bounded by a polynomial in |I|. If in addition there exists a constant h such that the depth of C_I is bounded by h for all I, we say that the reduction (f, g) is a polynomial time constant depth B-circuit reduction. Etessami and Yannakakis [EY10] defined the even weaker notion of reduction where the solution mapping is a separable linear transformation. The reduction (f, g) is an SL-reduction if there is a function π : {1, …, d_I} → {1, …, d_{f(I)}} and rational constants a_i, b_i, for i = 1, …, d_I, all computable in polynomial time from I, such that for all y ∈ Sol(f(I)) it holds that x_i = a_i y_{π(i)} + b_i, where x = g(I, y). Thus an SL-reduction is simply a projection reduction together with an individual affine transformation of each coordinate of the solution.

Functions computed by algebraic circuits over the basis {+, *_ζ, max} are piecewise linear. We shall thus refer to polynomial time {+, *_ζ, max}-circuit reductions as polynomial time piecewise linear reductions, or simply PL-reductions.

It is easy to see that all the notions of reduction defined above are transitive, i.e. if Π reduces to Γ and Γ reduces to Λ, then Π reduces to Λ as well.
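As a concrete (toy) illustration of an SL-reduction's solution mapping: it only selects coordinates and rescales them, which is what makes the notion so restrictive.

```python
# Solution mapping of an SL-reduction: x_i = a_i * y_{pi(i)} + b_i.

def sl_map(y, pi, a, b):
    """Apply a separable linear solution mapping to a solution y of f(I)."""
    return [a[i] * y[pi[i]] + b[i] for i in range(len(pi))]

# Toy example: the reduced instance has 4 coordinates, the original has 2;
# coordinate 0 is y_3 rescaled from [-1, 1] to [0, 1], coordinate 1 is y_0.
y = [0.25, -0.5, 0.75, 0.4]
x = sl_map(y, pi=[3, 0], a=[0.5, 1.0], b=[0.5, 0.0])
print(x)  # [0.7, 0.25]
```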
A desirable property of PL-reductions is that the solution mapping g is polynomially continuous. By this we mean that for every rational ε > 0 there is a rational δ > 0 such that for all points x and y of the domain, ∥x − y∥_∞ ≤ δ implies ∥g(x) − g(y)∥_∞ ≤ ε, and the bitsize of δ is bounded by a polynomial in the bitsize of ε and in |I|. An example of a notion of reduction not guaranteed to be polynomially continuous is that of {+, *, max}-circuit reductions, since a circuit might perform repeated squaring. Constant depth {+, *, max}-circuit reductions, however, are still polynomially continuous.

Total real-valued search problems

As in the case of TFNP, where interesting classes of total NP search problems may be defined in terms of existence theorems for finite structures [Pap94; GP18], we may define classes of total real-valued ∃R search problems based on existence theorems concerning domains D_I ⊆ R^n. Typical examples of such domains D_I are spheres and balls. Suppose p is either a real number p ≥ 1 or p = ∞. By S^n_p and B^n_p we denote the unit n-sphere and unit n-ball with respect to the ℓ_p-norm, defined as S^n_p = {x ∈ R^{n+1} | ∥x∥_p = 1} and B^n_p = {x ∈ R^n | ∥x∥_p ≤ 1}, respectively. If p is not specified, we simply assume p = 2.

The Brouwer fixed point theorem and FIXP

We recall here the definition of the class FIXP by Etessami and Yannakakis [EY10]. The class FIXP is defined by starting with the ∃R search problems given by the Brouwer fixed point theorem, and afterwards closing the class with respect to SL-reductions. We shall refer to these defining problems as basic FIXP problems. The class FIXP_a is the class of strong approximation problems corresponding to FIXP. More precisely, FIXP_a consists of all discrete search problems polynomial time reducible to the problem Π_a for some Π ∈ FIXP.

The definition of FIXP is quite robust with respect to the choice of domain and the set of basis functions allowed for the circuits in the basic FIXP problems. Etessami and Yannakakis proved that basic FIXP problems defined by {+, −, *, ÷, max, min, k√}-circuits are still in the class FIXP. Likewise, basic FIXP problems where D_I is a ball with rational-valued center and diameter, or more generally an ellipsoid given by a rational center-point and a positive-definite matrix with rational entries, are still in the class FIXP [EY10, Lemma 4.1]. The same argument allows for using as domain the ball B^d_p with respect to the ℓ_p-norm for any rational p ≥ 1 or p = ∞, with the coordinates possibly transformed by individual affine functions.

On the other hand, Etessami and Yannakakis also proved that one may greatly restrict the class of basic FIXP problems used to define FIXP without changing the class. The domains may be restricted to unit hypercubes [0, 1]^{d_I} and the circuits may be restricted to {+, *, max}-circuits. Both restrictions may in fact be imposed at the same time. The restriction to {+, *, max}-circuits is a consequence of first proving that the problem of finding a Nash equilibrium in a given finite game in strategic form is hard for FIXP with respect to SL-reductions, and then proving FIXP-membership of this problem using {+, *, max}-circuits.
Another way to restrict circuits is by limiting their depth. Nash's function for expressing Nash equilibria as Brouwer fixed points involves divisions, but as noted by Etessami and Yannakakis it may be viewed as a constant depth circuit if one allows addition gates of arbitrary fanin. Thus in the definition of FIXP one may restrict circuits to constant depth {+, *, max}-circuits, where the addition gates are allowed to have unbounded fanin.

We show in Proposition 3 of Section 3 that one may in fact take this much further and completely flatten the circuits of the defining problems for FIXP to depth 1 circuits of fanin at most 2, additionally also without requiring division. In other words, each coordinate function becomes just a simple function of at most 2 coordinates of the input. We also show in Proposition 1 that FIXP is closed under much more powerful reductions than the basic SL-reductions used to define the class.

In defining the class BU, Deligkas et al. restrict their attention to spheres with respect to the ℓ_1-norm as domains and to functions computed by {+, −, *, max, min}-circuits. Compared to the definition of FIXP, division gates are thus excluded. However, we show later in Section 4 that division gates can always be eliminated. Having thus fixed the set of basic BU search problems, what remains in order to define BU is to settle on a notion of reductions. In their journal paper, Deligkas et al. [DFMS21] suggest using reductions computable by general algebraic circuits including non-continuous comparison gates, whereas in the preceding conference paper [DFMS19] they did not precisely define a choice of reductions. We shall revisit the question of the choice of reductions in Section 4 before proposing our definition of BU.

Consensus Halving

We give here a formal definition of consensus halving with additive measures as a real-valued search problem.

Definition 5. The problem CH is defined as follows. An instance I consists of a list of {+, −, *, ÷, max, min}-circuits C_1, …, C_n computing distribution functions F_1, …, F_n defined on the interval A = [0, 1]. The domain is D_I = S^n_1 and Sol(I) consists of all x for which

Σ_{j=1}^{n+1} sgn(x_j) (F_i(t_j) − F_i(t_{j−1})) = 0 for all i = 1, …, n,

where t_0 = 0 and t_j = Σ_{k≤j} |x_k|, for j = 1, …, n + 1.

Tools from Real Algebraic Geometry

For obtaining our results concerning strong approximation we need concrete bounds on δ > 0 as a function of ε > 0 witnessing the truth of "epsilon-delta" statements. When such a statement is expressible in the first-order theory of the reals, such bounds can be obtained in a generic way using the general machinery of real algebraic geometry [BPR16]. This approach has been used several times previously for establishing FIXP_a membership of the problem of strong approximation of Nash equilibrium refinements [EHMS14; Ete20; HL18]. Concretely, suppose that Φ(ε, δ) is a formula with free variables ε and δ of the form

Q_1 x^{(1)} ∈ R^{k_1} ⋯ Q_ω x^{(ω)} ∈ R^{k_ω} : F(ε, δ, x^{(1)}, …, x^{(ω)}),

where Q_i ∈ {∀, ∃}, and F is a Boolean formula whose atoms are polynomial equalities and inequalities involving polynomials of degree at most d and having integer coefficients of bitsize at most τ. In our applications, the formula Φ is defined from a given instance I. Both τ and d will be bounded by fixed polynomials in |I|. The number ω of blocks of quantified variables will be a fixed constant, and the k_i for 1 ≤ i ≤ ω are bounded by fixed polynomials in |I| as well. In other words, there will be a fixed polynomial q such that the formula Φ(ε, δ) is true for some δ ≥ ε^{2^{q(|I|)}}.
The first-order formulas we consider also use the evaluation of functions computable by algebraic circuits as a primitive. We may in a generic way transform such formulas so that they contain only polynomial inequalities and equalities, as required above. Namely, we may perform a Tseitin-style transformation, introducing existentially quantified variables for each gate of the circuit and expressing by polynomial inequalities and equalities that each gate is computed correctly; the variables corresponding to the output gates may then be used in place of the function. As long as the number of evaluations of functions is constant, this leaves the number of blocks of quantified variables constant.

Structural Properties of FIXP

Recall that FIXP is defined as the closure of all basic FIXP problems with respect to the very simple notion of SL-reductions. We first show that FIXP is in fact closed under general circuit reductions.

Proposition 1. Suppose that Π is a ∃R search problem defined with unit hypercube domains that reduces to Γ ∈ FIXP by a polynomial time {+, −, *, ÷, max, min, k√}-circuit reduction. Then Π belongs to FIXP as well.

Proof. We may without loss of generality assume that the domain of Γ is also the unit hypercube. Let (f, g) be the assumed reduction from Π to Γ, and let I be an instance of Π. Combining the circuit defining the instance f(I) of Γ with the circuit computing the solution mapping g yields a function H that is computed by a {+, −, *, ÷, max, min, k√}-circuit; this defines a ∃R search problem Λ in FIXP with the same set of instances as Π. We note that the projection of a fixed point of H to the last m coordinates gives a solution to Π, from which it follows that Π in particular SL-reduces to Λ. Therefore Π belongs to FIXP as well.

Our next basic result is based on properties of the basic FIXP problem used by Etessami and Yannakakis to show that the division operation is not necessary to express all of FIXP. We give a brief review of their construction. An instance I describes a d-player game in strategic form. Player i has a set S_i of n_i = |S_i| pure strategies and a utility function u_i defined on S_1 × ⋯ × S_d; let n = n_1 + ⋯ + n_d denote the total number of strategies. The domain is given as D_I = ∆^{n_1−1} × ⋯ × ∆^{n_d−1}, where the (n_i − 1)-dimensional unit simplex ∆^{n_i−1} is identified with the set of probability distributions on S_i, for i = 1, …, d. The domain D_I may be viewed as a subset of R^n in the natural way. The utility functions define the function v : D_I → R^n whose coordinate v_{i,a_i}(x) is the expected utility for player i of playing the pure strategy a_i when the other players play according to x; define h : D_I → R^n by h(x) = x + v(x), and finally let G_I : D_I → D_I be defined by letting G_I(x) be the projection of h(x) onto D_I. For all i = 1, …, d, it holds that G_I(x)_{i,a_i} = max(h_{i,a_i} − t_i, 0), where t_i is the unique value satisfying Σ_{a_i ∈ S_i} max(h_{i,a_i} − t_i, 0) = 1. The fixed points of G_I are exactly the Nash equilibria of the game described by I [EY10, Lemma 4.5], and the search problem is therefore FIXP-complete [EY10, Theorem 4.3].
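The projection step in G_I, i.e. finding t_i with Σ_{a_i} max(h_{i,a_i} − t_i, 0) = 1, is the standard Euclidean projection onto the probability simplex. A sketch using the usual sort-based method follows (this is one common way to compute t_i; the point in the text is rather that the projection can be realized by a {+, −, *, max, min}-circuit):

```python
# Euclidean projection of a vector h onto the probability simplex:
# output max(h_a - t, 0) with t chosen so the coordinates sum to 1.
import numpy as np

def project_simplex(h):
    u = np.sort(h)[::-1]                 # sorted descending
    css = np.cumsum(u)
    # The largest k with u_k > (sum of top k - 1) / k determines the threshold t.
    ks = np.arange(1, len(h) + 1)
    k = np.max(ks[u > (css - 1.0) / ks])
    t = (css[k - 1] - 1.0) / k
    return np.maximum(h - t, 0.0)

h = np.array([0.9, 0.4, 1.3, -0.2])
p = project_simplex(h)
print(p, p.sum())  # nonnegative, sums to 1; a fixed point of G_I would be an NE
```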
There is a basic FIXP problem Π NE , complete for FIXP under SL-reductions, such that for any instance I it holds that D I = [0, 1] d I and such that C I is a {+, −, * , max, min}-circuit that satisfies that all gate functions of C I compute values in [0, 1] given input x ∈ D I . From here we may derive a characterization of FIXP in terms of depth 1 circuits, where the addition and subtraction operators (necessarily) are truncated to the interval [0, 1].This is simply done by a Tseitin-style transformation.One may note that a Tseitin-style transformation is already used in the proof that Π NE is FIXP-hard.This means such a transformation is applied twice at different points of the proof to yield the statement below.We may consider the input as pairs (x, y) ∈ [0, 1] d I × [0, 1] m I and we may think of the output gates as variables, similarly grouped as (z, w) and ranging over [0, 1] d I × [0, 1] m I .If g j is an input gate labeled by x i , we let w j = x i , and if g j is a constant gate labeled by c ∈ [0, 1] we let w j = c.If g j is an addition gate taking as input gates g k and g ℓ we let w j = (y k + y ℓ ) T [0,1] , i.e. the addition of g k and g ℓ is simulated by a truncated addition of y k and y ℓ .The case of subtraction is analogous.If g j is a multiplication gate taking as input g k and g ℓ we let w j = y k • y ℓ .The case of maximum and minimum gates are analogous.Finally if g j is the ith output gate of C I we let z i = y j .By construction C ′ I computes a function F ′ I : D ′ I → D ′ I and F ′ I (x, y) = (x, y) if and only if g j computes the value y j on input x for all j and C I (x) = x.We thus obtain x such that C I (x) = x as the projection of (x, y) to the first d I coordinates. In case we prefer to construct a normal {+, −, * , max, min}-circuit without truncated operations we can clearly simulate the truncated addition and subtraction operations by depth 3 circuits.We can also easily convert the circuits to constant depth {+, * , max} circuits by considering the the domain 4 Definition and Structural Properties of BU and BBU In this section we define two classes of ∃R search problems BU and BBU based on the Borsuk-Ulam theorem corresponding to formulations (1) and (3) of Theorem 1.We start by defining basic BU and basic BBU problems.We shall restrict our attention to the unit n-sphere and unit n-ball, but with regards to any ℓ p norm for p ≥ 1 or p = ∞.For the case of BU this amounts to specializing Definition 4. The condition that the function F I is odd on ∂ B d I p is a semantic condition.However, typically the function F I would be defined from a basic ℓ p -BU problem by a transformation done in a similar way as in the proof of Theorem 1, and thereby F I would satisfy the condition automatically. To define the classes BU and BBU, we restrict our attention to domains with respect to the ℓ ∞ -norm. Definition 9.The class BU (respectively, BBU) consists of all total ∃R search problems that are PLreducible to a basic ℓ ∞ -BU problem (respectively, basic ℓ ∞ -BBU problem) for which the function F I is defined by a {+, −, * , ÷, max, min}-circuit C I . 
While the definition of BU in [DFMS21] was using as domain the unit sphere with respect to the ℓ 1norm and not allowing for division gates, we show in this section these changes do not change the class.We propose choosing PL-reductions for closing the class under reductions.PL-reductions are sufficient for obtaining all of our results and they are polynomially continuous.Another reason for this choice is that if we restrict the circuits defining the classes FIXP and BU to also be piecewise linear, i.e. be {+, * ζ , max}circuits, we obtain the classes LinearFIXP and LinearBU, that when closed under polynomial-time reductions are equal to PPAD and PPA, respectively [EY10; DFMS21]. Elimination of Division Gates In this section, we show how to eliminate division gates from circuits defining an instance of the BU or BBU problems.Let therefore C denote an algebraic circuit defined over the basis {+, −, * , ÷, max, min, k √ }. Moving Divisions to the Top.In the paper [EY10], it is shown how to move all division gates to the top of the circuit by keeping track of the numerator and denominator of every gate.For sake of completeness we describe this transformation.Every gate g i is replaced by two gates g ′ i and g ′′ i keeping track of the numerator and denominator, that is the value of g i in the original circuit will be equal to the value of g ′ i /g ′′ i in the transformed circuit.Firstly, if g i is an input gate or a constant-c gate we put g ′ i = x j for appropriate j (respectively g ′ i = c) and g ′′ i = 1.In order to maintain the equality g i = g ′ i /g ′′ i , we proceed as follows: if g i = g j ± g k is an addition/subtraction gate in the original circuit, then we update the numerator and denominator to For root gates, we note that if g j = g ′ j /g ′′ j is input to a k √ −gate g i for k even, then g j ≥ 0, from which it follows that sgn(g ′ j ) = sgn(g ′′ j ).With this in mind, we see that we may maintain the numerator and denominator of g i by putting Finally, for the max-gate we note that max(ca, cb) = c max(a, b) for c ≥ 0. Using this we see that if g i = max(g j , g k ), then we may maintain the numerator and denominator via the formulas We note that all this can be done only blowing up the size of the circuit by a constant factor.In the aforementioned paper, the authors then have division gates at the top outputting out i = out ′ i /out ′′ i .However, for our application this is unnecessary and we may completely remove division gates. Removing Division Gates for BBU.Suppose that Π is a BBU problem.Let I be an instance of Π and denote by C I an algebraic circuit computing a continuous function F I : , meaning that F * I is odd on the boundary.In this way we have defined a BBU problem Γ with the same instances as Π.Furthermore, given an instance I of Π one may in polynomial time compute an instance f (I) of Γ by computing C * I .We note that for any x ∈ B d I it holds that F I (x) = 0 if and only if F * I (x) = 0. We conclude that Π SL-reduces to the division-free BBU−problem Γ. Removing Division Gates for BU.Now let I be an instance of a BU−problem Π and denote by C I an algebraic circuit computing a continuous function F I : We make the same reduction as for BBU defining a circuit C * I that computes a function F * I : S d I → R d I whose coordinate functions are given by ) for all i, meaning that x is a BU-point of F * I .Again, we conclude that Π SL-reduces to a division-free BU−problem. In the previous two paragraphs, we have shown the following result. Proposition 4. 
The classes BU and BBU remain the same even if the circuits are restricted to not using division gates. Relationship with FIXP As a consequence of their results about consensus halving, Deligkas et al. proved that FIXP ⊆ BU.We observe here that the direct proof that the Bosuk-Ulam theorem implies the Brouwer fixed point theorem due to Volovikov [Vol08] gives a much simpler way to derive this relationship.For completeness we present the construction and proof of Volovikov. The above construction immediately give a simple reduction from any basic FIXP problem with domains B d I ∞ to a basic ℓ ∞ -BU problem.The solution mapping of the reduction must map solutions (x,t) to tx.This may be done by simply using multiplication gates.But since any solution (x,t) has |t| = 1 the multiplication tx i may also be expressed as Sel 2 (−x i , x i ,t), which means the solution mapping can also be computed by constant depth {+, * , max}-circuits.∞ and {+, * , max}circuits C I .From that, the instance mapping as described by Proposition 5 produces a {+, * , max}-circuit and domain S d I ∞ .The composition of the SL-reduction and the reduction described above then yields the claimed types of reductions. Change of Domains for BU and BBU In this section we show reduce between different domains for the BBU and BU problems. Proposition 7. Let B be a set of gates that contains {+, −, * , ÷, max, min}.Suppose that Π is an ∃R search problem whose domains are contained in hypercubes that reduces to a basic ℓ p − BBU problem Γ by a polynomial time B-circuit reduction ( f , g).Furthermore, suppose that for any instance I of Π the function g(I, •) mapping solutions of f (I) to solutions of I is odd and assume that C f By assumption of Γ we may in polynomial time compute another circuit C f (I) that defines a function F : [−1, 1] n → R n that is odd on the boundary such that Sol( f (I)) are the zeroes of F. As G is odd and F is odd on the boundary, one may verify that H is odd on the boundary of Given a zero of H one may recover a solution to Π by projecting onto the last m coordinates and multiplying by 2. In particular, Π SL-reduces to Λ. (ii) Now, suppose that the domains of Γ are p-balls, where 1 ≤ p < ∞.Again by assumption we have that x) ∈ Sol(I) for every x ∈ Sol( f (I)).Furthermore, we may in polynomial time compute a circuit C f (I) computing a function is the zeroes of F. Now define an odd function h : B n p → B n p by h(x) = x/ max(1/2, ||x|| p ), which may be computed by a circuit using also p √ • gates, and define H : First we remark that H is odd on the boundary of B n+m p .Clearly, the second coordinate is always odd, and the first coordinate evaluates to 0 if ||y|| p p > 1/2.If (x, y) ∈ S n+m−1 p and ||y|| p p < 1/2, then ||x|| p p > 1/2 which implies that ||x|| p > 1/2.This then implies that h(x) = x/||x|| p and so From the first first coordinate equality max(0, 1/2 − ||y|| p p )F(h(x)) = 0 one then obtains that F(h(x)) = 0 so h(x) ∈ Sol( f (I)).Thus, the set of zeroes of H are contained in {(x, Furthermore, H can be computed by circuit over B ∪ { p √ •}, so this defines a basic ℓ p − BBU problem Λ with (B ∪ { p √ •})-circuits and the same instances as Π.From a zero of H we may again recover a solution to Π by projecting onto the last m coordinates and multiplying the result by 3n.We conclude that Π SL-reduces to Λ. 
Thus we have defined a map f taking instances I of Π to instances f (I) of a basic ℓ p − BBU problem Γ.We note that f is computable in polynomial time.Furthermore, from x ∈ Sol( f (I)) one may recover a solution by g(I, x) = π(x) to I. As the function g(I, •) is odd and computable by a {+, −, * , ÷, max, min}−circuit we conclude that ( f , g) satisfies the requirements of the previous proposition.We conclude that Π SL-reduces to a ℓ p − BBU using gates from {+, −, * , ÷, max, min, p √ •}. Proposition 9. Any basic ℓ p − BBU problem Π SL-reduces to a basic ℓ ∞ − BBU problem where the circuits are allowed to use p √ • gates. Proof.Let Π be a basic ℓ p − BBU problem and let I denote an instance of Π.We may compute a circuit C I defining a function F : B n p → R n that is odd on the boundary of B n p such that Sol(I) is the set of zeroes of F. Now, define a function h : R n → R n by h(x) = x/ max(1/2, ||x|| p ).We may now in polynomial time compute a {+, −, * , ÷, max, max, p √ •}-circuit computing the function showing that H is odd on the boundary of B n ∞ , so it defines an instance of an ℓ ∞ − BBU problem Γ. Mapping back solutions amounts to computing h(x) which can be done by a {+, −, * , ÷, max, max, p √ •}-circuit.The result now follows from part (i) of Proposition 7. Now we proceed with showing the reductions between basic ℓ p − BU problems. Proposition 10.Suppose that Π is an ∃R search problem whose domains are contained in hypercubes that reduces to a basic ℓ p − BU problem by a B−circuit reduction ( f , g), where {+, −, * , ÷, max, min} ⊆ B. Assume also that for every instance I of Π, and that g(I, Proof.(i) Let I denote an instance of Π and let m = d I .By assumption of ( f , g) we may in polynomial time compute a circuit defining a function F : . By the result in Section 4.1 we may assume that the circuit computing F is divisionfree, and so we may extend the domain of F to be R n+1 .Also, we may in polynomial time compute a circuit defining a function G : R n+1 → [−1, 1] m mapping Sol( f (I)) to Sol(I).Define H : S m+n ∞ → R m+n by H(x, y) = (F(x), y − 1 2 G(x)).We note that H may be computed by a circuit over B, so it defines a basic ℓ ∞ − BU problem Λ with B−circuits and the same instances as Π. ∞ < 1 and so ||x|| ∞ = 1.Also, the first coordinate shows that F(x) = F(−x).This says that x ∈ Sol( f (I)), and so G(x) ∈ Sol(I).As 2y = G(x) we see that Π SL-reduces to Λ. Proof.(i) Let Π denote a basic ℓ p − BU problem.Suppose an instance I is defined by some continuous function F : S n p → R n .By Section 4.1 we may assume that the circuit computing F is division-free and so extend F to be defined in all of R n+1 .Define a function g : R n+1 → [−1, 1] n+1 by g(x) = x/ max(1/2, ||x|| p ) and H : S n ∞ → R n by H(x) = F(g(x)).Let f denote the map sending the instance I to the instance f (I) given by H of a basic ℓ ∞ − BU problem Γ.One may verify that ( f , g) is a reduction satisfying the properties of Proposition 10, so by part (i) of Proposition 10 we have that Π SL-reduces to the basic ℓ ∞ − BU. (ii) Let Π denote a basic ℓ p − BU problem.Suppose an instance I is defined by some continuous function F : S n ∞ → R n .Again, we may extend F. Similarly to the case above, we define a function g : R n+1 → [−1, 1] n+1 by g(x) = x/ max(1/(n + 1), ||x|| p ) and H : S n p → R n by H(x) = F(g(x)).Let f denote the map sending the instance I to the instance f (I) given by H of a basic ℓ ∞ − BU problem Γ. 
First, the map g satisfies the condition of Proposition 10.If x Sol( f (I)) then it holds that x ∈ S n p , and so 1 = ||x|| p ≤ (n + 1)||x|| ∞ , implying that ||x|| ∞ ≥ 1/(n + 1).From this it follows that g(x) = x/||x|| ∞ by definition.Furthermore, using that g is odd we find that ).We conclude that g(x) ∈ Sol(I).In conclusion, ( f , g) is a reduction from Π to Γ satisfying the properties of Proposition 10.By part (ii) of Proposition 10 we conclude that Π SL-reduces to a basic ℓ ∞ − BU problem.Note that the reductions of this proposition are a special case of PL-reductions.Because these are polynomially continuous, we automatically also get the following result.and π : R n+1 → R n is the projection π(x 1 , . . ., x n+1 ) = (x 1 , . . ., showing that H(π(x)) = 0, so π(x) is a solution to the original problem.Similarly, if x n+1 = −1, then H(−π(x)) = 0, so −π(x) is a solution to the original problem.In the case where |x n+1 | < 1 we have that ||π(x)|| ∞ = 1 and so H(π(x)) = −H(−π(x)), because H is odd on the boundary.By definition of the selection-function Sel this implies that and similarly F(−x) = −H(π(x)).The equality F(x) = F(−x) then implies that H(π(x)) = H(−π(x)) = 0, so both π(x) and −π(x) is a solution to the original instance in this case1 .In conclusion, if we could recover the sign of x n+1 then we could define a solution map sending x to sgn(x n+1 )π(x), but we do not allow this.However, in the approximate version, we may do this.Proposition 14.Any basic ℓ p − BBU a problem polynomial time reduces to a basic basic ℓ p − BU a problem. Proof.After changing domain we may assume that p = ∞.Given an instance (H, ε) of a basic ℓ ∞ − BBU a problem we apply the above construction and the map f outputs the instance (F, ε ′ ) of a basic ℓ ∞ − BU a problem where ε ′ = min(ε, 1/2).Now suppose that x is a solution to the problem (F, ε ′ ) .This means there exists some x * with ||x − x * || ∞ ≤ ε ′ and F(x * ) = F(−x * ).We now claim that we may map back the solution x of (F, ε ′ ) to a solution of (H, ε) by the map g(x) = sgn(x n+1 )π(x). Combining Proposition 13 and Proposition 14 we obtain the following result. Theorem 3. BU a = BBU a Consensus Halving In this section we present the proof of our main result Theorem 2. This result enables an additional structural result, given in Section 6.5 about the class of strong approximation problems BU a = BBU a , showing that the class is unchanged even when allowing root operations as basic operations. Suppose we are given a basic ℓ ∞ − BBU a problem Π a with circuits over the basis {+, −, * , max, min}.Let (I, k) denote an instance of Π a and put ε = 2 −k .We may in polynomial time compute a circuit C defining a function F : We now provide a reduction from Π a to a CH a -problem.In the reduction we will make use of the "almost implies near" paradigm. Proof.Let F and ε > 0 be given.Suppose the claim is false.Then for any n ∈ N there is an By compactness the Bolzano-Weierstrass theorem implies the existence of a subsequence {x n i } converging to some x * ∈ B n ∞ .By continuity of F and || • || ∞ we get that ||F(x * )|| ∞ = lim i→∞ ||F(x n i )|| ∞ = 0, showing that F(x * ) = 0.However, for sufficiently large i ∈ N it holds that ||x n i − x * || ≤ ε contradicting the choice of the x n .This lemma says that for any ε > 0, if ||F(x)|| ∞ is sufficiently close to being zero, then x is ε-close to a real zero of F. 
When F is computed by an algebraic circuit of polynomial size, it follows by the results in Section 2.7 that there exists some fixed polynomial q with integer coefficients such that the above lemma holds true for some δ ≥ (ε) 2 q(|I|) .The lemma then holds true for δ = (ε) 2 q(|I|) , and we may construct this number using a circuit of polynomial size by repeatedly squaring the number ε exactly q(|I|) times.This number will be used by the feedback agents in our CH a instance in order to ensure that any solution gives a solution to the ℓ ∞ − BBU a instance. Overview of the Reduction Overview.As in previous works, we describe a consensus halving instance on an interval A = [0, M], where M is bounded by a polynomial in|I|, rather than the interval [0, 1].This instance may then be translated to an instance on the interval [0, 1] by simple scaling.Like [FHSZ20], in the leftmost end of the instance we place the Coordinate-Encoding region consisting of n intervals.In a solution S, these intervals will encode a value x ∈ [−1, 1] n .A circuit simulator C will simulate the circuit of F on this value x.The circuit simulators will consist of a number of agents each implementing one gate of the circuit.However, such a circuit simulator may fail in simulating F properly, so we will use a polynomial number of circuit simulators C 1 , . . .,C p(n) .Each of these circuit simulators will output n values [C j (x)] 1 , . . ., [C j (x)] n into intervals I 1 j , . . ., I n j immediately after the simulation.Finally, we introduce the so-called feedback agents f 1 , . . ., f n .The agent f i will have some very thin Dirac blocks centered in each of the intervals I i j where j ∈ [p(n)].These agents will ensure that if z is an exact solution to the CH instance, then the encoded value x satisfies that ||F(x)|| ∞ is sufficiently small that we may conclude that x is ε-close to a zero x * of F. Label Encoding.For a unit interval I we let I ± denote the subsets of I assigned the corresponding label.We define the label encoding of I to be a value in [−1, 1] given by the formula v l (I) := λ (I + ) − λ (I − ), where λ denotes the Lebesgue measure on the real line R.This makes sense as I ± is measurable, because they are the union of a finite number of intervals. Coordinate-Encoding Region.The interval [0, n] is called the Coordinate-Encoding region.For every i ∈ [n], the subinterval [i − 1, i] of the Coordinate-Encoding region encodes a value x i := v l ([i − 1, i]) via the label encoding. Position Encoding.For an an interval I which contains only a single cut, thus dividing I into two subintervals I = I a ∪ I b , we define the position encoding of I to be the value v p (I) := λ (I 1 ) − λ (I 2 ).We note that v p (I) = v l (I) if the labeling sequence is −/+, and v p (I) = −v l (I) in the case the labeling sequence is +/−.From Label to Position.Before a circuit simulator there is a sign detection interval I s which detects the labeling sequence.Unless it contains a stray cut, this interval will encode a sign s = ±1 (to be precise 1 if the label is + and −1 is the label is −).By placing agents that flip the label as indicated below, we may now obtain position encodings of the values sx 1 , . . ., sx n .These values will be read-in as inputs to the subsequent circuit simulator. Circuit Simulators.As mentioned above, the circuit simulator C j will read-in the values s j x 1 , . . ., s j x n and simulate the circuit computing F on this input.They then output their values into n intervals immediately after the simulation. 
Feedback Agents.By the discussion after the proof of Lemma 1 we may by repeated squaring construct a circuit of polynomial size in |I| computing a tiny number δ > 0 such that if ||F(x)|| ∞ ≤ δ then x is (ε/2)-close to a zero of F. Now fix i ∈ [n] and let c i j denote the centre of the feedback interval I i j outputs the value [C j (s j • x)] i .We then define the ith feedback agent to have constant density 1/δ in the intervals The reason for having the feedback agents have these very narrow Dirac blocks is that if F i (x) > δ for some i, then in any of the "uncorrupted" circuits (i.e.circuits outputting the correct values) all the density of the ith agent will contribute to the same label.Moreover, we will show using the boundary condition of F that the contribution is to the same label in all the uncorrupted circuit simulators.This will contradict that the feedback agents should value I + and I − equally.That is the feedback agents ensure that ||F(x)|| ∞ ≤ δ if x is the value encoded by an exact solution to the consensus halving instance we construct. Stray Cuts.Any of the agents implementing one of the gates in a circuit simulator will force a cut to be placed in an interval in that same circuit simulator.The only agents whose cuts we have no control over are the n feedback agents.The expectation is that these agents should make cuts in the Coordinate-Encoding region that flip the label.If they do not do this we will call it a stray cut.If a circuit simulator contains a stray cut, we will say nothing about its value. Observation 1.If it is not the case that every unit interval encoding a coordinate x i in the Coordinate-Encoding region contains a cut that flips the label, then the encoded point x ∈ B n ∞ will lie on the boundary S n ∞ .With this in mind we may ensure that where the sign is the same as the label of the first interval.This can be done by, if necessary, placing one single-block agent after the Coordinate-Encoding region and each of the circuit simulators (if placing such an agent is necessary depends on, respectively, the number of variables n and the size of the circuits). Construction of Gates In this section we describe how to construct Consensus-Halving agents implementing the required gates {+, −, * , max, min}.First, we show that we may transform the circuit such that all gates only take values in the interval [−1, 1] on input from B n ∞ . Transforming the Circuit.By propagating every gate to the top of the circuit we may assume that the circuit is layered.Let C ′ denote the resulting circuit.By repeated squaring we may maintain a gate with value 1/2 2 d in the dth layer.Suppose g = α(g 1 , g 2 ) is a gate with inputs g 1 , g 2 in layer d.We modify the gates as follows: if α ∈ {+, −, max, min} then we multiply g i by 1/2 2 d before applying α; if α = * , then we multiply the input by 1 before applying α.Finally, we transform C ′ into the circuit C ′′ as follows: on input x, the circuit C ′′ multiplies the input by 1/2 and then evaluates C ′ on input x/2.Inductively, one may show that if g is a gate in layer d in the circuit C ′ , then the corresponding gate in in the circuit C ′′ has value g/2 2 d .As all the gates are among {+, −, * , max, min}, this ensures that all the gates in C ′′ take values in [−1, 1]. Addition Gate [G + ].We may construct an addition gate using two agents.The first agent has two unit input intervals that we assume contain one cut each.This then forces a cut in the long output interval that has length 3. 
The second agent then truncates this value.Before proceeding with the remaining gates, we construct a general function gate, an agent that implements any decreasing function.The agent that we construct has a block of height 2/(d − c) in the sub-interval [(c + 1)/2, (d + 1)/2] of the output interval and density f (z) := −2h ′ (2z − 1)/(d − c) in the sub-interval ((a + 1)/2, (b + 1)/2) of the input interval.We note that f is positive in this interval as h is assumed to be a decreasing map, so it makes sense for the agent to have density f .One may verify that the agent values the input interval and output interval equally.We further add two rectangles to the output interval colored blue and red in the sketch below.These will that if the cut in the input interval is placed at z ≤ (a + 1)/2 such that v p (I) ≤ a, then the cut in the output interval must be placed at z * = (d + 1)/2, meaning that Maximum Gate [G max ].First we show how to construct a gate computing the absolute value of the input.We may construct gates G 1 , G 2 such that G 1 (x) = − max(x, 0) and G 2 (x) = max(−x, 0) as function gates by using the functions h 1 : [0, 1] → [−1, 0] given by x → −x and h 2 : [−1, 0] → [0, 1] given by x → −x.Now, we may constrcuct the absolute value gate as G |•| = −G 1 + G 2 .We may now construct G max by using the formula max(x, y) = (x + y + |x − y|)/2. Minimum Gate [G min ].We may build this using min(x, y) = x + y − max(x, y). Describing valuation functions as circuits. In the description above, we described the valuations of the agents by providing formulas for their densities.However, an instance of CH actually consists of a list of algebraic circuits computing the distribution functions of the agents.In order to construct gates, it is sufficient for agents to have densities that are piece-wise polynomial.Therefore, consider an agent with polynomial densities f i in the intervals [a i , b i ) for i = 1, . . ., s, and let F i denote the indefinite integral of f i .We note that F i is a polynomial so it may be computed by an algebraic circuit.Now we claim that the distribution function of this agent may be computed by an algebraic circuit via the formula This is the case, because the summands will be equal to F i (a i ) − F i (a i ) = 0 if x < a i , to F i (x) − F i (a) if a i ≤ x ≤ b i and to F i (b) − F i (a) if x > b i , meaning that this formula does indeed calculate the valuation of the agent in the interval [0, x]. Reduction and Correctness Recall that we are given an instance (F, ε) of the BBU a problem and that we have to construct an instance of the CH a problem.The reduction now outputs an instance of the CH a problem where the consensus halving instance is constructed as above with p(n) = 2n + 1 circuit simulators and the approximation parameter is given by ε ′ = ε/(4n).Let z denote a solution to this CH a instance.By definition, there exists an exact solution z * to the consensus-halving problem such that z − z * ∞ ≤ ε ′ .Let x and x * denote the values encoded by respectively z and z * in the Coordinate-Encoding region.Suppose, generally, we are given an interval I with a number of cut points t 1 , . . 
.,t s .Moving a cut point by a distance ≤ ε ′ we create a new interval I ′ .This changes the label encoding by at most 2ε ′ , that is |v l (I) − v l (I ′ )| ≤ 2ε ′ .Succesively, if we move all the cuts by a distance ≤ ε ′ , then we get an interval I * such that |v l (I) − v l (I * )| ≤ 2sε ′ .As z − z * ∞ ≤ ε ′ and any of the subintervals in the Coordinate-encoding region can contain at most n cuts, we conclude that x − x * ∞ ≤ 2nε ′ = 2n(ε/(4n)) = ε/2.In order to show that x is ε-close to a zero of F, it now suffices by the triangle inequality to show that x * is (ε/2)-close to a zero of F. This will follow from the two following lemmas. As the coordinate-encoding region can contain at most n cuts (corresponding to at most n + 1 intervals), we deduce from the above that the values encoded can be computed as for every i ≤ n.If there is a stray cut then both x and −x are valid solutions by the boundary condition of F. If there is no stray cut, then s 1 = s 2 = • • • = s p(n) = s = sgn(z 1 ) by Observation 1 and in this case we may recover a solution as sx. In this subsection, we argue by going through CH a that the strong approximation problems BU a = BBU a do not change even if we allow the circuits to use root-operations as basic operations. Proposition 15.The class ℓ ∞ − BBU a remains unchanged even if we allow the circuits to use root-gates. Proof.Let Π a be a basic ℓ ∞ − BBU a problem where the circuits are allowed to use gates from the basis {+, −, * , max, min, k √ }.In the previous section, we constructed a polynomial time reduction from Π a to a CH a problem Γ a in such a way that the circuits computing the distribution functions of the agents are defined over {+, −, * , max, min}.Namely, the root gates can be implemented by first noting that the powergate (•) k can be implemented by an agent with polynomial densities by using the general function gate construction.Then, in order to construct an agent implementing the root gate we simply interchange the input interval and output interval of the power-gate.By the proof of the result of Deligkas et al. that CH is contained in BU, the problem Γ a polynomial time reduces to a ℓ 1 − BU a problem Λ that only uses gates from {+, −, * , max, min}.By Proposition 13, Λ reduces to a basic ℓ 1 − BBU a problem Ξ which again uses only gates from {+, −, * , max, min}.Finally, by Proposition 9, Ξ reduces to a basic ℓ ∞ − BBU a problem, again using only gates from {+, −, * , max, min}.Altogether, we see that Π a polynomial time reduces to a ℓ ∞ − BBU a without root-gates. [FHSZ20] compared to previous reductions is in how values are encoded by cuts in subintervals.In previous reductions, values are encoded by what we will call position encoding. Definition 2 . An ∃R search problem Π is a basic FIXP problem if every instance I describes a nonempty compact convex domain D I and a continuous function F I : D I → D I , computed by an algebraic circuit C I , and these descriptions must be computable in polynomial time.The solution set is Sol(I) = {x ∈ D I | F I (x) = x}.The Brouwer fixed point theorem guarantees that every basic FIXP problem is a total ∃R search problem.To define the class FIXP, Etessami and Yannakis restrict attention to a concrete class of basic FIXP problems.Definition 3. 
The class FIXP consists of all total ∃R search problems that are SL-reducible to a basic FIXP problem for which each domain D I is a convex polytope described by a set of linear inequalities with rational coefficients and the function F I is defined by a {+, −, * , ÷, max, min}-circuit C I . 2.5.2The Borsuk-Ulam theorem and BU A new class BU of total ∃R search problems based on the Borsuk Ulam theorem was recently introduced by Deligkas et al. [DFMS21].The definition of BU is meant to capture the Borsuk-Ulam theorem as stated in formulation (1) of Theorem 1.Following the definition of FIXP by Etessami and Yannakakis, Deligkas et al. first consider a set of basic search problems and then close the class under reductions.Definition 4.An ∃R search problem Π is a basic BU problem if every instance I describes a domain D I ⊆ R d I which is homeomorphic to S d I −1 by an an antipode preserving homeomorphism and a continuous function F I : D I → R d I −1 , computed by an algebraic circuit C I , and these descriptions must be computable in polynomial time.The solution set is Sol ) may clearly be computed by {+, −, * , ÷, max, min}-circuits as well.The result of Deligkas et al. that CH is contained in BU follows.The existence proof of a consensus halving by Simmons and Su as well the formulation of a ∃R search problem by Deligkas et al. match the Borsuk-Ulam theorem as stated in formulation (1) of Theorem 1.We shall also define a variation BCH of CH to match formulation (3) of Theorem 1.A point y ∈ B n 1 may be lifted to the point x = (1 − y 1 , y) ∈ S n 1 .This means that we may view y ∈ B n 1 as describing a partition of A by the partition described by x.Compared to the representation of partitions of A into n + 1 intervals given by points of S n 1 we thus restrict the label of the first interval to be +, in case it has positive length.Definition 6.The problem BCH is defined as follows.An instance I consists of a list of {+, −, * , max, min}circuits C 1 , . . .,C n computing distribution functions F 1 , . . ., F n defined on the interval A = [0, 1].The domain is D I = B n 1 and Sol(I) constists of all y for which By assumption D I = [0, 1] m and D f (I) = [0, 1] n , where m = d I and n = d f (I) .From the definition of ( f , g) we may given I in polynomial time compute f (I) as well as the circuit C I that defines a functionG : [0, 1] n → [0, 1] m such that g(I, x) = G(x) for all x ∈ Sol( f (I)).By assumption on Γ we may in polynomial time compute another circuit C f (I) that defines a function F : [0, 1] n → [0, 1] n such that Sol( f (I)) are the fixed points of F.We now define the function H : [0, 1] n+m → [0, 1] n+m by H(x, y) = (F(x), G(x)).Clearly the set of fixed points of H is equal to {(x, G(x)) | x ∈ Sol( f (I))}, and since H is computable by a {+, −, * , ÷, max, min, k Proposition 3 . There is a basic FIXP problem Π, complete for FIXP under SL-reductions, such that for any instance I it holds that D I = [0, 1] d I and such that C I is a depth 1 {+ T [0,1] , − T [0,1] , * , max, min}-circuit, using only constants from the interval [0, 1].Proof.We reduce from the problem Π NE of Proposition 2. The instances of Π are the same instances of Π NE .Let I be an instance of Π NE and let D = [0, 1] d I and C I be the corresponding domain and {+, −, * , max, min}circuit as given by Proposition 2. Suppose that C I has m I gates g 1 , . . 
., g m I .We define the new domain D ′ I for Π simply by D ′ I = [0, 1] d ′ I , where d ′ I = d I + m I .We next define the gates of C ′ I which all are output gates of C ′ I . Definition 7 . A basic BU problems is a basic ℓ p -BU problem if for every instace I we have D I = S d I p .Similarly we define the set of basic BBU problems with respect to the ℓ p -norm.Definition 8.An ∃R search problem Π is a basic ℓ p -BBU problem if for every instance I we have D I = B d I p and I describes a continuous function F I : D I → R d I , which is odd on the boundary ∂ B d I p .The function F I must be computed by an algebraic circuit C I whose description is computable in polynomial time.The solution set is Sol(I) = {x ∈ D I | F I (x) = 0}. As described above, we may transform the circuit C I to a circuit C + I that maintains the numerator and denominator of every gate.In the same way we define a circuit C − I that is exactly like C + I , except it multiplies the input by −1 at the very beginning.Let out n+ i , out d+ i and out n− i , out d− i denote the gates in C ± I representing the numerators and denominators of the output gates of C I .We now define a circuit C * I that on input x feeds this into C + I and C − I and then outputs the values out n+ i • out d− i for i = 1, . . ., d I .If we denote by Proposition 6 . Any Π ∈ FIXP reduces to a basic ℓ ∞ -BU problem with {+, −, * , max, min}-circuit by polynomial time constant depth B-circuit reductions, for both B = {+, −, * } and B = {+, −, * ζ , max, min}.Proof.Any Π ∈ FIXP SL-reduces to a basic FIXP problem Γ with domains D I = B d I Proof.(i) First assume that the domains of Γ are unit hypercubes.Let I denote an instance of Π.By assumption D I ⊆ [−1, 1] m and D f (I) = [−1, 1] n where m = d I and n = d f (I) .From the definition of ( f , g) we may given I in polynomial time compute f (I) and a circuit C I computing a function G : m and D f (I) = B n p where m = d I and n = d f (I) , and we may given an instance I of Π in polynomial time compute a circuit C I defining a function G : B Proposition 8 . Any basic ℓ ∞ − BBU problem SL-reduces to a basic ℓ p − BBU problem using gates in {+, −, * , ÷, max, min, p √ •}.Proof.Let Π be a basic ℓ ∞ − BBU problem.By the previous proposition it suffices to argue that Π polynomial time {+, −, * , ÷, max, min}−reduces to a a basic ℓ p − BBU problem.Given an instance I of Π, compute in polynomial time a circuit C I defining a function F : B n ∞ → R n that is odd on S n−1 p such that Sol(I) are the zeroes of F. Also, define the map (ii) Again let I denote an instance of Π with m = d I .From f (I) we may in polynomial time compute a circuit computing a map F : S n p → R n such that Sol( f (I)) = {x ∈ S n p | F(x) = F(−x)} and a B−circuit computing a map G : R n+1 → [−1, 1] m sending Sol( f (I)) to Sol(I).Again, we may extend the domain of F. Define a map h : R n+1→ R n+1 by h(x) = x/ max(1/2, ||x|| p ) and H : S m+n p → R m+n by H(x, y) = (F(h(x)), y − 1 2 G(h(x)))In this way, we have defined a basic ℓ p − BU problem Λ with B ∪ { p √ •}−circuits and the same instances as Π.If (x, y) ∈ S n+m p has H(x, y) = H(−x, −y) we find that y = 1 2 G(h(x)) so ||y|| p ≤ 1/2.This implies that ||x|| p ≥ 1/2, and so h(x) = x/||x|| p ∈ S n p .Also, the first component shows thatF(h(x)) = F(h(−x)) = F(−h(x))where we use that h is odd.Therefore, h(x) ∈ Sol( f (I)), and so G(h(x)) ∈ Sol(I).As 2y = G(h(x)), we conclude that Π SL-reduces to Λ. 
Proposition 11.Let B = {+, −, * , ÷, max, min}.(i) A basic ℓ p − BU problem Π with B-circuits SL-reduces to a basic ℓ ∞ − BU problem with B ∪ { p √ •}-circuits.(ii) A basic ℓ ∞ − BU problem Π with B-circuits SLreduces to a basic ℓ p − BU problem using B-circuits. 5 Relation between ℓ p − BU and ℓ p − BBU Let B be some finite set of gates containing {+, −, * , ÷, max, min}.In this section we study reductions between ℓ p − BU problems and ℓ p − BBU problems.Suppose we are given a basic ℓ p − BU problem Π with circuits defined over B. In order to show that Π reduces to a basic ℓ p − BBU problem we follow the proof of Theorem 1.Given an instance of Π we may in polynomial time compute the dimension n = d I and a circuit over B defining a map F I : S n p → R n such that Sol(I) = {x ∈ S n p | F I (x) = F I (−x)}.Define also the map π : B n p → S n p by π(x) = (x, (1 − ||x|| Proposition 13 . Any basic ℓ p − BU a problem polynomial time reduces to a basic ℓ p − BBU a problem.For reductions in the other direction, consider an instance H : B n ∞ → R n of a basic ℓ ∞ − BBU problem.Given this instance we define an instance of a basic ℓ ∞ − BU problem given by F : S n ∞ → R n where F(x) = Sel 2 (−H(−π(x)), H(π(x)), x n+1 ) Constant Gate [G ζ ].Let ζ ∈ [−1, 1] ∩ Q be a rational constant.The agent will have a block of unit height in the sign interval and a block of width ζ /2 and height 2/ζ centered in another interval. Function Gate [G h ].Let −1 ≤ a < b ≤ 1 and −1 ≤ c < d ≤ 1 be rational numbers and consider a continuously differentiable map h : [a, b] → [c, d] satisfying h(a) = d and h(c) = c.Let h denote the extension of h that is constant on [−1, a] and [b, 1].We now construct an agent with input interval I and output interval O computing this map, that is the agent should force a cut in the output interval such that h(v p (I)) = v p (O). vanishes and so π(x) = −π(−x) implying that H(x) = −H(−x), so H is odd on the boundary.As H is computable by a(B ∪ { p √ •})−circuit if p < ∞ (and B−circuit if p = ∞) this definesan ℓ p − BBU problem Γ with the same instances as Π.Furthermore, the set of BU-points of H is exactly{x ∈ B n p | F I (π(x)) = F I (−π(x))},so mapping solutions x of Γ to solutions of Π amounts to computing π(x) which can be done by a circuit over B ∪ { p √ •} if p < ∞ (and over B if p = ∞).However, when p = 1 these reductions make use of p √ -gates for p < ∞ or divison gates for p = ∞.We can remedy this by applying Propositions 8, 9, and 11 which give that we may go back and forth between different domains for BBU and BU by SL-reductions.Specifically, for any ℓ p − BU problem we may SL-reduce to a ℓ 1 − BU problem (that also uses p √ gates if p < ∞).Then we may apply the above {+, * ζ }reduction from ℓ 1 − BU to ℓ 1 − BBU.And from there we may again SL-reduce to an ℓ p − BBU problem.In conclusion we obtain the following result.Proposition 12. Any basic ℓ p − BU problem {+, * ζ }-reduces to a basic ℓ p − BBU problem.
24,028.2
2021-03-07T00:00:00.000
[ "Mathematics", "Computer Science" ]
Two-photon absorption of the spatially confined LiH molecule In the present contribution we study the influence of spatial restriction on the two-photon dipole transitions between the XS and AS states of lithium hydride. The bond-length dependence of the two-photon absorption strength is also analyzed for the first time in the literature. The highly accurate multiconfiguration self-consistent field (MCSCF) method and response theory are used to characterize the electronic structure of the studied molecule. In order to render the effect of orbital compression we apply a two-dimensional harmonic oscillator potential, mimicking the topology of cylindrical confining environments (e.g. carbon nanotubes, quantum wires). Among others, the obtained results provide evidence that at large internuclear distances the TPA response of lithium hydride may be significantly enhanced and this effect is much more pronounced upon embedding of the LiH molecule in an external confining potential. To understand the origin of the observed variation in the two-photon absorption response a two-level approximation is employed. Introduction Studies concerning the spatial confinement phenomenon and its influence on the variety of physical and chemical properties of quantum objects have been attracting increasing research attention. This has been triggered by great advances in nanotechnology as well as the rapid development of chemical synthesis methods, particularly in supramolecular chemistry. These factors open up the possibility of constructing molecular systems with entirely new properties, mostly determined by size effects (e.g. endohedral complexes, inclusion compounds or low-dimensional semiconductor structures). [1][2][3][4][5][6][7] A first glimpse of the confinement-induced changes in the chemical and physical properties of atoms or molecules may be caught on a purely theoretical basis through different types of analytical external potentials (e.g. spherical and cylindrical harmonic oscillator potentials, [8][9][10][11][12][13][14] penetrable and impenetrable boxes 8,12,[15][16][17] or by applying a supermolecular approach. 8,[18][19][20][21][22][23][24][25][26] The analytical based methods allow one to gain an insight into e.g. structural and spectroscopic properties or chemical reactivities of various molecular systems trapped inside confining cavities. For a recent review on the subject, see for example ref. 8 and 27-30. Another area of research of increasing prominence concerns linear and nonlinear electric properties of spatially restricted atoms, ions and molecules. Basically, it is expected that embedding a quantum system in the confining cages will affect its electronic density distribution which, in turn, may be reflected through changes in linear and nonlinear optical (L&NLO) phenomena. Thus far, numerous theoretical results presented in the literature demonstrate that spatial restriction significantly modifies nonresonant electric dipole properties of atoms and molecules. [9][10][11][12][13][14][17][18][19]23,24,31,32 In particular, it was reported that the values of effective linear polarizability (a) as well as second hyperpolarizability (g) decrease together with the increasing strength of orbital compression. 9,10,12,13,[17][18][19]31,33 On the other hand, the behavior of dipole moment (m) and first hyperpolarizability (b) differs depending on the topology of the confining environment and the system under consideration. 
9,12,32,33 It should be underscored that there are only a limited number of theoretical studies concerning the evaluation of the molecular quantities that govern the NLO processes in the resonant regime upon confinement. 9 This work aims to fill the existing gap. In so doing, in the present contribution the focus is put on the exploration of the effect of spatial confinement on the two-photon absorption (TPA) response of a model molecular system. TPA, which is a third-order NLO phenomenon, may be described as the electronic excitation of a quantum object induced by the simultaneous absorption of two photons of the same or different energy and, in general, is characterized by several attractive features. Besides the benefits of application of the TPA phenomenon in the field of spectroscopy (it enables the exploration of spectroscopic states which are one-photon forbidden due to symmetry), there are also a number of technological applications of this NLO process. 34 These include high-resolution fluorescence microscopy, 35,36 fabrication of optoelectronic logical circuits, 37,38 three-dimensional optical data storage 39 or nondestructive imaging of biological tissues, 35,40 just to name a few. As the development of multiphoton based applications relies on the quest for chromophores with large TPA responses, considerable efforts are directed toward the design of appropriate molecular species. 34,41,42 Although initially the attention was mainly focused on push-pull dipolar molecular structures, 42,43 over the years it has been shifted to quadrupoles, 44,45 multichromophoric dendrimeric systems 46,47 or nanodots. 48,49 Another promising route to accomplish large TPA responses involves alteration of bond lengths. Particularly, it has been shown that the molecular (hyper)polarizabilities as well as the TPA probability (d gf ) exhibit nonmonotonic changes as a function of the bond-length alternation parameter. 50,51 Moreover, the results of theoretical and experimental studies clearly indicate that environmental effects, especially solvent polarity, significantly influence the TPA strength of molecular systems. [52][53][54] According to the results of some experimental studies, the spatial confinement effect may be considered as another important factor contributing to the changes of TPA strength. [55][56][57][58][59] For example, it was demonstrated that exposing molecular systems to high pressure leads to the reduction of both one-and twophoton absorption responses. 55 On the contrary, an enhancement of d gf was reported for different organic molecules confined between the interlayer spaces of clay minerals. 58,59 Some important conclusions might be also found in recent work concerning the properties of different molecular species enclosed inside metal-organic framework (MOF) materials, which are emerging as unique structures due to their extraordinarily high porosity. Particularly interesting is the observation that systems containing chromophores incorporated into the MOF pores exhibit very strong TPA intensities. 57,60 Thereby, in addition to the already known potential applications of MOFs (e.g. drug delivery, catalysis or hydrogen storage) they are also considered as an element of new two-photon-pumped microlasers. 60 In some measure, these findings are in line with those emerging from our recent study performed on the HCCCN molecule embedded in a repulsive potential of cylindrical symmetry. 
9 Based on the conducted analysis it was found that the absolute value of the second-order transition moment (S gf ij ) increases together with the increasing confinement strength. To the best of our knowledge the study in question provides still the only ab initio results quantifying the influence of spatial confinement on the resonant NLO properties. The recent experimental studies concerning two-photon absorption properties of molecular species enclosed inside metal-organic framework materials or confined between the interlayer spaces of clay minerals 57-60 encouraged us to undertake the present investigations. In order to gain a fundamental understanding of various aspects of multiphoton absorption in the presence of spatial confinement, in this article we provide a comprehensive theoretical description of the confinementinduced changes in the two-photon dipole transitions between the X 1 S + and A 1 S + states of the LiH molecule using high-level electron correlation treatments and response theory. Owing to the simplicity of its electronic structure, lithium hydride is often considered as an ultimate benchmark that allows for a precise assessment of the accuracy and reliability of various theoretical methods. Thus, the number of papers reporting highly accurate reference data for various properties of this molecule is very substantial (see for example ref. 10, 12, 18, 31 and 61-76). Among others, the potential energy curves and spectroscopic properties of many electronic states of LiH have been already thoroughly investigated. [61][62][63][64][65] It is also worth noticing that several high quality theoretical studies are available concerning the electronic and vibrational contributions to the dipole moment and (hyper)polarizability of lithium hydride. 10,12,18,31,[66][67][68][69][70][71][72][73][74][75][76] However, with the notable exception of the study devoted to simulations of NLO properties of LiH using damped cubic response theory within TDDFT, 77 we are not aware of previous experimental or theoretical studies on the TPA response of the LiH molecule, particularly under usual external conditions. Thus, although the main focus of this work is to get a deeper insight into the influence of orbital compression on the two-photon absorption phenomena, the value of d gf reported here for the unconfined LiH molecule might be also of significance for subsequent analysis. A further interest of this study is to explore the bond-length dependence of the investigated molecular quantities, for both free and spatially restricted lithium hydride. Methodology The influence of the spatial restriction on the one-and twophoton dipole transitions between the X 1 S + and A 1 S + states of the LiH molecule was studied by applying a two-dimensional harmonic oscillator (HO) potential. This one-particle confining potential might be expressed as where j denotes a constant that allows the strength of orbital compression to be controlled. The j values equal to 0.1 and 0.2 were considered in this study. This range roughly corresponds to exchange repulsion energy between linear few-atomic molecules and carbon nanotubes. 9,14 In all computations the principal axis of the HO potential overlaps with the molecular axis of lithium hydride, assumed to be the z-axis. Thus, within the Born-Oppenheimer approximation, adopted in this work, the potential defined by eqn (1) acts only on the electrons of the confined system. 
Considering the adopted model of spatial restriction the excitation of guest molecules randomly oriented in cylindrical cavities in zeolites by a propagating light beam may constitute a good illustration of the studied phenomenon. Likewise, endohedrally functionalized carbon nanotubes, that can rotate freely in gaseous or liquid medium, can serve as another illustrative example (supposing that the guest molecules are not subjected to rotations). At this stage, it is worth pointing out that different forms of the HO potential proved to be very useful in the description of a variety of physical and chemical environments in condensed-matter physics and nanotechnology, 8,27,78,79 including studies on the electrical properties of confined molecular systems. [9][10][11][12][13][14]19,33 In an attempt to assess the effect of spatial restriction on the TPA of the LiH molecule we analyze the values of the secondorder transition moment (S gf ij ), which constitutes the basic molecular quantity that describes the two-photon absorption process: 80 In the above equation |gi and | f i correspond to the initial and final state, respectively, while |ki denotes the intermediate state. Labels i, j stand for the Cartesian coordinates and hi|m i | ji is the transition moment between states |ii and | ji. It is assumed that angular frequencies (o) satisfy the resonance condition, i.e. for the one source of photons 2o = o f . The influence of the external potential on the two-photon absorption probability (d gf ) is also discussed, since this quantity might be related to data extracted from the experimental measurements. 81 According to the procedure described by Monson and McClain the magnitude of orientationally averaged d gf (hereafter denoted as hd gf i) in an isotropic medium might be calculated using the following formula: 82 where the coefficients F, G and H depend on the polarization of the incident light beams; for linearly polarized photons F = G = H = 2. Note that hd gf i is called two-photon absorption strength, by analogy to the oscillator strength ( f ), which characterizes the one-photon absorption (OPA) process. Both S gf ij and hd gf i are calculated for the transition to the lowest-lying singlet electronic excited state of lithium hydride, for which a significant intramolecular charge transfer (CT) occurs. In such case the diagonal component of the two-photon transition moment along the symmetry axis (here S gf zz ) is by far the largest and consequently contributes the most to hd gf i. Therefore, although in this article we determine all tensor elements and averaged properties, particular attention will be put on the confinementinduced changes in the values of S gf zz . Under the assumption that the presence of the CT state dominates the response of molecular systems to the external electric field it is possible to reduce the expressions defining the second-order transition moment (eqn (2)) within the wellknown two-level approximation: 34,42,83,84 The importance of the two-level model (TLM) in the field of molecular nonlinear optics stems from the fact that it allows one to define the response of a system in terms of simple spectroscopic parameters, like excitation energy (o f ), transition moment (hg|m z | f i) and change of polarity between ground and low-lying excited charge transfer states (Dm z = h f |m z | f i À hg|m z |gi). 
By using this relationship it becomes possible to establish how changes in the above mentioned factors, occurring due to the presence of an external confining potential, affect the second-order transition moment value. In order to characterize the electronic structure of lithium hydride in the X 1 S + and A 1 S + states the multiconfiguration self-consistent field (MCSCF) method together with response function formalism was used, as implemented in the Dalton package. 85 Specifically, the MCSCF method was applied for the ground state wave function, whereas the one-and two-photon dipole transition properties were obtained from the multiconfiguration linear and quadratic response functions (MCLR and MCQR). [86][87][88][89] The computations were performed in C 2v symmetry, using the ANO-L basis set. 90 All electrons were correlated and all orbitals included in the active space during the MCSCF calculations. It should be noted that the dependence of the analyzed quantities on the internuclear distance (R) was also investigated. Therefore, the one-and two-photon transition dipole moments and excitation energy values, as well as the dipole moment difference between the excited and the ground electronic state, were computed as a function of R, both under vacuum and in the presence of HO potential. Such analyses were carried out for the internuclear distances between 1.3 and 20 a.u. It should be noted that although the sign of S gf zz is undetermined, we checked phases of wave functions and response vectors at each distance to obtain a smooth curve presented in Fig. 1. The same applies to Fig. 2. Results and discussion We shall start the discussion with the analysis of data obtained for the LiH molecule at the experimental equilibrium distance (R e = 3.015 a.u.), which are presented in Table 1. The values of the diagonal component of the second-order transition moment and TPA probability computed at this particular bond length for isolated lithium hydride are equal to 103.9 a.u. and 3760 a.u., respectively. However, the external confining potential causes a substantial drop of S zz and hd gf i. As one can notice, although the values of the second-order transition moment calculated using the two-level model (S gf,TLM zz ) noticeably underestimate those determined within the response function formalism (S gf zz ), both methods predict the same nature of changes in the analyzed quantity upon confinement. Therefore, the two-level approximation seems to be an adequate approach to explain the behavior of S gf zz . From Table 1 it is evident that the reduction of TPA response of spatially limited lithium hydride is caused by two factors: the decrease of the one-photon transition dipole moment value and the hypsochromic shift of the excitation energy between X 1 S + and A 1 S + states. The latter result closely follows intuitive expectations, as it is well established that the presence of analytical potentials representing the so called ''pure'' spatial confinement (i.e. no attractive interactions between the confined molecule and its environment) would cause an increase of the gap between frontier orbitals relative to that of unconfined atoms or molecules. 16,19,91 The dependence of the diagonal component of the secondorder transition moment of LiH on the internuclear distance, evaluated using the multiconfiguration quadratic response functions and two-level approximation, is depicted in Fig. 1. 
At this stage it should be mentioned that the R-dependence brings forth important features of the static electric properties (m, a, b and g) of molecular systems, as it has been already disclosed in many valuable scientific works. 10,[92][93][94][95][96][97][98][99][100][101][102][103][104][105][106][107][108] Although it is difficult to draw one general conclusion emerging from these studies, there are several important observations worth underscoring. Among others, the sign inversion of the dipole moment with the change in the internuclear separation is characteristic for various molecular systems (e.g. AlCl, AlF, AlH, BCl, BF, CO, CS, HBr, HCl, HF, MgHe, NaLi, SiO, SiS, YBr). [97][98][99][100] Such observation is of relevance as it reflects the process of electron charge transfer inside the molecule. Moreover, on the basis of extensive theoretical studies Maroulis and co-workers have found that the variation of bond length results in substantial changes of (hyper)polarizability, which are quite distinct for different molecular systems. [101][102][103][104] A thorough consideration of the connection of polarizability and hyperpolarizability derivatives to Raman and hyper-Raman spectra was also reported by Quinet and Champagne. 109 From the theoretical point of view an important finding concerns also the fact that changes in the intermolecular distance may have a substantial influence on the electron correlation contribution to the studied electric properties. 102,104 Quite recently, Lo and Klobukowski discussed the electronic structure as well as the response of m and a of lithium hydride to the confining potential and also the dependence of the computed quantities on the internuclear distance. 10 As it was found by the authors, the changes of m and a as a function of bond length are substantial and slightly dependent on the external potential. The curves displayed in Fig. 1(a) clearly demonstrate that the value of the second-order transition moment of lithium hydride varies largely with the internuclear separation as well. For the sake of discussion performed herein, it should be noted that only the absolute values of S gf zz are of significance for the magnitude of the TPA response. As one can notice the S gf zz function exhibits a nonmonotonic behaviour even for the unconfined lithium hydride. Particularly, for R o R e the values of S gf zz computed for the free LiH molecule are smaller than those determined at the experimental equilibrium distance. However, stretching the LiH bond length leads to an increase in the magnitude of the second-order transition moment and the S gf zz (R) function exhibits two extrema equal to 499 and À723 a.u. at R = 5.8 a.u. and R = 8.5 a.u., respectively. Note that in the intermediate internuclear separation range the inversion of the S gf zz sign occurs, while for large R (R 4 16 a.u.) its value converges to zero. Turning the attention to the results obtained for the spatially limited LiH molecule several interesting conclusions can be also drawn. As it turns out, upon embedding in the harmonic potential the second-order transition moment of LiH follows, in general, the same patterns of changes as presented by the unconfined molecule. Yet, the internuclear distances at which S gf zz reaches its maxima (extrema) are shifted toward larger values of R. Likewise, the bond lengths for which inversion of the second-order transition moment sign is observed (so called ''crossing point'') are also noticeably larger. 
Obviously, the shifts in the S gf zz (R) function can be considered as a natural consequence of the fact that the employed confining potential causes an increase of the energy gap between the X 1 S + and A 1 S + states of LiH. Moreover, it follows from Fig. 1 that the influence of spatial restriction on the S gf zz value is much more pronounced far from the equilibrium bond length of lithium hydride. In contrast to what was observed when R = R e , at larger internuclear distances the presence of an external potential results in a significant enhancement of the second-order transition moment with respect to the value obtained for the unconfined LiH molecule. It is notable that there is a three-fold increase of the maximum values of S gf zz due to confinement. The above observations can be easily understood by the analysis of key parameters for the maximum, crossing point and minimum on the S gf zz curve (for all confinement strengths), which are assembled in Table 2. In particular, for the internuclear distances under consideration, the excitation energy decreases by an order of magnitude, while hg|m z | f i significantly increases, with respect to the values obtained at the experimental equilibrium distance (cf. Table 1). Moreover, for both free and spatially confined LiH, the crossing points are characterized by smaller o f and Dm values and larger OPA transition moments when compared to the data obtained for S max zz and S min zz . According to the TLM, changes in the above mentioned spectroscopic parameters, and their mutual correlation, have a decisive impact on the second-order transition moment values (see the discussion below). As it follows from the data depicted in Fig. 1(b) the estimated S gf,TLM zz values reproduce reasonably well those computed using the MCQR approach. Thus, the two-level approximation is sufficient to qualitatively explain the changes in the secondorder transition moment as a function of internuclear separation. The dependence of the spectroscopic parameters contributing to S gf,TLM zz on the bond length is illustrated in Fig. 2. A close inspection of the presented plots allows one to conclude that the change in S gf zz is mostly governed by the variation of Dm z . Of particular importance are changes occurring for the excited state dipole moment. In this case a maximum and a minimum of the m z (R) function appear at internuclear distances close to those for which the peaks of S gf zz functions are also located. On the other hand, in the ground electronic state of LiH the dipole moment reaches its maximum at bond length, where the potential energy curve crosses the Li + H À ionic potential curve, 10 and yields zero values at larger internuclear distances. The enhancement of the second-order transition moment value at R 4 R e is also due to the decrease of the excitation energy value, accompanied by an increase of the one photon transition dipole moment. Nevertheless, these two quantities have considerably less impact on the nature of S gf zz changes in the function of internuclear separation. Noteworthy, this observation applies to both free and spatially limited lithium hydride. In Fig. 3, the variation of the one-and two-photon absorption strength of LiH with the internuclear separation is presented. Unsurprisingly, the R-dependence of hd gf i bears a strong resemblance to that of the diagonal component of the secondorder transition moment of LiH. 
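To make the two-level analysis concrete, a small numeric sketch (parameter values invented for illustration, not taken from Tables 1 and 2) shows how the TLM estimate S_zz^TLM = 4⟨g|μ_z|f⟩Δμ_z/ω_f inflates when the gap ω_f collapses and Δμ_z grows at stretched geometries:

```python
def s_zz_tlm(mu_gf, dmu_z, omega_f):
    """Two-level-model estimate of the diagonal two-photon transition moment
    (a.u.); prefactor follows the standard one-colour convention with
    2*omega = omega_f."""
    return 4.0 * mu_gf * dmu_z / omega_f

# Hypothetical parameter sets (illustrative only, not the paper's data):
cases = {
    "near R_e  (large gap, modest dmu)": dict(mu_gf=1.0, dmu_z=1.0, omega_f=0.13),
    "stretched (small gap, large dmu)":  dict(mu_gf=2.0, dmu_z=3.0, omega_f=0.03),
}
for label, p in cases.items():
    print(f"{label}: S_zz^TLM = {s_zz_tlm(**p):8.1f} a.u.")
```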
Table 2. Values of selected OPA spectroscopic parameters computed for the LiH molecule at bond lengths corresponding to the maximum (S_zz^max), minimum (S_zz^min) and crossing point (S_zz^cp) of the second-order transition moment curve (for all confinement strengths). The term ''crossing point'' refers to the internuclear distance at which the inversion of the sign of S_zz^gf occurs. The symbols |g⟩ and |f⟩ correspond to the X¹Σ⁺ and A¹Σ⁺ states, respectively. Calculations were performed using the MCSCF wave function and the ANO-L basis set. All values are given in a.u.

The total effect is even more pronounced: the maximum value of the TPA strength under the double perturbation, that is, the confining potential of strength 0.2 a.u. combined with the bond stretched to 7.6 a.u., is almost ninety times larger than ⟨δ_gf⟩ computed for unconfined LiH at the experimental equilibrium distance. However, fundamental differences emerge between the changes in the one- and two-photon absorption strengths caused by variation of the internuclear separation in the presence of the confining HO potential (cf. Fig. 3(a)). Although the R-dependence of the oscillator strength (f) is nonmonotonic, for internuclear distances larger than R_e the value of f is always greater than that computed at the experimental bond distance. This holds both under vacuum and upon embedding LiH in the HO potential. In contrast to ⟨δ_gf⟩, spatial confinement diminishes the OPA strength over virtually the whole range of R. These findings agree with the observations made in a theoretical study of the absorption spectra of the p-nitroaniline (pNA) molecule embedded in different confining cages. 22 In particular, it was demonstrated that under the chemical pressure imposed by a helium tube, the absorption maximum of pNA shifts to longer wavelengths while losing some of its intensity.

Conclusions

The present contribution provides a theoretical description of the two-photon absorption response of the LiH molecule embedded in a two-dimensional harmonic oscillator potential, which is expected to capture the exchange repulsion of confining environments of cylindrical symmetry. Comprehensive analyses were conducted for the second-order transition moment and the TPA probability, evaluated using the multiconfiguration self-consistent field method together with response function formalism. An important aspect of this work was to explore the bond-length dependence of the investigated molecular quantities for both free and spatially restricted lithium hydride. The results of the performed calculations indicate a significant reduction of the two-photon absorption response of lithium hydride at its experimental equilibrium bond length upon confinement. On the other hand, large and nonmonotonic changes of the second-order transition moment, and consequently of the TPA strength, were observed as the internuclear separation was varied. In particular, it was found that at distances larger than the equilibrium bond length a substantial enhancement of S_zz^gf and ⟨δ_gf⟩ can occur. Moreover, the importance of the orbital compression effect was found to increase for highly distorted geometries of lithium hydride. According to the obtained results, under the double perturbation, i.e.
when the bond length of the LiH molecule embedded in the external potential is strongly stretched, the TPA strength can increase by almost two orders of magnitude. Analysis of the results in terms of the two-level model leads to the conclusion that the observed changes in the TPA response are mostly governed by the variation in the difference between the ground- and excited-state dipole moments of LiH. Summing up, the obtained results provide evidence that the ''pure'' spatial confinement effect can significantly influence the magnitude of the two-photon absorption response of molecular systems. To the best of our knowledge, some of the topics studied here, including the comprehensive analysis of the bond-length dependence of the investigated molecular quantities, have not been considered in the literature before. Moreover, the highly accurate ab initio values of the second-order transition moment and TPA probability of the unconfined LiH molecule are reported herein for the first time.
Additive adjudication of conflicting claims In a “claims problem” (O’Neill 1982), a group of individuals have claims on a resource but its endowment is not sufficient to honour all of the claims. We examine the following question: If a claims problem can be decomposed into smaller claims problems, can the solutions of these smaller problems be added to obtain the solution of the original problem? A natural condition for this decomposition is that the solution to each of the smaller problems is non-degenerate, assigning positive awards to each claimant. We identify the only consistent and endowment monotonic adjudication rules satisfying this property; they are generalizations of the canonical “constrained equal losses rule” sorting claimants into priority classes and distributing the amount available to each class using a weighted constrained equal losses rule. The constrained equal losses rule is the only symmetric rule in this family of rules. 3 Additive adjudication of conflicting claims ∑ N z i = E and, for each i ∈ N , z i ≤ c i . We refer to z i as the award of claimant i and to c i − z i as her loss. Let Z(c, E) denote the collection of all allocations for (c, E). An adjudication rule is a function f recommending an allocation for each possible claims problem: for each N ∈ N and each (c, E) ∈ C N , f (c, E) ∈ Z (c, E). For each rule f, its dual (Aumann and Maschler 1985) is the rule g defined by setting for each (c, E) ∈ C N , Rules In this paper, we examine the relationship between additivity properties and two canonical rules attributed to medieval philosopher Maimonides (Aumann and Maschler 1985): the "constrained equal losses" and the "constrained equal awards" rules. The constrained equal losses rule, denoted by CEL, equalizes the losses imposed on claimants subject to the constraint that no claimant receives a negative award: for each (c, E) ∈ C N and each i ∈ N, where λ ∈ ℝ + is chosen so as to satisfy ∑ j∈N max{0, c j − λ} = E. The constrained equal losses rule can be extended to allow for asymmetric treatment, by equalizing the weighted losses imposed on claimants (Moulin 2000). 3 The weighted constrained equal losses rule corresponding to a weights profile w ∈ ℝ A ++ , denoted by CEL w , is such that for each (c, E) ∈ C N and each i ∈ N where λ ∈ ℝ + is chosen so as to satisfy ∑ N max{0, c j − w j λ} = E. A weighted constrained equal losses rule can be further extended to include priority classes, whereby claimants in lower priority classes receive awards conditional on full compensation among those in higher classes (Moulin 2000). We refer to such a rule as priority-augmented weighted constrained equal losses rule, or PWCEL rule. Formally, a rule f is a PWCEL rule, if there is a partition of A into n ≤ |A| non-empty priority classes A 1 , … , A n and a weights profile w ∈ ℝ A ++ such that, for each N ∈ N and each (c, E) ∈ C N , f(c, E) can be computed sequentially as follows: In the special case where each of the sets A 1 , … , A n is a singleton, we refer to the resulting rule as a priority rule. The constrained equal awards rule, denoted by CEA, equalizes awards subject to the constraint that no claimant receives more than her claim: for each (c, E) ∈ C N and each i ∈ N, where λ ∈ ℝ + is chosen so as to satisfy ∑ j∈N min{c j , λ} = E. Like the constrained equal losses rule, this rule can also be generalized to allow for weights and priority classes (Moulin 2000). 
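The λ-based definitions above translate directly into code. Below is a minimal numeric sketch (our own illustration; the function names and bisection tolerance are arbitrary choices) of the constrained equal losses rule, its weighted variant, and the constrained equal awards rule:

```python
import numpy as np

def cel(c, E, w=None, tol=1e-10):
    """Weighted constrained equal losses rule: award_i = max(0, c_i - w_i*lam),
    with lam >= 0 chosen so awards sum to the endowment E (bisection on lam).
    w = ones recovers the classical CEL rule."""
    c = np.asarray(c, dtype=float)
    w = np.ones_like(c) if w is None else np.asarray(w, dtype=float)
    assert 0.0 <= E <= c.sum()
    lo, hi = 0.0, (c / w).max()              # at lam = hi all awards are zero
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        if np.maximum(0.0, c - w * lam).sum() > E:
            lo = lam                          # total awards too big: raise lam
        else:
            hi = lam
    lam = 0.5 * (lo + hi)
    return np.maximum(0.0, c - w * lam)

def cea(c, E, tol=1e-10):
    """Constrained equal awards rule: award_i = min(c_i, lam)."""
    c = np.asarray(c, dtype=float)
    assert 0.0 <= E <= c.sum()
    lo, hi = 0.0, c.max() if c.size else 0.0
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        if np.minimum(c, lam).sum() < E:
            lo = lam
        else:
            hi = lam
    lam = 0.5 * (lo + hi)
    return np.minimum(c, lam)

print(cel([10, 20, 30], 36))   # equal losses of 8 each -> [2, 12, 22]
print(cea([10, 20, 30], 36))   # lam = 13            -> [10, 13, 13]
```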
Axioms We start recalling the two classical properties of adjudication rules, consistency and endowment monotonicity. Consistency is a basic property in the theory of distributive justice; it requires that if an allocation is considered desirable for a group of individuals, then it should remain so when restricted to each sub-group (Young 1987). More precisely, suppose that a rule is applied to settle a claims problem and a group of claimants is withdrawn along with their awards. If the situation is re-evaluated from the viewpoint of those who remain, in distributing the remaining endowment, a consistent rule assigns the same awards it did initially. Consistency: . Whereas consistency allows us to deduce that an allocation is desirable for each pair of individuals from its overall desirability, its converse allows us to deduce the desirability of an overall allocation from its desirability for each pair of individuals. Converse consistency: The following two properties are standard: Endowment monotonicity: For each (c, E) ∈ C N and each E � ∈ [0, E], 3 Additive adjudication of conflicting claims Endowment monotonicity implies endowment continuity. We say that a rule satisfies bilateral endowment monotonicity if it satisfies endowment monotonicity for the two-claimant case. 4 We now examine additivity in claims problems. 5 Additivity requires the overall allocation is invariant to whether two claims problems are solved independently or jointly, by aggregating each individual's claims on the aggregate endowment. Additivity: Despite appearing natural at first sight, the condition is demanding. Consider the example in the introduction stated formally: Then, the definition of an allocation and additivity imply that, for each rule f, f i (c * , E * ) = 1 and f j (c * , E * ) = 0. Bergantiños and Méndez-Naya (2001) and Bergantiños and Vidal-Puga (2004) use similar examples to show that, in fact, no rule satisfies additivity. 6 Another difficulty is that it requires ignoring an individual's claim by transferring it into a subproblem with a null-endowment. To rule out pathological cases featuring arbitrary transfers of claims and endowments across the subproblems, we consider a weaker property: additivity holds conditional on all individuals receiving positive awards in each of the smaller problems. Positive-awards-conditional additivity: For brevity, we refer to the above property as PAC additivity. Results The following lemma establishes that each PWCEL rule satisfies all of the properties that we will invoke. A proof can be found in the Appendix. We can now state our result: Theorem 1 A rule satisfies consistency, endowment continuity, and PAC additivity if and only if it is a PWCEL rule. The axioms in Theorem 1 are logically independent (see Table 1 in the Appendix). Each PWCEL rule is not only endowment continuous, but continuous in both the claims and the endowment (a standard property used by Young 1987). Moreover, since each PWCEL rule is endowment monotonic, and hence endowment continuous, our main result is a corollary of Theorem 1: Corollary 1 A rule satisfies consistency, endowment monotonicity, and PAC additivity if and only if it is a PWCEL rule. 
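The role of the positivity condition can be checked numerically: on decompositions where every claimant receives a positive award in each subproblem, CEL awards add up, while a decomposition containing a zero award can break additivity. A sketch reusing the `cel` function from above (the numbers are invented for illustration):

```python
# All awards positive in each subproblem: CEL is additive here.
ok_1, ok_2 = cel([2, 5], 5), cel([3, 4], 5)    # awards (1,4) and (2,3)
print(ok_1 + ok_2, cel([5, 9], 10))             # both give (3, 7)

# First subproblem assigns claimant 1 a zero award: additivity fails.
bad_1, bad_2 = cel([1, 5], 1), cel([4, 4], 4)   # awards (0,1) and (2,2)
print(bad_1 + bad_2, cel([5, 9], 5))            # (2,3) vs (0.5, 4.5)
```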
The basic equity condition in claims problems specifies that claimants with equal claims receive equal awards: Equal treatment of equals: For each (c, E) ∈ C N and each pair i, j ∈ N such that Since the only PWCEL rule satisfying equal treatment of equals is the constrained equal losses rule, the following is a corollary of Theorem 1 as well: Corollary 2 A rule satisfies consistency, endowment continuity, PAC additivity, and equal treatment of equals if and only if it is the constrained equal losses rule. The axioms in Corollaries 1 and 2 are also logically independent (see the Appendix). Duality The following axiom is linked through duality to PAC additivity and is used in the axiomatic derivation of the PWCEL rules. Positive-losses-conditional additivity: For brevity, we refer to the above property as PLC additivity. We say that two properties are dual if whenever a rule satisfies one of them, its dual satisfies the other (Thomson and Yeh 2008). Lemma 2 PAC additivity and PLC additivity are dual. Proof Let f denote a rule satisfying PAC additivity and let g denote its dual rule. Let (c, E), (c � , E � ) ∈ C N be such that g(c, E) < c and g(c � , E � ) < c � . Thus, letting 1 3 Additive adjudication of conflicting claims Conversely, if a rules satisfies PLC additivity, then its dual satisfies PAC additivity. Let g denote a rule satisfying PLC additivity and let f denote its dual rule. Since g satisfies PLC additivity, The next lemma shows that PLC additivity implies that increasing the claims of a claimant does not change the outcome whenever no claimant achieved her claim. Lemma 3 Let f denote a rule satisfying PLC additivity. Then, for each (c, E) ∈ C N and each A dual analysis of the results above concludes that (relying on Lemma 2), if we replace PAC additivity by PLC additivity, then we obtain the dual results of Theorem 1 and Corollaries 1 and 2. Corollary 3 (Dual to Theorem 1). A rule is consistent, endowment continuous, and satisfies PLC additivity if and only if it is a PWCEA rule. Analogously, And, Corollary 5 (Dual to Corollary 2). A rule satisfies consistency, endowment continuity, PLC additivity, and equal treatment of equals if and only if it is the constrained equal awards rule. The axioms in Corollaries 3, 4, and 5 are logically independent. Their independence follows from the independence of the axioms in the original axiomatization. Proofs To prove Theorem 1, we proceed in two steps, corresponding to the following subsections. In the first step, we consider the implications of PAC-additivity and endowment continuity in two-claimant problems (Lemmata 4 and 5 below). Here we establish that, if a rule is endowment continuous and its dual satisfies PAC additivity, then it is either a weighted constrained equal awards or a priority rule (Lemma 6). The dual of a weighted constrained equal awards or a priority rule is a weighted constrained equal losses rule or a priority rule (Lemma 7). The second step of the proof uses consistency to extend the two-claimant result to general claims problems. It relies on the fact that the PWCEL rules satisfy the property of "converse" consistency. Two-claimant problems The lemma below states that, for problems with two claimants, the additivity axioms jointly with endowment continuity imply endowment monotonicity. Lemma 4 If a rule satisfies endowment continuity and either PAC or PLC additivity, then it is bilaterally endowment monotonic. Proof Let g denote a rule satisfying endowment continuity and PLC additivity. 
Let i, j ∈ A and c ∈ ℝ {i,j} + . We first prove that, for each pair E, E � ∈ [0, c i + c j ], Let E and E ′ be as specified in (3) and let c � ∈ ℝ {i,j} Since g is endowment continuous, E i and E j are well defined. Without loss of generality, suppose that E j ≤ E i . We will prove that By way of contradiction, suppose that there is } is a continuous path in ℝ {i,j} + connecting g(c, E j ) and c, and containing g(c, E * ) , as illustrated by the thick curve in Fig. 1 Fig. 1. This contradicts (3). This contradiction establishes (4). To conclude the proof, note that by (3), g i (c, ⋅) and g j (c, ⋅) are non-decreasing on Recall that the claimants i, j and the claims profile c were chosen arbitrarily, and thus g is bilaterally endowment monotonic. Let f denote a rule satisfying endowment continuity and PAC additivity. Then, immediately, the dual of f satisfies endowment continuity and PLC additivity. By the argument above, the dual of f is bilaterally endowment monotonic. This immediately implies that f is bilaterally endowment monotonic. ◻ The next lemma shows that PLC additivity jointly with endowment monotonicity implies a linearity-type property (endowment linearity) whenever no claimant achieved her claim. Lemma 5 Let g denote a rule satisfying bilateral endowment monotonicity and PLC additivity. Then, for each Proof Let g denote a rule satisfying bilateral endowment monotonicity and PLC additivity. By PLC additivity, for each k ∈ ℕ and each (c, By PLC additivity and (5), we have that, It follows that, as desired. Case 2: ∈ [0, 1] ⧵ ℚ . Take an increasing sequence of rational numbers { t } and a decreasing sequence of rational numbers { t } , both converging to . By bilateral endowment monotonicity, for each t, The lemma below shows that if a rule satisfies bilateral endowment monotonicity and PLC additivity, then we obtain either a weighted CEA rule or a priority rule. 3 Additive adjudication of conflicting claims Lemma 6 Let g denote a rule satisfying bilateral endowment monotonicity and PLC additivity. Then, for each pair i, j ∈ A , one and only one of the following statements is true: where λ ∈ ℝ + is chosen so as to satisfy Proof Let g denote a rule satisfying bilateral endowment monotonicity and PLC additivity and let i, j ∈ A . The proof consists of three claims. Let (c, E) ∈ C {i,j} be such that 0 < g(c, E) < c and define = g j (c,E) g i (c,E) . By Lemma 5, for each ∈ (0, 1], } is a line with a constant slope of connecting g(c, 0) to g(c, E), as illustrated in Fig. 2 . Let Since g is bilaterally endowment monotonic, g(c, ⋅) is continuous and thus, E i and E j are well defined. Without loss of generality, suppose that E j ≤ E i . By definition, for each E ′ < E j , g(c, E � ) < c . Thus, by Lemma 5, for each ∈ (0, 1), This implies that the set of points {x ∈ ℝ {i,j} + ∶ x = g(c, e), e ∈ [0, E j ]} is a line connecting g(c, 0) to g(c, E j ) . Since there is ∈ (0, 1) such that E = E j , the slope of this line is , as illustrated in Fig. 2. Thus, for each E � ∈ [0, E j ], By bilateral endowment monotonicity, the same is true for each This follows immediately from the assumption that there is no (c, E) ∈ C {i,j} such that 0 < g(c, E) < c and the fact that, since g is bilaterally endowment monotonic, g(c, ⋅) is continuous. Claim 3 Let (c, E) ∈ C {i,j} . We have: We prove statement (i); the proofs of (ii) and (iii) are analogous. Let (c, E) ∈ C {i,j} and suppose that g i (c, E) = min{λ, c i } and g j (c, E) = min{ λ, c j } where . 
By Claim 1, Therefore, g recommends allocations for both claims profiles c and c ∧ c � following the same ray starting from the origin. Similarly, by Lemma 3, c ∧ c � ≤ c � implies g(c � , E �� ) = (c ∧ c � , E �� ) . By Claim 1, Combining the above claims establishes the Lemma. ◻ The lemma below shows that if a rule satisfies endowment continuity and PAC additivity, then we obtain either a weighted CEL rule or a priority rule. Lemma 7 Let f denote a rule satisfying endowment continuity and PAC additivity. Then, for each pair i, j ∈ A , one and only one of the following statements is true: where λ ∈ ℝ + is chosen so as to satisfy max{0, c i − λ} + max{0, c j − λ} = E. (ii) There is k ∈ {i, j} such that, for each (c, E) ∈ C {i,j} , f k (c, E) = min{c k , E}. A proof can be found in the Appendix. Proof of Theorem 1 By Lemma 1, each PWCEL rule satisfies the axioms in Theorem 1. Conversely, let f denote a rule satisfying the axioms in Theorem 1. We will prove that f is a PWCEL rule. For By Lemma 7, for each pair i, j ∈ A, Throughout the rest of the proof, for each pair i, j ∈ A such that i ∼ j , we will use the notation ij for the corresponding parameter in (6). Completeness and transitivity of ≿ : Completeness follows immediately from Lemma 7. To establish transitivity, let i, j, k ∈ A be such that i ≿ j ≿ k . We need to show that i ≿ k . By way of contradiction, suppose that this is not true, so that k ≻ i . Let (c, E) ∈ C {i,j,k} be such that c i = 1∕ ij and c j = c k = E = 1 . Let x = f (c, E) . By consistency, Thus 78 and, by the definition of ≻, We consider four cases: (a) i ≻ j ≻ k . By the definition of ≻ and (7), i ≻ j implies x j = 0 and j ≻ k implies x k = 0 . Since x i + x j + x k = 1 , x i = 1 , contradicting (10). Thus, i ≻ j ≻ k is not possible. (b) i ∼ j ∼ k . By the (9), x i = 0 . Thus, x j + x k = 1 . By (9), x j > 0 . By (8), x i > 0 , a contradiction. Thus, i ∼ j ∼ k is not possible. (c) i ∼ j ≻ k . By the definition of ≻ and (7), j ≻ k implies x k = 0 . Since By the definition of ≻ and (13), i ≻ j implies x j = 0 . Since x i + x j + x k = 1 , by (10), x k = 1 . By (9), x j > 0 , a contradiction. Thus, i ≻ j ∼ k is not possible. Thus, indeed, i ≿ k , and ≿ is transitive. Construction of the priority classes: Recursively define the subsets of A: Since A is finite and ≿ is complete and transitive, there is n ≤ |A| such that A is partitioned by the sets A 1 , … , A n . Construction of the weights: Let m ∈ {1, … , n} and (c, E) ∈ C A m be such that, for each i ∈ A m , c i = 1 and E = 1 . Let r m = c − f (c, E) . Note that r m ≥ 0 . We next prove that Suppose, instead, that there is i ∈ A m such that r m i = 0 and let j ∈ A m ⧵ {i} . Then, f i (c, E) = 1 and f j (c, E) = 0 . By consistency, f i (c {i,j} , 1 + 0) = 1 and f j (c {i,j} , 1 + 0) = 0 . Thus, (ii) in Lemma 7 holds, implying that i ≻ j . However, by definition, for each pair i, j ∈ A m , i ∼ j . This contradiction establishes (11). By (11), we can define a weights profile w ∈ ℝ A ++ such that, for each m ∈ {1, … , n} and each i ∈ A m , w i = r m i . We make an observation regarding the relationship between the coordinates of w corresponding to each agent. Suppose again that |A m | ≥ 2 and let i, j ∈ A m . Thus, i ∼ j . Thus, by (6), Concluding the proof: Let F denote the PWCEL rule specified by the partition A 1 , … , A n and weights profile w constructed above. We prove that f = F. Let N ∈ N , (c, E) ∈ C N , and x = f (c, E) . By consistency, Suppose that i, j ∈ N are such that i ∼ j . 
Then, by (6), By the definition of F, is chosen so as to satisfy max{0, c i − λ} + max{0, c j − ij λ} = x i + x j . Thus, by (12), Suppose that i, j ∈ N are such that i ≻ j . Then, by (ii) in Lemma 7, By the definition of F, F i (c {i,j} , x i + x j ) = min{c i , x i + x j } as well. Thus, By Lemma 1, F is conversely consistent. Thus, by (14) and (15) and since ≿ is complete, F(c, E) = x = f (c, E) . Since N ∈ N and (c, E) ∈ C N were chosen arbitrarily, F = f . ◻ Note that, as established in the Appendix, the axioms in Theorem 1 are logically independent. Concluding remarks We discuss here some axiomatizations that are related to other results in the literature. For instance, Theorem 3* in Herrero and Villar (2001) states that the constrained equal losses rule is the only one satisfying equal treatment of equals, composition down, and minimal rights first. The result in our Corollary 2 adds consistency and replaces composition down and minimal rights first by endowment continuity and PAC additivity. Similarly, Flores-Szwagrzak (2015) says that a rule satisfies consistency, composition down, and minimal rights first if and only if is a PWCEL rule. Our main result in Theorem 1 replaces once again composition down and minimal rights first by endowment continuity and PAC additivity in order to axiomatize the PWCEL rules. Regarding the constrained equal awards, we can invoke duality to conjecture similar conclusions. For instance, a rule satisfies equal treatment of equals, composition up (dual to composition down), and claims truncation invariance (dual to minimal rights first) if and only if is a constrained equal awards (Dagan 1996). Similarly, Flores-Szwagrzak (2015) states that a rule satisfies consistency, composition up, and claims truncation invariance if and only if is a PWCEA rule. The result dual to Theorem 1 replaces composition up and claims truncation invariance by endowment continuity and PLC additivity in order to axiomatize the PWCEA rules. Finally, if we replace endowment continuity by endowment monotonicity in all our results mentioned above, we obtain the very same conclusions. Interestingly, Flores-Szwagrzak et al. (2020) study a version of additivity where a problem can be decomposed into two smaller problems whenever i) all individuals receive positive awards in each of the smaller problems, and ii) these smaller problems keep the same claims vector. Restricted additivity: For each c ∈ ℝ N + , and each pair E, E � ∈ [0, In contrast, PAC additivity does not impose the restriction of keeping the same claim vectors in the decomposed problems. Note that if we replace PAC additivity by restricted additivity, the main result in Theorem 1 does not hold anymore since solutions like the priority-augmented proportional rules would also satisfy consistency, endowment continuity, and restricted additivity. We finally discuss a conjecture suggested to us by Youngsub Chun, Hervé Moulin, and José Zarzuelo. Consider the following property: for each pair (c, E), Clearly, this property is implied by both PAC and PLC additivity. Is the family of consistent and endowment continuous rules satisfying this property the union of the PWCEA and the PWCEL rules? The answer is no. For a counterexample, see the subsection Counterexample 1 in the Appendix. The following rule satisfies consistency, PAC additivity, and equal treatment of equals but does not satisfy endowment continuity. Let |N| = n and ĉ = (9, 7, 1, … , 1) ∈ ℝ n . 
Let G such that for each N ∈ N and each (c, E) ∈ C N , When |N| = 2 7) and E = 1 . The rule coincides with the CEL otherwise. • If N ≠ {1, 2} , the rule G coincides with the CEL. To see that G is not endowment continuous consider the following claims c = (9, 7, 1) and The rule H defined below satisfies endowment continuity, PAC additivity, and equal treatment of equals but does not satisfy consistency. Let H such that for each N ∈ N and each (c, E) ∈ C N . For instance, let c = (1, 8, 9) and E = 3 . Thus, H ((1, 8, 9), 3) = (0, 0, 3) . By consistency, if the claimant number one leaves with her outcome, i.e., with 0, the rest of the claimants remain unaffected, but If we replace PAC additivity by PLC additivity, results dual to Theorem 1, and Corollaries 1 and 2 mentioned above hold too. Analogously, we show the independence of the axioms of these dual results below (see a summary in Table 2). The PWCEA rule with non trivial priorities satisfies consistency, endowment continuity, and PLC additivity but does not satisfy equal treatment of equals. For instance, The constrained equal losses rule satisfies consistency, endowment continuity, and equal treatment of equals but does not satisfy PLC additivity. It is well-known that the CEL rule satisfies equal treatment of equals. We provide a numerical example showing that the CEL rule fails PLC additivity. The rule H * satisfies endowment continuity, PAC additivity, and equal treatment of equals but does not satisfy consistency. For instance, let c = (1,8,9) and E = 6 . Proof of Lemma 1 The PWCEL rules belongs to the wider family of consistent, continuous, and endowment monotonic rules characterized by Moulin (2000). An endowment monotonic and consistent rule is conversely consistent (Chun 1999); thus, each PWCEL rule is conversely consistent. We now prove that these rules satisfy PAC additivity. Let f denote a PWCEL rule associated with the partition of A into n ≤ |A| priority classes A 1 , … , A n and the weights profile w ∈ ℝ A Let B 1 , … , B m denote non-empty and distinct elements of {N ∩ A 1 , … , N ∩ A n } such that B 1 corresponds to the N ∩ A t with the smallest index t, B 2 corresponds to the N ∩ A t with the second smallest index t, and so forth; moreover m is chosen so that the union of B 1 , … , B m is a partition of N. Thus, B 1 consists of the claimants in N with the highest priority, B 2 consists of the claimants in N with second highest priority, and so forth. Let B ≡ ⋃ m−1 l=1 B l . By the definition of a PWCEL rule, for each i ∈ B , H((8, 9), 6 − 1) = ( 5 2 , 5 2 ) ≠ (1, 4) Proof of Lemma 7 Let f denote a rule satisfying PAC additivity and endowment continuity, let g denote its dual, and let i, j ∈ A . By Lemma 2, g satisfies PLC additivity. By Lemma 4, g is bilaterally endowment monotonic. Thus, by Lemma 6, one and only one of the following statements is true: (a) There is > 0 such that, for each (c, E) ∈ C {i,j} , g i (c, E) = min{c i , λ} and g j (c, E) = min{c j , λ} where λ ∈ ℝ + is chosen so as to satisfy min{c i , λ} + min{c j , λ} = E. (b) There is k ∈ {i, j} such that, for each (c, E) ∈ C {i,j} , g k (c, E) = min{c k , E}. Suppose (a) is true. Let (c, E) ∈ C {i,j} . Then, by (a) and since g is the dual of f, where λ ∈ ℝ + is chosen so as to satisfy min{c i , λ} + min{c j , λ} = c i + c j − E . Rearranging, Thus, if (a) is true, (i) in Lemma 7 is true. Similarly, if the alternative and mutually exclusive statement (b) is true, (ii) in Lemma 7 is true. 
Explicitly, f_i(c, E) = c_i − min{c_i, λ} = max{0, c_i − λ} and f_j(c, E) = c_j − min{c_j, λ} = max{0, c_j − λ}, with max{0, c_i − λ} + max{0, c_j − λ} = E. ◻ Counterexample 1 There are rules that are consistent, endowment continuous, and satisfy property (16) yet are neither a PWCEL nor a PWCEA rule. For example, let i ∈ A and define the rule F as follows: Clearly, F satisfies consistency and endowment continuity. It remains to show that it satisfies property (16). Since the constrained equal awards rule satisfies the property, and F coincides with it when claimant i is not present in the claims problem, there is nothing to show unless i is present. Let N ∈ N be such that i ∈ N, and let (c, E), (c′, E′) ∈ C^N be such that c > F(c, E) > 0 and c′ > F(c′, E′) > 0. These inequalities imply that E > 0.5 c_i and E′ > 0.5 c′_i. Thus, since c > F(c, E) and c′ > F(c′, E′), and since the constrained equal awards rule satisfies PLC additivity, property (16) holds for F.
Adaptive Speaker Recognition Based on Hidden Markov Model Parameter Optimization The Hidden Markov Model (HMM) is a widely used method for speaker recognition. During its training, the composite order of the measurement probability matrix and the number of re-evaluations of the initial model affect the speed and accuracy of a recognition system. However, theoretical analysis and related quantitative methods are rarely used for adaptively acquiring them. In this paper, a quantitative method for adaptively selecting the optimal composite order and the optimal number of re-evaluations is proposed based on theoretical analysis and experimental results. First, the standard deviation (SD) is introduced to calculate the recognition rate considering its relationship with Mel frequency cepstrum coefficients (MFCCs) dimension, then the composite order is optimized according to its relationship curve with the SD. Second, the composited measurement probability with different number of re-evaluations is calculated and the number of re-evaluations is optimized when a convergence condition is satisfied. Experiments show that the recognition rate with the optimal composite order obtained in this paper is 97.02%, and the recognition rate with the optimal number of re-evaluations is 98.9%. I. INTRODUCTION Speaker recognition refers to identifying a speaker's identity using characteristic parameters extracted from the speaker's speech signal [1]. Compared to other biometric authentication methods, speaker recognition based on speech features has advantages such as convenience and economy [2]- [7]. The Hidden Markov Model (HMM) is a stochastic model based on transition probability and output probability [8]. It considers a speech signal as a random process consisting of an observable sequence of symbols. HMM does not require time regulation, which reduces the judgment time and storage. However, certain important initial parameters, including the composite order of the observed probability density matrix and the number of re-evaluations of the initial model, still require to be manually set by a user when training an HMM for speaker recognition. This not only reduces the adaptive ability of the speaker recognition system, but also affects the recognition accuracy. In order to address the initial parameter problem, state-merging and state-splitting were The associate editor coordinating the review of this manuscript and approving it for publication was Behnam Mohammadi-Ivatloo . implemented in some HMM algorithms. The former iteratively merges states until convergence of a model, therefore, it requires large amount of computation. The latter begins with a general HMM and successively splits states until convergence. However, it is difficult to define the stopping mechanism of splitting. Although some factors, such as maximum likelihood or minimum description length, have been introduced to stop splitting, it is still difficult to acquire a balance between accuracy and generality of an HMM model in real applications. Therefore, this paper proposes a quantization method for adaptive acquisition of the Gaussian composite order and the number of re-evaluations through theoretical analysis and experimental verification to improve the accuracy and the training speed of an HMM-based speaker recognition system. This paper is organized as follows: The basic principles of the HMM are introduced in the next section. In the third section, HMM-based speaker recognition system is detailed. 
The fourth section presents our adaptive acquisition of the Gaussian composite order and the number of re-evaluations through theoretical analysis and experimental verification, and we draw conclusions in the fifth section. II. DEFINITION OF HMM The problem solved by the HMM has two characteristics: (1). State-based characteristics, which includes the hidden state and the observation state. (2). Two types of data, which consists of the observationstate sequence and the hidden-state sequence. First, suppose is the collection of all possible hiddenstates and V is the collection of all possible observed-states as follows: where N and M are the number of possible hidden-states and observed-states, respectively. For a sequence of length T , Q and O are the state sequence and the observation sequence, respectively, and they can be obtained by the following equation: where q t and o t V. There are two assumptions in HMM: (1). The homogeneous Markov chain hypothesis. The Markov chain is described by and A, which determine the shape of Markov chain. The hidden state at any time only depends on the previous hidden state. If the hidden state at time t is q t = θ i and the hidden state at time t +1 is q t+1 = θ j , then the HMM state transition probability a ij from time t to time t + 1 can be expressed as follows: Thus, a ij composites the state transition matrix A: The hidden probability matrix at t = 1 is given by: where π(i) = P(q 1 = θ i ). (2). Independence of observing states. The observation state at any time only depends on the hidden state of the current moment. If the hidden state at time t is q t = θ j and the corresponding observation state is o t = v k , then the measurement probability b j (k) of observation state v k , generated from hidden state q j , at that time satisfies the following equation: where 1≤ j ≤ N , 1 ≤ k ≤ M , M denotes the number of observation states. A Gaussian function is mostly used to describe b j (k) according to the distance between o t and θ j : Then, the measurement probability of observation states generated from hidden state q j can be calculated with a linear summation of multiple Gaussian functions as shown by the following equation: where G(·) denotes the Gaussian function; c jm , µ jm and jm denote the weight, the mean and the variance of the m th Gaussian function, respectively. The measurement probability matrix B, composed by b j , is then given by: Finally, the HMM can be determined by ,A, and B as follows: From (10), we can see that the number of possible hiddenstates and the observed-states determines the size of HMM. A. PREPROCESSING The preprocessing of a speaker recognition system mainly includes: sample quantization, pre-emphasis, frame windowing, and endpoint detection. Sampling converts an analog signal into a discrete analog signal, and quantization divides the continuous amplitude into several levels. As researched, the speech signal is attenuated at a rate of 6 dB/octave when the frequency is greater than 800 Hz. Therefore, sampling and quantization require a pre-emphasis process to raise the high frequency components and flatten the spectrum of the signal. The general preemphasis filter H is expressed as follows: where f is the frequency; α is the coefficient of the preemphasis process and it is normally between 0.90 and 0.97. For the short-time stationarity analysis, a speech signal must be framed and windowed. The commonly used windows are Hamming window and Hanning window. 
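A minimal sketch of this preprocessing chain (the frame length, hop size and α below are illustrative choices, not the paper's settings):

```python
import numpy as np

def preprocess(signal, alpha=0.95, frame_len=400, hop=160):
    """Pre-emphasis followed by framing and Hamming windowing.
    alpha in [0.90, 0.97] boosts high frequencies: y[n] = x[n] - alpha*x[n-1].
    frame_len/hop are in samples (e.g. 25 ms / 10 ms at 16 kHz)."""
    emphasized = np.append(signal[0], signal[1:] - alpha * signal[:-1])
    n_frames = 1 + max(0, (len(emphasized) - frame_len) // hop)
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = emphasized[idx] * np.hamming(frame_len)
    return frames  # shape: (n_frames, frame_len)

# toy usage with white noise standing in for one second of speech at 16 kHz
frames = preprocess(np.random.randn(16000))
print(frames.shape)
```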
In addition, to remove noise and mute portions in a speech signal, it is necessary to remove some valid speech parts using endpoint detection. B. SPEECH FEATURE EXTRACTION One of common speech features is the Mel frequency cepstral coefficients (MFCCs), which combines the auditory perception characteristics of human ears on the mechanism of speech generation and uses the Mel filter bank to mimic functions of the human cochlea. The frequency scale is close to that of the human auditory characteristics [9]. The MFCCs can be calculated as follows: (1). Convert the actual frequency to the Mel nonlinear frequency as follows: (2). Triangular filters of L channels are arranged on the Mel frequency axis, and the number of L is determined by the cutoff frequency of the signal. The center frequency e(l) of each triangular filter is allocated to equal intervals on the Mel frequency axis. s(l), e(l), and h(l) are the lower frequency, the center frequency and the upper frequency of the l th triangular filter, respectively. (3). Determine the output of each triangle filter based on the amplitude spectrum |X n (r)| of the speech signal as follows: (4). The logarithm operation is performed on all filter output, and discrete cosine transform (DCT) is performed to obtain the MFCCs: where MFCC(i) is the MFCC of the i th channel or dimension and normally it is considered to reflect the static properties of a signal. To present the dynamic properties of a signal, mostly MFCC is introduced, which is obtained through calculating the first order difference of the MFCCs. C. MODEL TRAINING AND IMPORTANT PARAMETERS IN SPEAKER RECOGNITION Training parameters of an HMM is to estimate the optimal parameters of λ assuring P(O|λ) is maximal based on the observation state sequence O. In fact, this is the most complicated problem solved by the HMM because it is difficult to obtain the optimal λ due to the limit size of a given data set in practice. Therefore, the Baum-Welch algorithm is adopted to locally maximize P(O|λ) and obtain the estimated model λ = ( , A, B) with the concept of iterations. First, the parameters of the initial model λ 0 must be defined before the re-evaluation using the Baum-Welch algorithm. As known, B, compared to and A in λ 0 , is closely related to the training quality of an HMM. To calculate component b j in matrix B, it is generally to cluster all the MFCCs of a speech signal into M clusters and obtain the mean, the variance and the weight of a Gaussian function in each cluster. Then, the Gaussian functions of each hidden state are linearly summed together to obtain component b j in matrix B. Here, M has a significant influence on the recognition accuracy of a system because it is closely related to the distance between B and the true distribution of MFCC features. Some researchers tried to capture an appropriate M with a serial of practical experiments during setting parameters for the initial model, however, the result highly depends on the data sets and there is rare theorical and quantified foundation to calculate it. Therefore, it is necessary to research an adaptive acquisition method to obtain an appropriate value for M . Second, the Baum-Welch algorithm is used to re-evaluate the initial model, and the revaluation formula is expressed as follows: The re-evaluation terminates until P(O|λ e ) with estimated model λ e converges. 
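A sketch of the composited measurement probability of eqn (8), where the composite order M is the number of Gaussian components per hidden state (toy parameters, not a trained model):

```python
import numpy as np
from scipy.stats import multivariate_normal

def measurement_prob(o, weights, means, covs):
    """b_j(o): probability of observation o under hidden state j, modelled as
    a linear combination of M Gaussian components; M = len(weights) is the
    composite order discussed in Section IV."""
    return sum(w * multivariate_normal.pdf(o, mean=mu, cov=cov)
               for w, mu, cov in zip(weights, means, covs))

# toy state with M = 3 components over 2-D features (illustrative numbers)
M, D = 3, 2
rng = np.random.default_rng(0)
weights = np.array([0.5, 0.3, 0.2])
means = rng.normal(size=(M, D))
covs = [np.eye(D)] * M
print(measurement_prob(np.zeros(D), weights, means, covs))
```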
Therefore, more algorithms must be enrolled, such as expectation maximization (EM), to calculate P(O|λ e ) of evaluation step i and judge whether P(O|λ i ) converges. In theory, only when the number of re-evaluations is infinite, P(O|λ) can reach the optimal convergence value. This is not feasible in practice; therefore, the number of reevaluations is generally set according to experimental results when different data sets are used. However, if the number of re-evaluations is too small, the model will deviate too much far from the ideal value. Too many re-evaluations will increase the complexity and the time burden of the algorithm. Therefore, setting the number of re-evaluations of an initial HMM is an important research topic. IV. SPEAKER RECOGNITION BASED ON IMPROVED HMM A. ADAPTIVE ACQUISITION FOR COMPOSITE ORDER In this section, a mathematical relationship between the composite order and recognition accuracy based on HMM is established based on experiments and theoretical analysis. Then, an adaptive acquisition method for the composite order is proposed. First, we select a total of 168 speakers based on the TIMIT dataset and test the relationship between the speaker recognition accuracy and the composite order. The pre-processing and MFCCs extraction feature methods used in the training and identification phases are the same as Reference [9]. Our experiment is conducted on a 3.40 GHz machine with 8GB random access memory (RAM) using Matlab implementation. The specification of our experiment is shown in Table 1. In our experiment, the number of hidden states is set to one and the composite order is M . The speaker recognition accuracy of the system based on HMM is calculated by increasing M , and the result is shown in Fig.1, where we can see that when the composite order M is gradually increased from 2 to 32, the recognition rate of the system rapidly increases from 80.36% to 100%; however, as M continues to increase to 128, the recognition rate gradually decreases to 97%. The experimental result is consistent with the theoretical analysis, because when the composite order is small, it means that the measurement probability of each hidden state is calculated by the linear summation of a small number of Gaussian functions. Although the total number of variables in these Gaussian functions is small, the gap between the summated measurement probability and the true distribution of features is also large, resulting in low recognition accuracy. As the number of Gaussian functions is increased, the gap between them is gradually reduced, and the recognition rate of the recognition system is also improved. On the other hand, when the number of Gaussian functions is too large, the number of variables required for parameter identification of Gaussian functions is also significantly increased. As the size of the training data-set is not increased, the limited training data-set limits the accuracy of parameter identification of Gaussian functions, resulting in a gradual increase in the gap between the measurement probability and the true feature distribution. Thus, the recognition rate then begins to decline. Therefore, in practical applications, the magnitude of M affects the recognition rate of the system, the training complexity, and the computation time. The recognition rate is also limited by the size of the data-set. 
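Anticipating the SD cluster-validity index introduced below, the selection of M can be sketched as a loop over candidate composite orders: cluster one MFCC dimension for each M and keep the M with the smallest SD. This is a rough sketch following a standard Halkidi-style formulation (the weighting factor a is set to 1 here rather than Dis(M_max), and toy Gaussian data stands in for MFCC values):

```python
import numpy as np
from sklearn.cluster import KMeans

def sd_index(X, labels, centers, a=1.0):
    """SD validity index: a*Scat + Dis, smaller is better.
    Scat measures average within-cluster scattering (compactness),
    Dis measures total separation among cluster centers."""
    var_X = np.var(X, axis=0)
    scat = np.mean([np.linalg.norm(np.var(X[labels == k], axis=0))
                    for k in range(len(centers))]) / np.linalg.norm(var_X)
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    dmax, dmin = d.max(), d[d > 0].min()
    dis = (dmax / dmin) * np.sum(1.0 / d.sum(axis=1))
    return a * scat + dis

# pick the composite order minimizing SD over one MFCC dimension (toy data)
X = np.random.default_rng(1).normal(size=(500, 1))
scores = {}
for M in (2, 4, 8, 16, 32):
    km = KMeans(n_clusters=M, n_init=10).fit(X)
    scores[M] = sd_index(X, km.labels_, km.cluster_centers_)
print(min(scores, key=scores.get), scores)
```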
Theoretically, when training an HMM, the composite order represents the number of clusters formed from the MFCCs of the speech data, and the Gaussian functions describe the feature distribution within each cluster. Therefore, M is related to the clustering quality of the MFCCs. In clustering evaluation, the SD index is often used as a quality evaluation factor [10], [11]; therefore, in this paper, SD is introduced to evaluate the appropriate M. The SD index is based on the concepts of average scattering and total separation of clusters and can be obtained as

SD(M) = a · Scat(M) + Dis(M),

where Scat denotes the average scattering of the clusters, evaluating their compactness, and Dis denotes the total separation among the cluster centers. They are defined as

Scat(M) = (1/M) Σ_{i=1..M} ||σ(d_i)|| / ||σ(X)||,
Dis(M) = (D_max / D_min) Σ_{k=1..M} ( Σ_{z=1..M} ||d_k − d_z|| )^{−1},

where D_max = max(||d_i − d_j||), ∀i, j ∈ {1, 2, ..., M}, is the maximum distance between cluster centers; D_min = min(||d_i − d_j||), ∀i, j ∈ {1, 2, ..., M}, is the minimum distance between cluster centers; and a is a weighting factor equal to Dis(M_max), where M_max is the maximum number of input clusters. The variance of the p-th dimension of a data set X of size n is defined as

σ_p(X) = (1/n) Σ_{c=1..n} (x_c^p − x̄^p)²,

where x̄^p is the mean of the p-th dimension over all x_c ∈ X. The variance of cluster i, denoted σ(d_i), is computed analogously over the members of that cluster. We calculate the SD value for different MFCC dimensions and different values of M. The result is shown in Fig. 2, where we can observe that, for each dimension of the MFCCs, as M increases the SD value has two peaks, one at M = 10 and the other at M = 32, with a distinct trough near M = 20. When M is greater than 40, the SD value decreases. With M fixed, the SD value corresponding to low-dimensional MFCCs is comparatively smaller, and the opposite holds for high dimensions; that is, for a fixed M, the SD value increases with the MFCC dimension. The boundary between the low and high dimensions lies near the 9th dimension. SD is theoretically minimal for infinite M; however, for a fixed-size dataset, selecting an M value between the two peaks of SD shown in Fig. 2 is a practical and efficient solution. To select the most distinctive dimension among the 24 MFCC dimensions, we divide the speech signal of a speaker into 24 frames, calculate the corresponding MFCCs, and take cross-sections over the speech frames along the dimension axis. The result is shown in Fig. 3, where Fig. 3(a) shows the MFCCs of the different frames and Fig. 3(b) shows the cross-section along the dimension axis. It can be seen from Fig. 3 that the MFCC values of all speech frames have a significant step edge near the 9th dimension; specifically, the MFCC value clearly increases at this dimension. Although the MFCC values of different speech frames differ in magnitude, their variation curves along the dimension axis share the same shape. Therefore, the step-edge position of the MFCC value can easily be located using the cross-sectional view, and this dimension of the MFCCs is selected for calculating the SD value; in this paper, we select the 9th dimension. To observe the relationship between SD and M with one-dimensional MFCCs, we use the 4th, 9th, 18th and 23rd dimension MFCCs of nine speakers to calculate the SD value as M varies. The result is shown in Fig. 4, where Figs. 4(a)-(d) correspond to the 4th, 9th, 18th and 23rd dimensions, respectively. From Fig.
4, we can see that when using the 4 th and 9 th dimension MFCCs, the minimum SD value mostly appears at M = 16 or M = 32. In comparison, the SD value calculated using the 9 th dimension MFCCs changes more gradually with M . The M -SD curves calculated using the 18 th and 23 rd MFCCs are too complicated to find the position where the minimum SD value appears. Therefore, if the MFCCs of the 9 th dimension is used as the evaluation factor of the clustering quality, the M value with the smallest SD value could be selected as the optimal composite order of the system, and this result is consistent with the case where the recognition rate is the highest in Fig. 1. Therefore, we propose an adaptive acquisition method for the composite order based on HMM in a speaker recognition system. First, the MFCCs is selected with a step edge, the SD value is then calculated, and the value of Gaussian composite number, M , is selected, when the SD value is the smallest, as the optimal composite order of the speaker system. Then, the recognition rate of the speaker system is calculated. For comparison, we also calculate the M value that is adaptively selected using the 4 th , 18 th , and 23 rd MFCCs, and apply them for modeling and identification. The experimental result is shown in Table 2, where we can see that when the M value corresponding to the minimum SD value is chosen as the system composite order with the 9 th dimension MFCCs, the speaker system has the highest recognition rate. The number of correctly identified speakers is 163, and the recognition rate is 97.02%. The recognition rate using the 4 th and 18 th dimension MFCCs is slightly lower at approximately 95%, and the difference between them is small. The 23 rd dimension MFCCs yield the lowest recognition rate, however, the recognition rate is still above 90%. Therefore, we select the 9 th dimension MFCCs as the representative feature for calculating the SD value and the adaptive M value. B. RE-EVALUATION ADAPTIVE OPTIMIZATION In this section, the optimal number of re-evaluations of the initial model in HMM training is discussed. First, suppose the number of observation states in the HMM model is M =3 and the number of hidden states is N . The initial model uses the segmentation k-means algorithm [12]. In this study, we calculated the system identification for different number of re-evaluations of the initial model with respect to the number of hidden states N =1, 2, 3, and 4, respectively. The results are shown in Fig. 5, where the horizontal axis represents the number of re-evaluations, and the vertical axis represents the recognition rate of the system. From Fig. 5, the following conclusions can be made: (1). When the number of hidden states is one, as the number of revaluation initial models increases, the recognition rate of the system decreases. (2). When N = 2 and N = 3, the two curves vary in a similar pattern: As the number of re-evaluations increases, the recognition rate of the system gradually increases from approximately 70%, and finally stabilizes between 95% and 97%. (3). When the number of hidden states is N = 4, the recognition rate of the system is rapidly increased from 76% to 97% when the number of re-evaluations increases from one to five. After that, the recognition rate stabilized at approximately 98%. In a word, compared to the number of re-evaluations, the number of hidden states does not influence the recognition rate very much, especially when N is greater than 2. 
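This stabilization suggests the stopping rule formalized in the next passage: re-estimate until the measurement probabilities stop moving. A schematic training loop is sketched below, where `reestimate` is a placeholder for one Baum-Welch pass (not a real library call) and ε and the parameter layout are illustrative:

```python
import numpy as np

def train_until_stable(model, data, reestimate, eps=1e-3, max_iter=50):
    """Run Baum-Welch re-estimations until the largest change in the
    measurement-probability parameters B between consecutive re-evaluations
    drops below eps. `model["B"]` holds the (mixture) parameters defining
    the measurement probabilities."""
    for r in range(1, max_iter + 1):
        new_model = reestimate(model, data)
        delta = np.max(np.abs(new_model["B"] - model["B"]))
        model = new_model
        if delta < eps:
            return model, r      # r: the adaptive number of re-evaluations
    return model, max_iter

# toy usage: a fake re-estimation that moves B halfway toward a fixed point
target = np.array([0.2, 0.5, 0.3])
fake = lambda m, d: {"B": m["B"] + 0.5 * (target - m["B"])}
model, n_reest = train_until_stable({"B": np.array([1.0, 0.0, 0.0])}, None, fake)
print(n_reest, model["B"])
```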
B. RE-EVALUATION ADAPTIVE OPTIMIZATION

In this section, the optimal number of re-evaluations of the initial model in HMM training is discussed. First, suppose the number of observation states in the HMM is M = 3 and the number of hidden states is N. The initial model uses the segmental k-means algorithm [12]. In this study, we calculated the system recognition rate for different numbers of re-evaluations of the initial model with N = 1, 2, 3, and 4 hidden states. The results are shown in Fig. 5, where the horizontal axis represents the number of re-evaluations and the vertical axis represents the recognition rate of the system. From Fig. 5, the following conclusions can be made: (1) When the number of hidden states is one, the recognition rate of the system decreases as the number of re-evaluations of the initial model increases. (2) When N = 2 and N = 3, the two curves vary in a similar pattern: as the number of re-evaluations increases, the recognition rate of the system gradually increases from approximately 70% and finally stabilizes between 95% and 97%. (3) When the number of hidden states is N = 4, the recognition rate of the system increases rapidly from 76% to 97% as the number of re-evaluations increases from one to five; after that, the recognition rate stabilizes at approximately 98%. In short, compared with the number of re-evaluations, the number of hidden states does not strongly influence the recognition rate, especially when N is greater than 2. The recognition rate for N between 2 and 4 becomes stable as the number of re-evaluations increases.

Theoretically, the recognition rate could be improved further by increasing the number of re-evaluations; however, this results in a large computational load. Therefore, an adaptive method for obtaining an appropriate number of re-evaluations is proposed as follows. In theory, stabilization of the recognition rate indicates that the training model has converged. To quantify the relationship between the system recognition rate and the number of re-evaluations, we set N = 4 and calculate the measurement probability b_j for different numbers of re-evaluations. Consider b_1 as an example. The calculation results are shown in Fig. 6, where b_1, obtained by the linear summation of M Gaussian functions, is nearly stable after ten re-evaluations. We can conclude that the measurement probability b_j follows the same law by which the system recognition rate varies with the number of re-evaluations. In this paper, we adaptively optimize the number of re-evaluations using the variation of the measurement-probability curve as a quantization criterion. Theoretically, the optimal number of re-evaluations is reached when the curve of the measurement probability becomes smooth as the number of re-evaluations increases, that is,

NRE_op = min { r : b_j^(r+1) = b_j^(r) },

where NRE_op is the optimal number of re-evaluations and b_j^(r) denotes the measurement probability after r re-evaluations. In practice, it is not necessary to increase r toward an infinite value, as shown by the result in Fig. 7. In this paper, we choose a threshold ε and evaluate the optimal r with

Δb = |b_j^(r+1) − b_j^(r)| < ε,

where Δb is the distance between the measurement probabilities of two consecutive re-evaluations. When Δb is less than ε, the training model is approaching convergence, and the number of re-evaluations is approximately equal to the optimal value.

To verify the proposed adaptive acquisition method for the number of re-evaluations, we calculated the number of re-evaluations with the speech data in the database. First, let the number of observation states be three and the number of hidden states be four. Then, for each speaker, we calculate the number of re-evaluations at which the difference between consecutive measurement probabilities reaches the threshold, and count the number of speakers corresponding to each resulting value. The experimental result is shown in Table 3, where 135 (37 + 42 + 32 + 24 = 135) of 168 speakers adaptively set the number of re-evaluations to between three and six. After adaptively setting the number of re-evaluations, the number of speakers correctly identified by the system reaches 166, and the recognition rate reaches 98.9%.
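The stopping rule can be expressed in a few lines of Python. In this sketch, the distance between consecutive measurement-probability curves is taken as the maximum absolute difference; the threshold eps, the cap r_max, and the function name are illustrative assumptions rather than values from the paper.

```python
import numpy as np


def optimal_reevaluations(b_curves, eps=1e-3, r_max=30):
    """Smallest r at which the b_j curve stops changing between re-evaluations.

    b_curves[r] is a 1-D array sampling the measurement probability b_j
    after r re-estimations of the initial model.
    """
    for r in range(1, min(len(b_curves), r_max + 1)):
        delta_b = np.max(np.abs(b_curves[r] - b_curves[r - 1]))
        if delta_b < eps:                # Δb < ε: model is near convergence
            return r
    return r_max                         # fall back if the curve never settles
```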
V. CONCLUSION

In this paper, a quantitative method for adaptively selecting the optimal composite order and the number of re-evaluations is proposed based on a detailed theoretical and experimental analysis of the HMM. For training an HMM-based speaker recognition system, the number of observation states is closely related to the recognition rate, yet its choice traditionally depends on the user's practical experience. Herein, the clustering evaluation factor SD is introduced, and the relationships among the SD value, the MFCC dimension, and the system recognition rate are compared. An acquisition method for the optimal composite order based on a single-dimension MFCC feature is then proposed. Given that the number of re-evaluations of the initial model directly affects the training speed and recognition accuracy of a speaker recognition system, this paper compares the impact of the number of re-evaluations on the system recognition rate while varying the number of hidden states. From the theoretical analysis, the mathematical relationship between the measurement probability and the number of re-evaluations, as well as the number of hidden states, is established, and an adaptive acquisition method for the number of re-evaluations of the initial model is proposed. Finally, a series of text-independent speaker recognition experiments is performed. The results show that the recognition rate with the optimal composite order obtained in this paper is 97.02%, and the recognition rate with the optimal number of re-evaluations is 98.9%. To validate the proposed method, the text-independent speaker recognition experiments in this paper were conducted with English speech. However, the characteristics of speech in different languages may differ, even for the same speaker. Therefore, speech in other languages should be studied in the future to improve the robustness of our method. Another possible research direction is the influence of noise in practical environments, especially when different types of noise exist in the training and recognition data sets.
Multidirectional Planar Motion Transmission on a Single-Motor Actuated Robot via Microscopic Galumphing

Abstract Insect-scale mobile robots can execute diverse arrays of tasks in confined spaces. Although most self-contained crawling robots integrate multiple actuators to ensure high flexibility, the intricate actuators restrict their miniaturization. Conversely, robots with a single actuator lack the requisite agility and precision for planar movements. Herein, a novel eccentric rotation-dependent multidirectional transmission is presented using a tilted eccentric motor and a simplistic two-legged structural configuration for planar locomotion. The speed of the eccentric motor is modulated to enable alternating microscopic jumps that propel the system, creating a mode of motion analogous to the galumphing of seals. Upon modeling the motion dynamics and conducting experiments, the effectiveness of direct motion transmission is substantiated through microscopic galumphing encompassing left/right crawling and straight-forward crawling. Finally, a 1.2 g untethered robot is developed, which demonstrates enhanced straight crawling and spot turning, traverses narrow tunnels, and achieves precise movements. Therefore, the proposed motion-transmission technique provides a comprehensive set of innovative solutions for underactuated agile robots.

Introduction Microrobotics is an emergent field focused on the development of small-scale systems, with the objective of extending the functional scope of robotic applications. Specifically, micro crawling robots represent a prominent category within microrobotics and offer significant advantages for exploration in spatially constrained and narrow environments. However, miniaturization of the robotic structure and preservation of its flexibility are formidable challenges. For example, RoACH [1] is distinguished by its smart composite microstructures (SCM) and dual shape-memory-alloy actuators. Although the robot is lightweight (2.3 g), the energy requirement for driving shape memory alloys is relatively high. Several piezoelectric actuators have been employed in HAMRs to improve the energy efficiency and motion agility, including the lightest 1.7 g hexapod robot HAMR 3. [2] Nonetheless, the deployment of discrete actuators restricts further miniaturization owing to manufacturing constraints. In contrast, monolithic piezoelectric actuators have streamlined structural configurations. For instance, MinRAR utilized monolithic piezoelectric elements machined to actuate each bending actuator without mechanical joints, although this design necessitates complex high-voltage excitation circuits. [3] Kilobot [4] is a representative robot actuated by two eccentric motors with a reduced structure, simple excitation, and low cost. In general, crawling robots are composed of two or more actuators that promote in-plane motion controllability. As such, a further reduction in the number of actuators can potentially hinder the mobility of the robot.
Recent studies have attempted to drive robots using a single actuator, because a single-actuator architecture can simplify the structure and control technique. Nonetheless, such robots remain underactuated for planar motion, thereby necessitating the development of innovative transmission or excitation techniques. To emulate insect-like agility in planar locomotion, capabilities for on-the-spot steering and forward motion are crucial. A hexapod robot, 1STAR, [5] actuated by a servo motor, was the first to achieve controlled in-plane motion. However, its transmission mechanism was intricate, and the robot was unable to perform spot turns owing to the coupling between translational and steering motions. PISCES, [6,7] a piezoelectric-actuated walking robot, can perform both forward and spot-turning movements with different walking modes modulated by varying actuation frequencies. However, the miniaturization of PISCES is constrained by its complex high-voltage excitation circuitry. Thus, the incorporation of eccentric motors can result in more compact robots. For instance, Simobot [8] simplified and miniaturized this approach using an eccentric motor that enabled it to execute turns with a small radius. Simobot can establish a forward trajectory by executing multiple small-radius turns successively. However, this forward motion was not a product of direct motion transmission and thus demanded intricate path planning and excitation techniques. Currently, existing single-actuator robots still have a range of limitations, such as motion uncertainty, control challenges, and structural intricacies, which inhibit their applicability in future scenarios.

The planar motions of all micro crawling robots typically depend on ground slipping of body components, actuated by either conventional active legs [1] or vibrations. [9,10] Robots such as RoACH, HAMR, and 1STAR, which employ active-leg-based slipping, are self-contained because of their efficient mechanisms. However, their transmission systems are intricate, and their crawling trajectories are imprecise. [11] In contrast, vibration-based slipping mechanisms (MinRAR, Kilobot, PISCES, and Simobot) offer stable motion transmission through simplified structures, although multiple actuators are required for agile locomotion.
The planar movement of the robot is a distinct type of motion analogous to the galumphing of seals (as shown in Figure 1C; Note S9, Supporting Information). Characterized by short front flippers and non-rotatable rear flippers, seals traverse land through a belly-wriggling motion that alternately lifts their fore and hind bodies off the ground. Compared with slipping, galumphing appears to be well suited for underactuated and miniaturized robots. Recent developments have demonstrated robots that utilize galumphing for forward movement. Piezoelectric soft robots [12,13] accomplish rapid galumphing (≈200 Hz) but suffer from motion instability owing to their flexible structures. Conversely, hopping robots, [14][15][16] actuated by servomotors, can achieve stable and efficient galumphing at lower frequencies (<50 Hz). Nonetheless, prior implementations situated the actuator on the vertical plane, necessitating an additional module for horizontal steering. This dual-actuator setup can result in asynchronous movements that impede forward motion. Addressing this concern, the inclination of an eccentric motor has emerged as a potential solution because it generates both vertical and horizontal forces. However, inducing stable galumphing via an eccentric motor presents a formidable challenge, as leg-bouncing motion is inherently unstable. [9,17] Furthermore, the issue of coordinating horizontal forces during stable galumphing remains unresolved, rendering the transmission of direct forward motion impossible. Therefore, a comprehensive understanding of galumphing motion and transmission, particularly when actuated by an eccentric motor, is yet to be achieved.

In this study, we introduce a novel vibration-based galumphing motion-transmission technique for single-motor-actuated microrobots. Utilizing an eccentric motor, the system achieved stable, microscale (30-400 μm) galumphing motions with precision. Notably, the prototype could execute both forward motion and lateral deflection solely through motor speed modulation, obviating the need for motor phase reversion. This phenomenon was identified as eccentric-rotation-dependent multidirectional transmission (ERDMT), establishing a direct methodology for planar motion transmission. Concurrently, the actuation of the eccentric motor enabled a combination of galumphing and slipping motions, thereby introducing spot-turning capabilities for planar movements. A five-degree-of-freedom (DOF) dynamic model was formulated to simulate these motions and elucidate the principles underlying the ERDMT. Subsequent validation of the dynamic model was conducted at both the micro and macro scales by capturing the actual movements of the prototype. Finally, we developed a 1.2-g self-contained galumphing and slipping robot (GASR, Figure 1) that precisely demonstrated spot turning and forward crawling. GASR revealed significant advantages in simplicity, agility, and precision, indicating promising prospects for exploration missions.
Galumphing Motion

Forward galumphing motions were analyzed using high-speed microscopic imaging. The prototypical motion sequence is shown in Figure 2A. The robot initiated the A-touching phase at 0 ms, with leg A ascending into the air at 2 ms. Subsequently, leg B was in contact with the ground at 4 ms, and the robot reverted to the A-touching phase at 6 ms. The maximum forward velocity, v_G, occurred in the aerial phase, predominantly owing to the actuation of the rotor. Additionally, v_G remained positive throughout all phases owing to inertia, whereas the angular speed w_Gy around the y-axis exhibited frequent directional changes. Each galumphing cycle concluded with a microscale forward displacement Δx.

Figure 2 outlines three specific galumphing modes based on distinct motor speeds. Mode 1, illustrated in Figure 2B, was observed at low motor speeds ranging from 720 to 826 rad s−1. Here, legs A and B alternated ground contact, achieving jumping peaks of 33 and 240 μm, respectively. As the motor speed increased (826-877 rad s−1, Figure 2C), the robot transitioned into galumphing mode 2, wherein the jumping peak alternated higher (289 μm) and lower (201 μm) in contiguous motor cycles. Finally, at a high motor speed (877-930 rad s−1, Figure 2D), leg B jumped higher (345 μm) in the first motor cycle and could only contact the ground in the second motor cycle. This phenomenon was attributed to the increased jumping height and reduced periodic time, resulting in inadequate time for another landing. Unstable galumphing motions ensued when the motor speeds exceeded 960 rad s−1, manifesting as erratic motion of leg B and inconsistent ground-phase timing for leg A (Figure 2E). The galumphing modes were also observed in the modeling results (Movie S1 and Figure S6, Supporting Information). The theoretical and practical jumping heights from the approximate calculation revealed a small deviation (<100 μm) that did not affect the validation of the ERDMT.

Planar Motions

The progressive forward motion of the robot was orchestrated according to a defined motion cycle. Consequently, rectilinear movement was achieved by nullifying the angular speed. Nonetheless, the robot predominantly deviated either to the left or right owing to a specific asymmetry factor. Influenced by the eccentric rotor, this factor can reverse the turning direction. Thus, the critical status demarcating left from right turning constitutes rectilinear motion. In this section, we examine both the intricate factors governing ERDMT and its behavioral tendencies.
ERDMT in Microscale

The dynamics model replicates ERDMT, hinging on the variation in ground-contact timing. During stable x-y plane motion, the angular velocity w_G of the body becomes periodic in each microscopic motion cycle (two motor cycles), indicating that w_G always reverts to its initial value at the end of each cycle. Consequently, the total angular momentum about the z-axis is zero in each cycle. Given the motion pattern of the rotor, the angular momentum imparted by the motor is zero. Under this zero-angular-momentum condition, the angular momentum resulting from frictional forces must be similarly null.

Frictional contributions are explicitly outlined in Figure 3. The red-shaded regions denote periods when leg A makes ground contact, while the gray-shaded regions signify airborne intervals for leg A. Upon leg A's contact with the ground, the generated impact frictional force f_Ay gives rise to an impact frictional torque (IFT). Likewise, sustained ground contact produces a ground frictional torque (GFT). The directional attributes of the frictional torques were determined by the actual velocity of leg A in the ground co-ordinate system. Thus, these frictional contributions are modulated by the actual speeds and further influenced by w_G. Note that the contributions from leg B were considered negligible.
The specifics of the ground-contact timing can affect the w_G curve because the zero-angular-momentum condition requires a specific frictional force direction. At a motor speed w of 782 rad s−1 (Figure 3A), both IFTs were negative owing to the positive rotational direction relative to the COM when landing. Thereafter, the GFTs must provide equal but opposing momentum to counterbalance the IFT contributions. This resulted in a predominantly negative rotation during the subsequent ground phase, rendering the mean value of w_G negative (Figure 3A). Similarly, the rotation angle of each cycle, Δq_G, is also negative. Given that the robot operates as an undamped forced oscillator, the phase position of w_G is contingent on the phase position of the excitation (motor force). In other words, w_G is maximized whenever the rotor returns to its initial position, that is, at integer multiples of the motor period. Thus, variations in the motor speed can only induce vertical shifts of the w_G curve to satisfy the zero-momentum condition.

Different ground-contact timings result in a unique y-positioning of the w_G curve. Similarly, when the motor speed increased to 877 rad s−1 (Figure 3B), the initial IFT was negative but became positive at the subsequent landing. Thus, the value of w_G should remain mostly transversely symmetric in the ground phases owing to the directional demand of the GFT. Consequently, the rotation angle Δq_G was approximately zero. However, when the motor speed increased to 940 rad s−1, the ground-touching timing varied, causing the w_G curve to shift upward, such that the mean value of w_G was positive. Thus, the rotation angle Δq_G was positive.
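This bookkeeping reduces to integrating the body angular velocity over one motion cycle (two motor cycles). The following minimal Python sketch illustrates the idea; the function names and the sign convention (positive Δq_G read as a left turn, as suggested by the trend in Figure 4) are our illustrative assumptions, and w_G would come from the dynamic model or from motion capture.

```python
import numpy as np


def rotation_per_cycle(t, w_G):
    """Net rotation angle Δq_G = ∫ w_G dt over one motion cycle.

    t:   (n,) sample times spanning two motor cycles [s]
    w_G: (n,) body angular velocity about the z-axis [rad/s]
    """
    dt = np.diff(t)
    return float(np.sum(0.5 * (w_G[1:] + w_G[:-1]) * dt))  # trapezoidal rule


def turning_tendency(t, w_G, tol=1e-4):
    """Classify a cycle as 'left', 'right', or 'straight' from Δq_G."""
    dq = rotation_per_cycle(t, w_G)
    if abs(dq) < tol:
        return "straight"
    return "left" if dq > 0 else "right"
```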
ERDMT in Macroscale

ERDMT induced various planar trajectories under varying motor speeds. In particular, w_Gtest, v_Gtest, w_Gmodel, and v_Gmodel are defined as the mean body angular and translational speeds of the COM in testing and modeling, respectively. In Figure 4A, during stable motion periods (2.5 s) at constant motor speeds, as the motor speed increased across the tests, the magnitude of w_Gtest increased to −1.25 rad s−1 but later decreased. As the motor speed sequentially increased, w_Gtest reversed its direction at ≈870 rad s−1 and increased to ≈0.5 rad s−1 at ≈930 rad s−1. Overall, the modeling results revealed a similar trend. When the motor rotated at 630 rad s−1, the mean w_Gmodel was −0.38 rad s−1, and w_Gmodel decreased to −2.27 rad s−1 as the motor speed increased to 835 rad s−1. Thereafter, the magnitude of w_Gmodel returned to 0 at a speed of 877 rad s−1. Subsequently, w_Gmodel became positive and increased slightly to 0.45 rad s−1. The modeling and testing results revealed a highly consistent tendency. As the motor speed increased over the same range, the linear speed v_Gtest increased from 11.9 to 43.9 mm s−1, whereas the linear speed v_Gmodel increased from 20 to 48 mm s−1.

The ERDMT was thus validated by both computational modeling and experimental testing. Minor discrepancies between the two sets of results could arise from variations in the parameter settings or approximate calculations. The inversion of the rotational direction occurred at a motor speed of 877 rad s−1. At this pivotal speed, the robot exhibited zero angular velocity, traversing a straight trajectory (purple box in Figure 4A). Moreover, in galumphing mode 1, the magnitude of w_Gmodel primarily increased; the value of w_Gmodel decreased in galumphing mode 2; and the critical speed of ERDMT was close to the boundary between galumphing modes 2 and 3. The prototype robot demonstrated capabilities for straight, left, and right crawling motions, as illustrated in Figure 4B. Within a motor speed range of 754-803 rad s−1, the robot veered to the right with an average radius of 18.5 mm and a mean speed of 21.8 mm s−1. Conversely, at higher motor speeds ranging from 912 to 952 rad s−1, the robot deviated to the left, registering a radius of 130.4 mm and a speed of 40.3 mm s−1. A distinct speed window spanning 874-894 rad s−1 enabled near-linear motion with an average speed of 41.2 mm s−1. The root-mean-square errors of the test trajectories during right, left, and straight crawling were 8.99, 9.81, and 8.53 mm, respectively, when compared with the computational models. Notably, the actual straight-crawling trajectories exhibited a mean straightness error of 4.14 mm, whereas the modeled trajectories displayed a significantly lower straightness error of only 0.09 mm. This disparity can be primarily attributed to motor speed fluctuations at a constant voltage, averaging ±28.4 rad s−1.
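These speed windows suggest a simple open-loop direction command. The sketch below merely encodes the empirical windows reported above; the dictionary, the function name, and taking the window midpoint as the setpoint are our illustrative choices. In practice the robot is driven by voltage/PWM, so a speed-to-voltage calibration (not shown) would also be required, and the boundaries would need recalibration for any other robot or surface.

```python
# Empirical motor-speed windows (rad/s) from the crawling tests above.
SPEED_WINDOWS = {
    "right":    (754.0, 803.0),   # mean radius 18.5 mm, 21.8 mm/s
    "straight": (874.0, 894.0),   # near-linear motion, 41.2 mm/s
    "left":     (912.0, 952.0),   # mean radius 130.4 mm, 40.3 mm/s
}


def speed_setpoint(direction):
    """Mid-window motor-speed command for a desired heading tendency."""
    lo, hi = SPEED_WINDOWS[direction]
    return 0.5 * (lo + hi)
```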
GASR: Performance Experiments and Evaluation

GASR is a miniature crawling robot actuated by a single eccentric motor and was developed by incorporating galumphing and slipping motions. The robot employs a model-based design methodology, as described in Note S4 (Supporting Information). Comprising merely four components, as illustrated in Figure 1, GASR can execute both forward and steering movements under variable driving voltages. These voltages were modulated by the PWM output of a PCB. The 25-mm-long robot, weighing only 1.2 g, is self-powered and remote-controlled via Bluetooth.

GASR operates in three distinct modes: straightforward crawling, forward left/right crawling, and on-the-spot turning. In Figure 5A, under an applied voltage of 1.02 V, the robot steered counterclockwise at a rate of 45°/s. Conversely, upon applying a voltage of −1.02 V, clockwise steering was observed at −40°/s (Movie S2, Supporting Information). The forward motion (Figure 5B) was generated under 2.01 V, with a mean speed of 28.3 mm s−1, corresponding to 1.1 body lengths (BL) per second. The robot initiated motion with a minor deflection angle of 12° and a self-misalignment angle of 14° relative to its direction of movement. At a reduced actuation voltage of 1.95 V, the robot executed a clockwise circular trajectory characterized by an angular speed of 10°/s and a radius of 232 mm. Moreover, under a voltage of 2.08 V, a counterclockwise circular motion was observed, featuring an angular speed of 11°/s and a radius of 143 mm. Additionally, GASR demonstrated stable, straight-line crawling on both firm (aluminum board) and soft substrates, as shown in Movie S2 (Supporting Information). A repetitive test on foam board indicated a mean lateral error across six trials of 1.1 ± 0.35 mm, or 1.2% (Figure 5D).

GASR manifests both agility and precision in various mimetic applications. Figure 5E,F depicts the robot's proficiency in following square and "Z"-shaped trajectories. It accomplished continuous straight and turning movements to follow the desired paths with minor straightness errors (<6 mm) and turning radii (<5 mm), completing each path in 29 and 30 s, respectively. Figure 5G and Movie S3 (Supporting Information) show the capability of GASR in traversing tunnels: it navigated through a 25-mm-wide tunnel, executed a spot turn to proceed through another tunnel, and returned to its original position. Furthermore, the robot was competent in ascending slopes. On a 15° incline, as shown in Figure 5H, GASR maintained a comparable straight-line speed of 28.1 mm s−1 by slightly elevating the applied voltage to 2.21 V. The maximum uphill crawling angle is 22°, while the maximum downhill crawling angle is 30°. Additionally, the robot can currently function well on surfaces with mild roughness (P320-grade sandpaper, aluminum plate, foam board, etc.). Details can be found in Note S10 (Supporting Information).

Discussion

Stable galumphing motion can be achieved over a broad range owing to the proper structural design. The eccentric motor, oriented with its output in the vertical plane, generates varying vertical forces that actuate the galumphing movement. The prototype exhibits a broad, stable galumphing range, and its stability is grounded in three key elements: i) the strategic placement of the motor at the bottom of the structure creates a small tilt angle relative to the ground; ii) the squared shape of the robot body extends the distance between the two legs, consequently minimizing instability due to pitch-angle fluctuations; and iii) compliant legs mitigate unexpected vibrations, whereas a rigid main body maintains a stable natural frequency. When periodic galumphing was achieved, the system remained stable and robust, even when subjected to input fluctuations (±28.4 rad s−1). In contrast, soft robots, with their inherently variable forms, encounter difficulties in achieving stable galumphing. Furthermore, the motion becomes unstable if the vibration absorbers are removed, as rigid legs tend to cause high jumps, leading to unstable galumphing. Nonetheless, configurations exist, such as those employing a rigid hind leg, that enable both low jumps and stable movements, as shown in Figure S8 (Supporting Information).

The ERDMT-based galumphing motion is advantageous in terms of motion complexity. Unlike a previous study [18] that reported unpredictable movements, we developed a theoretically predictable prototype capable of straightforward crawling, in accordance with the ERDMT model. The prototype could follow a straight line with minimal error; the residual error was likely attributable to motor instability and ground unevenness. Specifically, the overall trajectory was influenced by fluctuations in motor speed at critical velocities. This critical speed for the ERDMT model is situated near the transitional boundary between galumphing modes 2 and 3.
Variations in speed can give rise to asymmetric crawling paths. Notably, the rotation angles change more swiftly when the motor speeds decrease, as illustrated in Figure 4. This is because lower motor speeds result in two negative impact frictional torques, whereas higher speeds produce one negative and one positive torque. Consequently, the overall rotational speed of the robot decreases rapidly with decreasing motor speeds. Additionally, the implementation of galumphing ensures that the microscale trajectory does not contain backward components, in contrast to conventional eccentric-motor robots that rely on stick-slip mechanisms. [9] This demonstrates greater efficiency, as the COM maintains a forward velocity throughout the movement.

Drawing on the galumphing and slipping motions, we propose a lightweight multimodal crawling robot. [8][10][11][12][13][17][19] Soft crawling robots are predominantly externally powered and can weigh as little as 0.9 g. [19] Conversely, many rigid crawling robots utilize battery power and can weigh as little as 1.7 g. [2] The GASR possesses a light weight of 1.2 g and an average speed of 1.1 BL s−1, nearly rivaling externally powered robots. Furthermore, robots employing unstable galumphing and vibration-based slipping are located within the red and blue regions of the performance space, respectively. Robots that utilize galumphing are characterized by high speed and structural simplicity. However, issues such as unstable or asynchronous galumphing can impair forward motion in two-module configurations. Conversely, vibration-based slipping robots benefit from simple structures and stable motions attributable to the underlying slipping principle. In this study, we amalgamate stable galumphing and vibration-based slipping into a multimodal robot capable of both stable forward movement and effective steering.

GASR revealed high agility and simple excitation among existing small-scale crawling robots. Meanwhile, its weight and power consumption were also advanced among untethered robots (see details in Table S4, Supporting Information).

The robot incorporates advantageous elements from both eccentric-rotation-dependent multidirectional transmission and multimodal locomotion, resulting in a structurally simple and cost-effective design ($3.7). The required excitation is uncomplicated, necessitating only a one-way low-voltage input (<3 V). Moreover, ERDMT is a universal model that is easily replicable using common materials, as elaborated in Note S5 (Supporting Information). Additional benefits of vibration-based actuation include stable and precise movement, even without feedback control. This results in a significantly reduced trajectory error compared with conventional legged crawling robots. [11] Although single-motor actuation typically exhibits less agility and is generally deficient in spot-turning capabilities, we successfully achieved both spot-turning and straight crawling through multimodal locomotion. Consequently, the ERDMT effectively expands the repertoire of motion principles available for crawling robots.
Structural Configuration

In contrast to previous studies [9,20] that positioned eccentric motors vertically for stable actuation and utilized dual-motor configurations for steering, the present study employs a novel single-motor configuration to significantly reduce the physical dimensions of the robot. In this underactuated system, an eccentric motor is strategically placed at a tilt angle to exert forces on both the vertical and horizontal planes. As depicted in Figure 6A, the motor was installed in the lower section of the robot. Inspired by seal locomotion, which necessitates coordinated movements of the fore and hind bodies, we carefully designed two distinct legs: leg A incorporates the tilted motor, whereas leg B comprises a compliant Kapton plate situated at the rear end of the robot. These legs alternate in contact with the ground during the galumphing gait.

Addressing the challenges associated with modeling, most vibration-based slipping robots are equipped with circular bodies and tri-leg configurations. While aiming for a geometric center as the COM, they often fail to achieve this ideal configuration owing to errors induced during fabrication and assembly. In the current design, the circular outer shape of the robot, attributable to the motor, provides a self-aligning mechanism through the tumbler effect. Consequently, the COM consistently resides in the longitudinal mid-perpendicular plane, thereby mitigating modeling errors.

According to the dynamic model, the design parameters can significantly affect the movement. The ratio between the robot's weight and the actuation is crucial: the motor output force should match the robot's weight within a specific range; otherwise, a larger motor output force would lead to unstable motion, and a smaller output would not be able to actuate ERDMT. Since the motor speed varies from ≈600 to ≈1000 rad s−1, the critical motor speed for straight-motion transmission should lie in the middle of this range. Thus, the ratio was designed to be 3.5 for the best performance of ERDMT (see Notes S7 and S8, Supporting Information).

Modeling Overview

Three Cartesian co-ordinate systems were constructed (Figure 6B,C): the motor co-ordinate system (x_motor, y_motor, z_motor), the body co-ordinate system (x_1, y_1, z_1), and the ground co-ordinate system (x_0, y_0, z_0). The planar motion of the robot exhibits three DOFs, and considering the galumphing motions in the x_0-z_0 plane, the dynamic model should also allow translational motion along the z_0-axis and rotation around the y_1-axis. The five-DOF dynamics model was therefore implemented by combining the two-DOF galumphing motion (in the vertical plane) and the three-DOF planar motion (in the horizontal plane). We assumed that the eccentric motor was spinning at a constant speed w with zero output torque. To obtain the output force F, previous studies [8,21] applied the centrifugal-force equation and neglected internal frictional contributions. To improve the modeling accuracy, we measured the output force using an experiment-based lookup table.

When the output force F of the motor is sufficiently large to elevate the robot into the air, it can galumph. Unlike previous soft robots that galumph unstably, [12,13] GASR can perform regular and predictable galumphing. As depicted in Figure 6D, four phases of motion are involved in a complete galumphing cycle: both-touching, A-touching, B-touching, and aerial. The dynamic parameters of each phase were sequentially solved and are detailed in Note S1 (Supporting Information).
Model of Galumphing Motion

The robot initiates its motion in the both-touching phase, where both leg A and leg B are in contact with the ground. Assuming that the eccentric mass occupies a given angular position at time t, the output force F of the motor can be expressed as F^1 in the body co-ordinate system. Thus, the torques induced with respect to the mass center G (M^1_G), leg A (M^1_Ay), and leg B (M^1_By) can be obtained. Suppose the vertical component of F is F^1_z; leg A/B jumps into the air when any of F^1_z, M^1_Ay, or M^1_By is adequately large, and the robot performs a swing motion around the supporting leg (A-touching/B-touching phase). If both legs jump off the ground, the robot moves freely in the air (aerial phase) but lands on the ground within a few moments. These phases occur in certain sequences and form periodic galumphing motions.

A landing collision causes an unsmooth motion transition. For instance, consider the collision process involving leg A. For the transient states before and after the collision, leg A exhibits velocities v_Az and v'_Az, whereas the COM exhibits velocities v_Gz and v'_Gz as well as angular velocities w_y and w'_y, respectively. Assuming the collision recovery coefficient is e, the classical collision formula gives

v'_Az = −e · v_Az,

as only the single force N_A acts on leg A and mg is negligible during the collision. Thus, according to the law of conservation of angular momentum around leg A (N_A has no moment arm about the contact point), we can derive

I_Gy · w_y + m · v_Gz · l_A = I_Gy · w'_y + m · v'_Gz · l_A,

where m is the robot mass, I_Gy is the rotational inertia around the y-axis at the COM, and l_A denotes the horizontal distance from leg A to the COM. Thereafter, we can obtain w'_y and v'_Gz. Assuming the collision duration is Δt, we can obtain the impact force N_A during the collision based on momentum conservation along the z-axis:

N_A = m (v'_Gz − v_Gz) / Δt.

The action of leg A can be a rebound or a touchdown depending on the speed of leg A after the collision. Thereafter, the motion status of the robot can be decided (detailed in Figure S2, Supporting Information).
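The collision relations above can be solved as a small linear system for the post-impact state. The following Python sketch does this under the reconstruction given here; the function name, the sign convention for the lever arm l_A, and the requirement I_Gy ≠ m·l_A² are our illustrative assumptions rather than the authors' implementation.

```python
import numpy as np


def collision_update(v_Gz, w_y, m, I_Gy, l_A, e, dt):
    """Post-impact COM vertical speed, pitch rate, and mean impact force.

    Uses v'_Az = -e * v_Az (restitution) together with conservation of
    angular momentum about the leg-A contact point (N_A has no moment arm
    there, and mg is neglected during the brief impact).
    """
    v_Az = v_Gz + w_y * l_A                  # leg-A speed before impact
    v_Az_post = -e * v_Az                    # classical restitution law
    L = I_Gy * w_y + m * v_Gz * l_A          # angular momentum about leg A
    # Solve for (v_Gz', w_y'):
    #   v_Gz'  + l_A * w_y'        = v_Az_post
    #   m*l_A*v_Gz' + I_Gy * w_y'  = L        (requires I_Gy != m*l_A**2)
    A = np.array([[1.0, l_A], [m * l_A, I_Gy]])
    b = np.array([v_Az_post, L])
    v_Gz_post, w_y_post = np.linalg.solve(A, b)
    N_A = m * (v_Gz_post - v_Gz) / dt        # mean impact force over dt
    return v_Gz_post, w_y_post, N_A
```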
Model of Planar Motion

The iterative method is favored for modeling owing to its capability to handle diverse phase sequences, irrespective of their stability. In the context of galumphing motion, the timing and magnitude of the ground forces N_A and N_B, as well as the frictional forces f_A and f_B, can be altered by the centrifugal forces. Owing to the synergy between the motor and frictional forces, the robot's planar motion can be obtained. Assuming that the robot has rotated by an angle q_G about the z-axis at time t, the corresponding rotation matrix R^0_1 can be derived by transforming the body co-ordinate system to the ground co-ordinate system.

To obtain the magnitudes and directions of the frictional forces f^0_A and f^0_B, the actual velocities of the two legs in the ground co-ordinate system, v^0_A and v^0_B, are first obtained from the velocity and angular velocity of the COM. The frictional force f^0_A then opposes the actual leg velocity according to the Coulomb law,

f^0_A = −μ N_A · v^0_A / ||v^0_A||.

The frictional force f^0_B can be obtained similarly, and the acceleration of the COM in the ground co-ordinate system, a^0_G, can be derived as

a^0_G = ( F^0_m + f^0_A + f^0_B + N^0_A + N^0_B + m g^0 ) / m,

where F^0_m denotes the motor force in the ground co-ordinate system and g^0 is the gravitational acceleration vector. To obtain the angular acceleration α^1_Gz, the overall torque M^1_G acting on the COM in the body co-ordinate system can be readily deduced as

M^1_G = r^1_A × f^1_A + r^1_B × f^1_B,

where r^1_A and r^1_B are the positions of legs A and B relative to the COM, and the angular acceleration then follows as

α^1_Gz = M^1_Gz / I_Gz,

where I_Gz denotes the rotational inertia around the z-axis at the COM. Finally, we obtain the motion parameters in the ground co-ordinate system by the iterative algorithm, namely the position of the COM, P^0_G, and the angle of the COM about the z-axis, q^1_Gz:

v^0_G(t + Δt) = v^0_G(t) + a^0_G Δt,  P^0_G(t + Δt) = P^0_G(t) + v^0_G(t) Δt,
w^1_Gz(t + Δt) = w^1_Gz(t) + α^1_Gz Δt,  q^1_Gz(t + Δt) = q^1_Gz(t) + w^1_Gz(t) Δt.
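One iteration of the planar model can thus be written as an explicit-Euler step. In this sketch, the leg positions r_A and r_B and the horizontal motor force F_m_xy are expressed in ground co-ordinates for simplicity, the motor's output torque about the z-axis is taken as zero (as assumed in the model), and the kinetic-only friction law, time step, and state layout are illustrative assumptions.

```python
import numpy as np


def planar_step(state, F_m_xy, N_A, N_B, r_A, r_B, m, I_Gz, mu, dt):
    """Advance (P_G, q_Gz, v_G, w_Gz) by one time step of the planar model."""
    P_G, q_Gz, v_G, w_Gz = state

    def friction(N, r):
        # Contact-point velocity = COM velocity + w x r (planar kinematics).
        v_leg = v_G + w_Gz * np.array([-r[1], r[0]])
        speed = np.linalg.norm(v_leg)
        if speed < 1e-9:
            return np.zeros(2)               # static case omitted for brevity
        return -mu * N * v_leg / speed       # Coulomb kinetic friction

    f_A, f_B = friction(N_A, r_A), friction(N_B, r_B)
    a_G = (F_m_xy + f_A + f_B) / m                          # horizontal Newton
    M_Gz = (r_A[0] * f_A[1] - r_A[1] * f_A[0]
            + r_B[0] * f_B[1] - r_B[1] * f_B[0])            # torques about COM
    alpha = M_Gz / I_Gz
    # Explicit Euler integration of the planar states.
    return (P_G + v_G * dt, q_Gz + w_Gz * dt,
            v_G + a_G * dt, w_Gz + alpha * dt)
```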
Conclusion and Future Works

In this study, we advance the field of insect-scale crawling robots by introducing a novel effect, namely the ERDMT. Utilizing a single eccentric motor, our prototype robot induced a seal-like galumphing motion under the principles of the ERDMT. Through rigorous dynamic analysis and empirical verification, the robot was found to be capable of transmitting motion in varying directions by modulating the motor speed. Moreover, the robot exhibited stable straight-line crawling facilitated by a straightforward PWM scheme, thereby simplifying the circuit architecture. Consequently, we developed an untethered GASR with a weight of only 1.2 g, incorporating both straight galumphing and spot-turning capabilities. Although microrobots often necessitate intricate and laborious fabrication processes, the GASR circumvented a majority of these limitations through its simplified two-legged structural design. The operational mechanisms and architecture of GASR displayed remarkable versatility in planar motion, tunnel traversal, and slope climbing. This research represents an inaugural exploration into this new motion paradigm. Future research is expected to focus on enhancing the feedback trajectory control of the robot through mechanisms of self-perception and adaptation, along with the fine-tuning of speed variables. Additionally, this rudimentary structural design offers the potential for further miniaturization by customizing motors and could feasibly be scaled up for navigation across diverse and challenging terrains. Owing to its rigid body and uncomplicated actuation, ERDMT-based motion can be integrated with other systems. This establishes promising avenues for future multifunctional devices. For example, the vibrational functionality of mobile phones could be repurposed for motion control, thereby enabling capabilities such as fault detection and target tracking.

Experimental Section

Motor Speed Measurement: To accurately validate the dynamics model, the actual motor speed must be recorded. Thus, the signal from both poles of the DC brushed motor (Figure S3B, Supporting Information) was measured with an oscilloscope (DSOX1204G-200MHz, Keysight; Figure S5, Supporting Information) at 10 kHz, which was ≈60 times the maximum frequency of the motor. Thereafter, the peaks produced by switching of the brush were recorded. Accordingly, the motor speed could be calculated accurately using the equation w_motor = 0.5 f_os, where w_motor refers to the motor speed and f_os refers to the frequency of the peaks recorded by the oscilloscope. The actual rotational motion of the rotor was also directly recorded at 10 kHz using a high-speed camera (Ispeed-221 mono, Hadland). The results revealed that the oscilloscope-based measurement method exhibited a mean error of 1.07% with respect to the observed speed. The output-force test of the motor is described in Note S2 (Supporting Information).

Galumphing Motion Observation: Galumphing motions occur at the frequency of the motor and at microscopic dimensions (30-500 μm). A high-speed camera (Ispeed-221 mono, Hadland) with a standard lens was used to record a complete view of the galumphing motion at beyond 1000 Hz. For a close-up perspective, the same camera with a 26-mm f4 microlens (Kuangrenweiyan, Taobao) was used. Notably, the robot must be illuminated using a high-power light (EF-200, >23 000 lm, Jinbei). The motor speed was measured using the aforementioned method.

Planar Motion Test: To validate the modeling results, the planar motions of the robot were observed under supply voltages from 1.65 to 2.50 V in increments of 0.05 V, repeated three times for each voltage. A DC power supply (UTP1306S, UNI-T) and the same oscilloscope were adopted to measure the motor speed, and the motion of the robot was observed using a motion-capture system (VICON Vantage V5, Logemas; accuracy: 0.1 mm) at 200 Hz. The motor speed was measured using the aforementioned method, and the details of the experimental setup are presented in Note S3 (Supporting Information).

Statistical Analysis: The sample size (n) for each statistical analysis was n = 3. The data are expressed as mean ± SD (standard deviation). Statistical analysis of the data was performed using OriginPro 2021.

Figure 1. Single-motor-actuated crawling robot. A) GASR comprises a rigid body (PCB and battery), eccentric motor, and compliant hind leg (composed of Kapton), with all components bound together. The robot contacts the ground with the motor and hind legs. B) GASR can perform forward motion in galumphing mode and steering motion in slipping mode. C) A diagram of galumphing, the characteristic gait of seals during terrestrial movement.
Figure 2. Observation and modeling results of galumphing motion. A) Observed motion sequences corresponding to galumphing mode 2, including three phases: A-touching, aerial, and B-touching. The period of motion corresponds to one motor cycle (6 ms). The forward velocity of the center of mass (COM), v_G, remained positive, whereas the angular velocity of the body with respect to the y-axis, w_Gy, altered its direction. B) Mode 1 denotes a fundamental mode with a consistent galumphing height of both legs, and the jumping frequencies of A and B were the same as the motor frequency. C) In mode 2, the height of leg B alternated high and low in a constant sequence, whereas the jumping frequencies of A and B remained the same as the motor frequency. D) In mode 3, leg B landed only once every two motor cycles. E) In unstable galumphing, leg B exhibited indeterminate jumping and leg A exhibited unstable touching timing with the ground.

Figure 3. Driving principle of the ERDMT. The red-shaded regions correspond to leg A touching the ground, whereas the gray-shaded areas correspond to leg A jumping in the air. When leg A lands, the impact frictional torque (IFT) resists robot motion. In the following ground phase, the ground frictional torque (GFT) is similarly generated. In every two motor cycles, the contributions of the IFTs and GFTs entirely compensate each other to maintain the zero-angular-momentum condition. The frictional and motor contributions together determine the angular velocity w_G of the body, and their integration yields the rotation angle Δq_G. The curve of w_G varies with motor speed to satisfy the zero-angular-momentum condition. A) At a low motor speed (w = 782 rad s−1), both IFTs were negative, such that both GFTs were positive, indicating that the robot should rotate in the negative direction. Consequently, the mean value of w_G was negative, and the robot rotated by a small negative angle Δq_G in every motion cycle. B) At the critical motor speed (877 rad s−1), the ground phase of leg A changed, and a single IFT remained negative, whereas the other reverted to positive. The w_G curve shifted up to a mean value of nearly zero. Therefore, the total rotation angle became nearly zero. C) When the motor speed further increased (920 rad s−1), the body angular velocity w_G curve further shifted upward, resulting in a positive rotation angle.

Figure 4. Crawling trajectories induced by ERDMT. A) Evaluation of angular velocities versus motor speeds. At lower motor speeds, the robot stick-slipped on the ground and began galumphing as the motor speed increased. Zero-angular-speed galumphing appeared at ≈870 rad s−1. B) Comparison of modeling and testing trajectories. The dashed and solid lines represent the modeling and testing trajectories, respectively. Magnified subplots display the microscopic movements of the COM in the x-y plane, and the purple curvatures represent the trajectory in two motor cycles.
Figure 5. Performance of the miniature and agile robot GASR. A) With a low input voltage, GASR steered in place at 45°/s. B) With a 2.01 V input, GASR crawled straight forward at 1.1 BL s−1. C) By slightly increasing or decreasing the input voltage with respect to that of straight moving, GASR exhibited left or right crawling, respectively. D) GASR revealed high trajectory precision in the repetitive test (n = 6 repeats; foam-board substrate). GASR crawled following a Z-shaped path (E) and a square path (F). G) GASR crawled in a loop with two narrow tunnels. H) GASR could climb a 15° slope at 28.1 mm s−1 with a 2.21 V input voltage. I) Maximum crawling speeds versus mass of robots. Robots in the red region perform unstable galumphing, whereas those in the blue region perform vibration-based slipping. The purple region refers to robots with vibration-based slipping and stable galumphing locomotion.

Figure 6. Schematic of the dynamics model. A) Illustrative 3D model of the structural configuration. Schematic of the dynamics model in B) top view and C) side view (section along the x-z plane). D) Galumphing motions in the x_0-z_0 plane consisting of four phases: (a) both-touching: both legs stand on the ground; (b) B-touching: only leg B stands on the ground, and the robot swings around leg B; (c) A-touching: only leg A stands on the ground; (d) aerial: both legs are above the ground, and the robot's motion is an overlay of rotation around the COM (point G) and translation along the z-axis.
Integrated Informatics Analysis of Cancer-Related Variants

PURPOSE The modern researcher is confronted with hundreds of published methods to interpret genetic variants. There are databases of genes and variants, phenotype-genotype relationships, algorithms that score and rank genes, and in silico variant effect prediction tools. Because variant prioritization is a multifactorial problem, a welcome development in the field has been the emergence of decision support frameworks, which make it easier to integrate multiple resources in an interactive environment. Current decision support frameworks are typically limited by closed proprietary architectures, access to a restricted set of tools, lack of customizability, Web dependencies that expose protected data, or limited scalability. METHODS We present the Open Custom Ranked Analysis of Variants Toolkit 1 (OpenCRAVAT), a new open-source, scalable decision support system for variant and gene prioritization. We have designed the resource catalog to be open and modular to maximize community and developer involvement, and as a result, the catalog is being actively developed and growing every month. Resources made available via the store are well suited for analysis of cancer, as well as Mendelian and complex diseases. RESULTS OpenCRAVAT offers both a command-line utility and a dynamic graphical user interface, allowing users to install with a single command, easily download tools from an extensive resource catalog, create customized pipelines, and explore results in a richly detailed viewing environment. We present several case studies to illustrate the design of custom workflows to prioritize genes and variants. CONCLUSION OpenCRAVAT is distinguished from similar tools by its capability to access and integrate an unprecedented amount of diverse data resources and computational prediction methods, which span germline, somatic, common, rare, coding, and noncoding variants.

INTRODUCTION Next-generation sequencing technologies have greatly reduced the cost of genome sequencing, increasing the availability of genomic data and the need for methods to evaluate genomic variants. The majority of variants have unclassified phenotypic consequences, and their systematic exploration is complicated by data resources that are not easily obtainable or combinable. There is a need for more effective, user-friendly genome analysis tools that include interdisciplinary annotations and resources to suit the needs of both novices and bioinformatics experts. Rapid identification of somatic variants relevant to the progression and treatment of cancer is of particular importance to facilitate timely precision patient care. Maintaining patient privacy and data security places additional constraints on variant annotation and analysis and requires systems that do not expose protected data. Highly informative variant and gene characteristics are distributed across thousands of published works, spanning resources from the medical, biologic, and bioinformatics domains, including experimental assays, computational variant effect prediction, evolutionary context, population databases, and established pharmacologic relevance. This abundance of variant and gene annotations challenges researchers to broadly discover and deploy the best resources, as well as incorporate them within custom annotation pipelines. Furthermore, prediction algorithm software often requires nontrivial computational expertise to install, configure, and run.
Recently, genome-wide precomputation of predictor outputs for every possible input variant has been undertaken to make computational tools more accessible. Databases that host these precomputes, such as dbNSFP (database for nonsynonymous single-nucleotide polymorphisms' functional predictions), 2 have broadened access to predictor outputs, exposing users to new tools. However, the datasets available from these resources were designed for machine rather than human access and require substantial programming investment before a user can incorporate them into an annotation pipeline. Decision support framework (DSF) software tools have been created to integrate multiple annotation resources. 4 Well-designed DSFs require substantial software development; therefore, the majority of DSFs are not freely available. The remaining minority of DSFs are either Web-based portals that expose private data or downloadable tools with complicated installation and configuration requirements. [5][6][7] One such Web-based DSF is the Cancer-Related Analysis of Variants Toolkit 8 (CRAVAT), which prioritizes somatic mutations. 9 In this work, we present OpenCRAVAT, an extension of CRAVAT with improved data security, a much larger collection of annotations, and the capability to generate dynamic and customizable pipelines.

OpenCRAVAT is a freely available open-source framework for the annotation and visualization of human genetic variation and genomic elements. The framework can rapidly generate publication-quality visualizations of gene networks, provide the distribution of variants per protein, and support BAM file visualization with an embedded version of the Integrative Genomics Viewer (IGV). 10 Designed to comprehensively annotate both well-characterized and novel somatic and germline variation, the framework can be flexibly adapted to suit a wide spectrum of human variation research projects. In this article, we describe the underlying architecture and present several case studies.

Framework Architecture

OpenCRAVAT is written in Python, and all code is stored in a public repository. It is open source and free of charge to users, with both command-line and graphical user interface (GUI) functionality. OpenCRAVAT can be installed via a user-friendly wizard or through pip. The framework is built around 2 main components: a base module and a store where users can download additional modules. Modules include input format converters, gene mappers, annotators, output format reporters, and graphical widgets. The base module includes converters that support Variant Call Format (VCF), tab-delimited (TSV), and comma-delimited (CSV) text files; a mapper that projects genome positions to transcript, protein sequence, and protein structure coordinates; and a set of basic widgets and reporters that generate output results files in sqlite3, Excel, TSV, CSV, and VCF formats. OpenCRAVAT supports the GRCh38, GRCh37, and GRCh36 human genome reference assemblies, and variants are mapped to all GENCODE isoforms. 11 The store offers a large selection of modules, including additional installable converters (Ancestry, 23andMe, dbSNP identifiers); annotators for somatic, de novo, and germline variation (coding and noncoding); and associated widgets and reporters (VCF, pipeline-friendly TSV and CSV). The store is available through both the GUI and the command-line interface. Within the GUI, available modules are displayed in a format similar to an app store, where each tool is represented by a tile containing documentation, update status, and one-click installation.
After installation, OpenCRAVAT downloads each resource locally, which enables secure analysis of private data. The open store is built for continuous community-driven development, so that newly developed tools and resources can be uploaded and made available to a wide audience. Addition of new resources to the store requires data descriptions, appropriately formatted annotation data, and a small script to allow incorporation of the data by OpenCRAVAT (a schematic of such a script is sketched below). Module developers can select whether to openly publish their data or restrict access, with the option to share the module directly with collaborators.
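The following Python sketch illustrates the general shape of such an annotator script. The class and method names (BaseAnnotator, setup, annotate, cleanup), the data-file path convention, and the table and column names are our assumptions based on the published module template and should be verified against the current developer documentation; the data file itself is hypothetical.

```python
import os
import sqlite3
from cravat import BaseAnnotator   # import path assumed from the module template


class CravatAnnotator(BaseAnnotator):
    def setup(self):
        # Open the module's bundled annotation data (path convention assumed).
        data_path = os.path.join(os.path.dirname(__file__), 'data', 'my_scores.sqlite')
        self.conn = sqlite3.connect(data_path)
        self.cursor = self.conn.cursor()

    def annotate(self, input_data, secondary_data=None):
        # Look up a hypothetical per-position score for the incoming variant.
        self.cursor.execute(
            'SELECT score FROM scores WHERE chrom = ? AND pos = ?',
            (input_data['chrom'], input_data['pos']))
        row = self.cursor.fetchone()
        return {'score': row[0]} if row else None

    def cleanup(self):
        self.conn.close()
```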
The Filter tab allows users to generate and save filters, which identify variants in selected samples or genes, by population allele frequency range, genomic location, sequence ontology, or custom annotator-specific thresholds. For example, after installation of the gnomAD module, users may choose to annotate their sample with gnomAD allele frequency and then use the Filter tab to reduce their analysis to variants with allele frequency < 0.01. For more complex filtering tasks, the Query Builder allows users to build advanced SQL queries on the Filter tab of the interactive results viewer. OpenCRAVAT can be installed locally on a user's computer or on a server, allowing multiple users to submit annotation runs on the same system, with administrator monitoring and maintenance. The server implementation adds user authentication, user-specific storage, user access to history, and shared access to analysis and visualization results. Server installation can be performed either on a shared local system or in a cloud environment, where results storage can be controlled and protected data are secure. The entire catalog of resources can be stored in one place and shared among many users, in addition to analysis results. RESULTS In the following case studies, we illustrate the capacity of OpenCRAVAT to evaluate phenotypically relevant genetic variation within inputs of differing size and composition. Case Study 1: Variant Prioritization in Multiple Lesion Cancer Samples Among the somatic variants present in a tumor, a small number of mutations are believed to "drive" tumor growth and may be useful for diagnosis, prognosis, patient stratification, clinical trial eligibility, and selection of appropriate therapies. Of particular interest are clonal driver mutations that occurred in the initiating tumor cell and are present in all tumor cells. Identification of these originating mutations can be enhanced by evaluating mutations from multiple tumor biopsies, including precursor lesions, primary cancers, and metastases from a single patient. In this case study, we investigated early candidate driver mutations in a patient with high-grade serous ovarian cancer (CGOV62), using VCF and BAM files from a published genomic study of high-grade serous ovarian cancers, including fallopian tube precursor lesions; fallopian tube and ovarian tumors; and omental, rectal, and appendiceal metastases; with a normal fallopian tube epithelium control sample. 12 BAM files from whole-exome sequencing were downloaded from the European Bioinformatics Institute (EGAS00001002589), and VCF files were generated with MuTect v.1.1.7 using default parameters. 13 The analysis was carried out using the Query Builder (Fig 3A) by:
1. Installing cancer-related annotation modules (Cancer Gene Census 14 and Cancer Gene Landscapes 15), computational predictors (CHASMplus OV 16 and MutPred 17), and a visualization module (IGV).
2. Within the interactive interface, selecting the genome version used in the study (hg18), uploading VCF files for each biopsied lesion, selecting the annotators listed in step 1, and clicking the Annotate button.
3. On the Filter tab, filtering by sample to exclude any germline variants that were present in the normal fallopian tube epithelium sample.
4. To focus on loss-of-function mutations in tumor suppressor genes (TSG) and missense mutations in oncogenes (OG), applying a Sequence Ontology filter to select either (missense, splice site, frameshift and nonframeshift indels, and stop gain) for TSG or (missense and nonframeshift indels) for OG.
5. Retaining mutations within known OG and TSG as provided by either the Cancer Gene Landscapes or the Cancer Gene Census.
Nine mutations were retained after applying these filters, of which 2 were likely clonal mutations: RANBP2:p.M933I and TP53:p.T126N. These mutations were observed in seven of the eight lesions. In the original study, the TP53 mutation was found in an eighth lesion by deep targeted sequencing. The TP53 mutation is a known driver, with a CHASMplus OV P value < .01, and is predicted by MutPred to result in loss of sheet structure (P = .0457). The NDEx widget was used to explore interaction partners of the mutated proteins, and the NDEx enrichment tool identified 13 TP53-associated networks from the National Cancer Institute Pathway Interaction Database 18 (Fig 3B). For each truncal mutation, the normal and tumor BAM files were loaded into IGV for viewing and manual validation (Fig 3C). Manual inspection verified that the mutation was truly somatic, that it was not present in normal tissue (data not shown), and that there was no apparent strand bias. Case Study 2: Identifying Driver Missense Mutations Among Metastases We analyzed exome and genome sequencing data for 76 untreated metastases from 20 patients with breast, colorectal, endometrial, gastric, lung, melanoma, pancreatic, and prostate cancers from a recent study on the heterogeneity of functional driver mutations in cancer metastases. 19 This analysis was performed by:
1. Installing the CHASMplus annotator to score mutations as likely cancer drivers and tsvreporter to generate simple tab-delimited output.
2. Assembling a tab-delimited file of 15,765 somatic mutations identified in the study by Reiter et al 19.
3. Using the command-line interface to generate a CHASMplus score for each mutation: cravat reiter_et_al_2018.txt -n Reiter_2018 -t tsv -l hg19 --cleanup -d output.
4. Running a Python script, fdr.py, that took in the output file and created a q-value for each mutation, correcting the CHASMplus P value for multiple hypothesis testing using the false discovery rate.
In total, 56 mutations were predicted as drivers, with a significant q-value (q < 0.01). These included well-known oncogenic alleles (KRAS:p.G12D, SMAD4:p.D351G, and PTEN:p.R173H 20). There are 6 KRAS mutations present in these samples, including two that have been observed in more than a single sample (KRAS:p.G12D and KRAS:p.G12V; Fig 4A). The NDEx widget shows that the KRAS and PTEN variants both affect the "Class I PI3K signaling events" network (Fig 4B). All data and code needed to replicate the analysis are available at the OpenCRAVAT website 21. Case Study 3: Clinically Actionable Germline Variants in an Individual Genome We identified germline variants that are suspected to be relevant to cancer in a phenotypically normal individual obtained from the Personal Genome Project (Profile hu3BDC4B). 22,23
For this analysis, we used databases of single-nucleotide variations (SNVs), indels, and genes with relevance to cancer, including hereditary predisposition: ClinVar, 24 PharmGKB, 25 and the ClinGen Gene annotator, which includes gene-disease associations curated by the ClinGen consortium. 26 The findings for each annotator are as follows:
1. The ClinVar annotator identified dozens of variants relevant to cancer. Variants with the highest potential for clinical relevance include a variant that is protective for lung cancer, two risk-factor variants (lung cancer and cutaneous malignant melanoma), 5 pathogenic noncoding SNVs (acute myeloid leukemia [AML] with maturation), a pathogenic intronic SNV in EHBP1 associated with hereditary prostate cancer, and 16 drug-response variants that affect the dosage, efficacy, toxicity/adverse drug reaction, or response to various cancer drugs.
2. The ClinGen Gene annotator identified variants within 43 genes related to cancer phenotypes. Of these, the most impactful variant was a frameshift deletion in PALB2, which ClinGen has identified to be related to "familial ovarian cancer; hereditary nonpolyposis colon cancer; hereditary breast carcinoma; Fanconi anemia complementation group." An additional 25 genes related to breast, ovarian, colon, and colorectal cancers are affected by missense variants.
3. The PharmGKB annotator identified two variants. First, an intronic variant in GLDC was associated with increased response to citalopram and escitalopram in people with major depressive disorder. GLDC had been annotated by the ClinGen Gene module as associated with glycine encephalopathy. Second, a 3′ UTR variant of ENOSF1 was associated with response to methotrexate.
The majority of variants in this patient have no known clinical relevance. Among the variants highlighted by the ClinVar module, only a single variant, related to hereditary prostate cancer, may be suitable to consider informing the patient about to encourage early intervention. The ClinGen Gene module does not appear to be of clinical utility for this patient, with the potential exception of the frameshift deletion affecting PALB2, which has been associated with susceptibility to several cancer types. If the individual receives pharmacologic treatment for cancer in their lifetime, the variant-drug annotations from PharmGKB and ClinVar may have clinical utility. Case Study 4: Occurrence of Somatic Mutations Within Molecular Subgroups Among 182 Patients With AML For genetically heterogeneous cancer types such as AML, partitioning patients into clinical subgroups based on their genomic alterations carries significant prognostic implications. Individuals with AML have previously been partitioned into 11 genomic subgroups based on patterns of comutation. 27,28 In this case study, we assessed the prevalence of these clinical subgroups using somatic mutations from 182 patients with AML, sequenced by The Cancer Genome Atlas and obtained from the Genomic Data Commons (gdc.cancer.gov). Genomic subgroups that were defined by inversions, translocations, and gene fusion events were omitted from this analysis because these variant types are not currently supported by OpenCRAVAT. Of the 182 total patients in this cohort, 76 were assigned into the 5 molecular subgroups based on protein-coding somatic mutations. The remaining 50 patients most likely harbored inversions, translocations, and/or gene fusion events.
We observed that 11 of the total 49 mutations occurred in more than 1 patient and may reflect recurrent driver mutations with prognostic value. DISCUSSION OpenCRAVAT is a flexible and dynamic system to annotate, evaluate, and visualize the characteristics of genetic variation. It has been designed to enable rapid characterization of variants, including functional impact, pharmacologic annotations, and both known and predicted relevance of genetic variants to disease, including cancer. The open store contains dozens of resources relevant to variant interpretation, with new additions weekly. Selection of specialized converters, annotators, and filtering criteria enable researchers to carry out complex analyses and integrate information from a wider array of resources than previously possible. We have described a framework that includes both an advanced GUI for biologists and a command-line interface that supports advanced use cases, including development of custom bioinformatics pipelines. Both GUI and command line can be leveraged in the cloud to handle processing of genomes from large patient populations. Finally, because the OpenCRAVAT store is designed to be community driven, we have incorporated more than 100 tools from dozens of universities and institutes in the past year, and we are actively recruiting tool and resource developers.
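As a methodological footnote to Case Study 2, the q-value computation performed by the fdr.py script is presumably a Benjamini-Hochberg false-discovery-rate correction; a minimal sketch under that assumption (not the authors' code) is:

    # Benjamini-Hochberg q-values: a sketch of what a script like fdr.py
    # plausibly does; not the authors' exact implementation.
    import numpy as np

    def bh_qvalues(pvals):
        p = np.asarray(pvals, dtype=float)
        n = len(p)
        order = np.argsort(p)
        ranked = p[order] * n / np.arange(1, n + 1)          # p * n / rank
        ranked = np.minimum.accumulate(ranked[::-1])[::-1]   # enforce monotonicity
        q = np.empty(n)
        q[order] = np.clip(ranked, 0, 1)
        return q

    pvals = [0.0001, 0.03, 0.4, 0.0005]                  # e.g. CHASMplus P values
    q = bh_qvalues(pvals)
    drivers = [i for i, qi in enumerate(q) if qi < 0.01] # threshold used in the study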
4,384.8
2020-03-01T00:00:00.000
[ "Medicine", "Computer Science", "Biology" ]
Simulated and measured piezoelectric energy harvesting of dynamic load in tires From 2007 in the US and from 2022 in the EU it has been mandatory to use TPMS monitoring in new cars. Sensors mounted in tires require a continuous power supply, which currently comes only from batteries. Piezoelectric energy harvesting is a promising technology for harvesting energy from tire movement and deformation, to prolong battery life or even avoid batteries inside tires altogether. This study presents a simpler method to simultaneously model the tire deformation and the piezoelectric harvester performance by using a new simulation approach, the dynamic bending zone. For this, angular and initial velocities were used for the rolling motion, while an angled polarization was introduced in the model for the piezoelectric material to generate the correct voltage from tire deformation. We combined this numerical simulation in COMSOL Multiphysics with real-life measurements of the electrical output of a piezoelectric energy harvester mounted onto a tire. This modelling approach allowed for a 10-fold decrease in simulation time as well as simpler investigation of the system parameters influencing the output power. By using experimental data, the simulation could be fine-tuned for material properties, and the tire deformation and harvested output energy from simulations done at low velocity could more easily be extrapolated to the high-velocity experimental data. Introduction The ever-increasing energy requirements pose one of the greatest technological challenges of our time. With an annual increase of around 1-2% in human energy use, the majority of energy comes from fossil fuels [1], which negatively impacts the environment. To address this challenge, and as digitalization increases in many applications, alternative green sources of energy are emerging. One of the technologies that can aid in using less energy is energy harvesting, which converts ambient energy into electrical energy to power low-power sensor systems [2][3][4][5]. This approach towards zero-energy devices makes micro devices energy autonomous so they can be placed in hard-to-reach locations, thereby positively impacting system installation and maintenance costs and time. It also helps in reducing the environmental impact by minimizing the raw materials required for cable manufacturing and reducing the number of batteries thrown away [6]. The use of energy harvesting for powering sensing devices in tires is one such application that can enhance driving safety and automatic drive control [7][8][9][10]. Due to regulations, from 2007 in the US [11] and from 2022 in the EU [12], it is mandatory to use tire pressure monitoring systems (TPMS) in new cars. All TPMS today use a battery as the energy source, which is a burden on the environment. To avoid using a battery inside the tire, energy harvesting is a green power solution with promising results, as energy can be harvested from tire compression, deformations and vibrations when driving [13][14][15]. The energy harvester can be placed at the rim, the inner liner, or the side of the tire, with the inner liner preferred nowadays.
Initially, the main type of harvester was the piezoelectric PZT-based cantilever beam for TPMS [16,17]. Different cantilever structures/geometries were tested and, while they provided a certain amount of energy (a few μW/cm² at 50 km/h), they have two major drawbacks. Firstly, there is limited space between the cantilever's gap and the tire's deflection, which limits the length of the PZT cantilever beam and its deflection height, and thus the energy output. Secondly, PZT is highly brittle, limiting the applied strain, and the beam is prone to break easily as the tire moves and deforms. Esmaeeli et al. [18] tried to improve strain-based piezoelectric harvesters by designing a cymbal shape, obtaining 95 μJ per revolution at 41 km/h and about 600 kg load, with an efficiency of ≈5%. Another piezoelectric material is poly(vinylidene fluoride) (PVDF), which is flexible and enables large deformation, making it a good candidate for tire applications even though it has a lower piezoelectric constant than ceramic PZT [19][20][21]. Lee and Choi [22] used PVDF film, obtaining 380 μJ per revolution at 60 km/h and 500 kg load, converting and using approximately 9.7% of the available energy. To design a kinetic energy harvester, various software tools and methodologies are utilized, such as the finite element method (e.g. COMSOL, Ansys) and analytical equations (e.g. Matlab). Each harvester requires its specific simulation, depending on the application's requirements and specifications. Usually, the piezoelectric cantilever/beam harvester is subjected to base excitation arising from the radial deformation of the tire and centripetal acceleration due to the tire rotation [17], and from tire strain [18]. Simulations of tire deformation and energy harvester voltage output can be done analytically [8,17,18,23] and numerically. For example, K. Anil [24] uses Matlab/Simulink software for modelling the design of piezoelectric ceramic placement at the tire-rim interface, with tire deformation introduced as a variable force; M. M. Behera [25] modelled the PZT energy harvester in MATLAB Simscape and simulated it with the tire using COMSOL Multiphysics 5.0 for output voltage. However, all these simulations consider a static model of the deformation. In this work, we present a new modelling procedure to simulate the combination of tire deformation and its influence on the attached piezoelectric harvester simultaneously and dynamically, the dynamic bending zone: only the tire's bending zone (when in contact with the ground surface) moves over the tire and harvester, instead of the whole tire rolling. This allows for much shorter simulation times; a more realistic simulation and modelling of the deformation and its influence on the harvester's deformation; simpler investigation of system parameters influencing the output power (e.g. harvester geometry, type of piezoelectric material, tire pressure, car weight); and, at the same time, easier correlation with and improvement from experimental results. The experimental data allow for fine-tuning of the simulation in relation to material properties (e.g. Young's modulus for the tire and the piezoelectric material). Thus, via experimental data at low velocities, it is easier in the simulation to extrapolate the tire deformation and harvested energy obtained at low velocity to high velocity. This method also allows various shapes of the harvester to be simulated and benchmarked against experiments. None of the above capabilities can easily be achieved using the existing state-of-the-art methods.
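For context, the per-revolution energies quoted above can be converted into average power; a quick sketch, assuming the 2.39 m tire circumference used later in this paper (the speeds and energies are the literature values cited above):

    # Back-of-envelope average power from energy per revolution.
    # Assumes the 2.39 m circumference given in the modelling section.
    def avg_power_uW(energy_per_rev_uJ, speed_kmh, circumference_m=2.39):
        revs_per_s = (speed_kmh / 3.6) / circumference_m
        return energy_per_rev_uJ * revs_per_s   # uJ/s equals uW

    print(avg_power_uW(95, 41))    # cymbal PZT [18]: ~450 uW
    print(avg_power_uW(380, 60))   # PVDF film [22]: ~2650 uW (~2.7 mW)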
Modelling and simulations: tire and energy harvester A simple model of a tire was first simulated in COMSOL Multiphysics 5.0 to determine its mechanical properties. The model consisted of a structural steel cylinder surrounded by a thin layer of rubber with a Young's modulus of 50 MPa and a Poisson's ratio of 0.3, resting entirely on the ground. To simulate a real tire and avoid a flat roll area between the ground and the tire, two angles 'β' and 'Ω' were introduced between the roll area and the side wall (Fig. 1(a)). At first, the tire was simulated in a stationary state with a body load of 500 kg distributed over an area approximating the contact patch of the tire, using cosine and sine factors to apply the load in the correct direction. The contact area of the tire with the ground was approximated to be 10 cm long (Fig. 1(b)), the vertical deformation 0.36-0.8 cm, and the entire circumference of the tire 2.39 m, which corresponds to 2π radians. This allowed the 10 cm contact length to be converted to an angle α ≈ 0.264 radians (Fig. 1(a)). The load was applied in the direction perpendicular to the centre of the contact patch, defined by n = -(r·cos(Ωt), 0, r·sin(Ωt)), where r is the radius to the outer edge, Ω is the angular velocity of the tire and t is the time. In the interest of simplicity, the car load was applied to only one quarter of the tire. To ensure a non-radial load, the angle of the load was defined by the normal to the middle of the surface defined by α. When simulating the curved tire, the angular velocity was calculated to match the velocities that were going to be used during the experimental phase. To simulate rolling motion, angular velocity and initial velocities along and across the ground were used. However, these simulations were time-consuming and required numerous iterations to avoid errors. Thus, the model was changed such that the force load (simulating the ground) was made to move over the tire instead of moving the tire itself (Fig. 2). To model the pressure inside the tire, a cavity was added in the middle of the tire to represent the area where air is inflated in real tires. A standard pressure value of 2 bar was chosen, which was modelled radially outward in the quarter of the tire where the car load was applied. At the edges of the simulated quarter-tire, a boundary condition was applied to prevent azimuthal deformations while still allowing radial deformations. A similar boundary condition was applied on the inner edges of the tire wall, which rest on the rim; in this case, however, it fixes the position in all directions. The flexible PVDF-TrFE piezoelectric harvester component was added as a 0.01 cm thick layer, 5 cm long (β = 0.132 radians, Fig. 1(a)) and 2 cm wide. The entire meshed geometry of both the tire and a rectangular harvester can be seen in Fig. 3(a). Rounded corners had to be introduced for square/rectangular harvesters (Fig. 3(b)) to avoid the stress concentrations that 90° corners create, which resulted in easier meshing and faster solving times. The ground was defined as the inner edge of the piezoelectric PVDF-TrFE harvester, and to simulate the open-circuit voltage, a load resistor of 10 MΩ in parallel with the harvester was added to the model. The PVDF-TrFE harvester is polarised for d33, and its material properties are approximated from the literature with a d33 value of -33.8 pC/N. The material data for the rubber were estimated from the literature with a Young's modulus of 0.19 GPa (as tire rubber is a mixed combination of different types of materials).
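The contact-patch geometry just described can be checked numerically; in the short sketch below, the numerical values come from the text, while the sine component of the load direction reflects our reconstruction of the rotating load vector (the printed expression is ambiguous):

    # Reproducing the tire-model geometry quoted in the text.
    import numpy as np

    circumference = 2.39          # m, outer circumference of the tire
    contact_len   = 0.10          # m, flat contact patch length
    alpha = 2 * np.pi * contact_len / circumference
    print(alpha)                  # ~0.263 rad, close to the quoted 0.264 rad

    r = circumference / (2 * np.pi)   # outer radius, ~0.38 m
    omega, t = 10.0, 0.05             # rad/s and s, illustrative values only
    # Load direction normal to the contact-patch centre; the sine component
    # is our reading of the rotating normal, not a verbatim quotation.
    n = -np.array([r * np.cos(omega * t), 0.0, r * np.sin(omega * t)])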
An angled polarization was introduced in the model for the piezoelectric material, oriented inward toward the centre of the tire as the d33 direction, to generate accurate voltage from deformation. The dependence of the harvester's geometrical shape on the output voltage was also investigated, comparing rectangular and circular shapes while maintaining the same area. The area itself was also varied, allowing for an investigation of area dependency rather than solely length or thickness dependency. Simulation results The simulations of a rotating tire in contact with a physical ground took a long time even for very low velocities (e.g. 0.5 m/s), but by changing the methodology to have the load moving over the tire, higher velocities could be simulated within reasonable simulation times. These results could be successfully correlated with experiments at various velocities. The simulated tire deformation and displacement behaviour acting on the PVDF-TrFE harvester are presented in Fig. 4(a-f). By collecting all displacements from the force load moving over the harvester area, the value of the total displacement was obtained. Curve fitting was performed on these values to obtain equations that described the displacement with the best curve fit, as shown in equation (1). In Fig. 4(a-f), the behaviour of the deformation on and along the tire, and thus on the harvester, can be seen during the time evolution of the tire in contact with the ground. The output voltage from COMSOL Multiphysics is directly scalable with the weight load over the surface; an increase in car load from 200 kg to 500 kg resulted in a voltage output increase by a factor of 2.5. However, increasing the force load's velocity over the tire surface resulted in a non-linear increase in the voltage generated by the PVDF harvesters, following a 2nd-order equation up to a fixed value where it remained constant (not shown here). Rectangular and circular geometrical shapes were analysed using both COMSOL Multiphysics simulations and real-life experiments. The simulations were performed keeping the harvester polarization angle and the harvester footprint areas constant. The resulting voltages show a similar behaviour to the experimental results, with a higher output open-circuit voltage for the circular harvester (Fig. 5). Various circular areas were simulated, showing that no voltage increase occurs beyond a certain diameter (Fig. 6). Measurement method The PVDF-TrFE harvesters were fixed onto the tire using double-sided tape and connected to a measurement electronics module that also incorporates a Bluetooth Low Energy (BLE) unit for sending the data wirelessly during tire measurements (Fig. 7). Testing of the energy harvester on the tire was done in the specialised lab at Nokian Tyres PLC with conditions as close as possible to the real-life environmental situation: tire pressure, car load, velocities (Fig. 8). Results and discussion Experimental tests were performed at the Nokian Tyres PLC test facility to characterize and select the best energy harvester regarding voltage and energy output. Measurements were conducted with a car load of 200 kg and 2 bar pressure. Various designs were tested (e.g. one layer or multiple layers; square/rectangular/circular). Typical behaviour, charging and saturated voltages, and energy for a three-layer 2 × 7 cm² harvester at velocities ranging from 10 km/h to 120 km/h are shown in Fig. 9.
The maximum accumulated energy was approximately 22 μJ. Another example of harvester design and performance is a one-layer circular harvester, with energy and saturated voltages at different velocities (10 km/h to 40 km/h) shown in Fig. 10: 7.1 V, 8.3 V, 9.1 V, and 10 V, respectively, with a maximum accumulated energy of 23 μJ. It is seen in these two figures that for lower velocities the voltage reaches a saturation value/plateau much faster than for higher velocities. This can be due to variation of self-leakage in the capacitor. From the experiments, the charge rate for the 1 cm² harvester was 1.65 nC/s, while for the 3-layer harvester it was 69.3 nC/s, both at 20 km/h. The charge rate per area, in nC/(s·cm²), is the same for both rectangular and square harvesters regardless of their dimensions and number of layers, as presented in Table 1. The charge rate per area for the circular shape is nearly 10 times higher compared to the rectangular one, most probably because a bigger area is utilized for active harvesting (this is under further investigation). There is a small discrepancy between the simulated voltages and the measured output values (Fig. 5 vs Figs. 9 and 10). This difference in output voltage is most likely due to various factors, such as the adhesive used for mounting the harvester onto the tire, which was not simulated and which, as we could see, did not keep the harvester in place (there was a shift/slide in position), and the actual values of the tire's Young's modulus and Poisson ratio given its complex materials combination. To improve the simulation with respect to these issues, we utilized simulation and measurement data for the same harvester geometry and velocities to calculate the factors between the two data sets and derived the best equation fit. Thus, we obtained a very good match over the range 10-40 km/h, allowing the voltage values to be estimated for higher speeds than simulated and/or measured, as can be seen in Fig. 11. Moreover, this type of simulation-and-experiment equation-fit procedure can be done once with low-velocity measurements, after which only simulations are needed at higher velocities for PVDF energy harvesters (without the need for experiments at higher velocities). Conclusion The paper describes an easier model and simulation strategy such that both the tire deformation and the output energy from a flexible piezoelectric harvester mounted on the inner tire can be simulated simultaneously. In this 'dynamic bending zone' approach, the force simulations could be done with high confidence for various harvester designs and higher velocities without the need for experimental data. The simulations and measurements have given insights into the characterization and behaviour of PVDF-based harvesters for tire applications. They show that the geometrical shape has a large impact on the output, with circular harvesters showing higher saturated voltages and charge rates compared to rectangular and square harvesters. The measured output energy is in principle enough to power some types of sensors already in use for characterizing tire behaviour and its environment. Overall, the study provides valuable insights into the design and optimization of piezoelectric harvesters for energy harvesting in tires.
By using the dynamic bending zone, the load moves over the tire and harvester instead of the tire rolling, facilitating converging simulation results within realistic time. The model is flexible, allowing easy modification of parameters for the tire and harvester geometry, materials, and conditions. The simulated model shows a very good relation to the experimental results. Even if the first simulations are done at low tire velocity and without exact material values, the feedback from experiments permits improvement of the simulation model.

Declaration of competing interest The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Cristina Rusu reports financial support was provided by the European Commission. The other authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Fig. 1. (a) Tire ground deflection circle arc in radians, where 'α' and 'β' represent the contact length and the energy harvester length, respectively; (b) estimation of the flat area during deformation.
Fig. 2. COMSOL simulation of the tire where the deformation load is moved over the surface of the tire.
Fig. 3. (a) Meshed tire geometry; (b) the finer mesh needed around the harvester (at the corners compared to the straight section).
Fig. 5. Simulated peak open-circuit voltage values at different velocities for different geometrical shapes; the circular harvester has a higher output compared to the rectangular one. The total area of each shape is the same, 4 cm².
Fig. 6. Simulated open-circuit peak voltage at different velocities for different circular harvester sizes.
Fig. 7. PVDF-TrFE harvester film attached to the inside of the tire with double-sided tape and connected to the measurement electronics and BLE module.
The measurement electronics have a 470 μF capacitor and a 10 MOhm matching resistance, and they measure various electrical parameters for harvester characterization: charging slope, open-circuit voltage, and voltage output. The saturated voltage of the capacitor represented the maximum voltage response of the PVDF harvesters as a function of speed.
Fig. 8. Schematic of a typical drum setup for tire testing.
Fig. 10. Measured harvester voltage and energy for a single-layer 2 cm diameter circular harvester with 2 bar tire pressure, 200 kg car load and velocities from 10 to 40 km/h.
Fig. 11. Graph showing the measured voltage for 10-40 km/h and the simulated data multiplied by the factor equation for 10-120 km/h.
Table 1. Comparison of the measured voltage and charge rate per area at 20 km/h.
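As a numerical appendix, the simulation-to-measurement fine-tuning behind Fig. 11 can be sketched as follows; all arrays are placeholder values, not the measured data:

    # Sketch of the fine-tuning step: fit a velocity-dependent correction
    # factor between simulated and measured voltages at low speed, then use
    # it to extrapolate simulations to higher speeds. Numbers are placeholders.
    import numpy as np

    v_low       = np.array([10.0, 20.0, 30.0, 40.0])  # km/h, measured range
    v_measured  = np.array([7.1, 8.3, 9.1, 10.0])     # V, circular harvester (Fig. 10)
    v_simulated = np.array([5.0, 6.2, 7.0, 7.8])      # V, hypothetical model output

    factor = v_measured / v_simulated                 # per-velocity correction
    correction = np.poly1d(np.polyfit(v_low, factor, 2))  # 2nd-order fit, as in the text

    v_high     = np.array([60.0, 90.0, 120.0])        # km/h, beyond measurements
    v_sim_high = np.array([9.0, 10.5, 11.5])          # hypothetical simulated values
    v_estimate = v_sim_high * correction(v_high)      # extrapolated estimate (cf. Fig. 11)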
4,221
2024-04-01T00:00:00.000
[ "Engineering", "Materials Science" ]
Block copolymer microparticles comprising inverse bicontinuous phases prepared via polymerization-induced self-assembly Scalable preparation of micrometer-sized diblock copolymer particles exhibiting complex internal structure is achieved by RAFT-mediated polymerization-induced self-assembly (PISA). Introduction It is well-known that AB diblock copolymers undergo spontaneous self-assembly to form ordered nanostructures both in the bulk 1 and also in solution. 2 Microphase separation is primarily driven by an unfavorable enthalpic interaction between the two blocks, which outweighs the relatively small entropy term. Block copolymer self-assembly in the bulk is usually conducted by annealing the initial structure above the glass transition temperature to achieve thermodynamic control and hence equilibrium morphologies. It is well-known that this strategy provides access to a wide range of morphologies, including 2D or 3D periodic structures with long-range order. 1,3 In addition, traditional post-polymerization processing routes in dilute solution usually lead to the formation of spherical micelles, 4,5 cylindrical micelles, 6,7 lamellae 8 or vesicles. 9,10 Considerable attention has been devoted to the direct formation of structurally complex block copolymer nanoparticles using phase inversion. 11 For example, linear, 12-15 comb-like [16][17][18] or dendritic 19,20 amphiphilic diblock copolymers can self-assemble to form inverse bicontinuous phases in dilute solution. These aggregates are structurally similar to lipid cubosomes, and offer potential applications for templating, separation, catalysis and controlled release systems, etc. [21][22][23] According to the packing parameter concept introduced by Israelachvili and co-workers for small-molecule surfactants, 24 inverted phases can be obtained when the packing parameter P exceeds unity. The packing parameter is defined by the equation P = v/(a0·lc), where a0 is the area of the hydrophilic head-group and v and lc are the volume and length of the hydrophobic component, respectively. Originally introduced to account for the diverse range of surfactant structures that can be formed in aqueous solution, this purely geometric concept was subsequently extended to include block copolymer self-assembly. 25,26 Thus, targeting a highly asymmetric amphiphilic diblock copolymer (e.g. a very long hydrophobic block coupled with a relatively short hydrophilic block) provides a P value greater than unity. Moreover, the copolymer architecture also has a profound influence on the packing parameter. For example, block copolymers possessing a branched hydrophobic block show a strong tendency to form inverse morphologies in dilute solution. 27 Recent advances in synthetic polymer chemistry such as controlled radical polymerization 28 and click chemistry 29,30 have enabled precise control over copolymer composition and architecture. Nevertheless, the traditional approach used to prepare inverse copolymer morphologies remains both time-consuming and inefficient. Typically, extremely long annealing times are required to attain thermodynamically stable inverse structures. Moreover, the relatively high molecular weight of the hydrophobic block means that the required highly asymmetric block copolymer chains often become kinetically trapped during the processing step, which can prevent convenient access to well-defined inverse morphologies. 31
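As a minimal illustration of the packing parameter argument above (the numerical inputs are placeholders, not measured copolymer dimensions):

    # Packing parameter P = v / (a0 * lc), as defined in the text.
    # Input values are illustrative only.
    def packing_parameter(v_nm3, a0_nm2, lc_nm):
        return v_nm3 / (a0_nm2 * lc_nm)

    P = packing_parameter(v_nm3=1.3, a0_nm2=0.60, lc_nm=1.8)   # -> ~1.2
    if P > 1.0:
        print("P > 1: inverted phases expected")  # regime targeted in this work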
In addition, copolymer concentrations are invariably rather low (typically below 1.0% w/w), which unfortunately precludes many, if not all, potential commercial applications. Over the past decade, polymerization-induced self-assembly (PISA) has provided a versatile new approach for the efficient preparation of block copolymer nano-objects in the form of concentrated colloidal dispersions. 32-38 PISA typically involves synthesizing a soluble homopolymer via RAFT solution polymerization 39-41 followed by chain extension of this precursor with a second block that becomes insoluble when it exceeds a certain critical length. This leads to the in situ formation of sterically-stabilized diblock copolymer nanoparticles. Classical morphologies (e.g. spheres, worms, lamellae or vesicles) similar to those obtained via traditional post-polymerization processing routes can be readily prepared via PISA at up to 50% w/w solids. 42 Moreover, highly convenient "one-pot" protocols have been reported for some PISA formulations. [42][43][44][45][46] However, as far as we are aware, there are only a few studies of the PISA synthesis of inverse morphologies such as hexagonally-packed hollow hoops (HHH) 47 and porous nanospheres, 48-50 which have been reported for polystyrene-based diblock copolymers. Unfortunately, such formulations typically suffer from relatively slow rates of polymerization, which inevitably result in substantially incomplete monomer conversions. For example, less than 15% styrene conversion was achieved after 48 h at 80 °C when preparing the HHH phase. 47 This problem renders such PISA formulations unsuitable for commercial scale-up owing to the prohibitive cost of removing unreacted styrene monomer. In the present study, we report the PISA synthesis of various inverse structures in the form of microparticles at 20% w/w solids using a much more efficient PISA formulation. Previously, we reported the preparation of conventional diblock copolymer nanoparticles via RAFT dispersion alternating copolymerization of styrene (St) with N-phenylmaleimide (NMI) utilizing a 50:50 w/w ethanol/MEK mixture and a non-ionic poly(N,N′-dimethylacrylamide) (PDMAC) stabilizer. 51 The resulting P(St-alt-NMI) core-forming block has a relatively high Tg (219 °C), 52 which leads to the formation of oligolamellar vesicles (OLV) during PISA, as well as the more typical sphere and worm phases. The MEK co-solvent aids solubilization of the NMI monomer and solvates the core-forming block within the growing diblock copolymer micelles, thus enhancing the mobility of the growing P(St-alt-NMI) chains. Taking advantage of such high chain mobility, herein we target more asymmetric diblock copolymer compositions to investigate self-assembly beyond the bilayer phase. Three inverse bicontinuous phases were formed by such PDMAC-P(St-alt-NMI) chains, with each morphology depending on the relative volume fraction of the P(St-alt-NMI) core-forming block. Moreover, intermediate species observed by TEM provide useful insights regarding the likely mechanism for the evolution in copolymer morphology during these PISA syntheses. Furthermore, TEM studies of selected ultramicrotomed microparticles reveal complex internal structures owing to their inverse bicontinuous phases. Finally, we demonstrate that it is possible to fabricate such nanostructured microparticles via PISA using a highly convenient "one-pot" synthetic protocol.
Results and discussion The PDMAC stabilizer block DP was fixed at 48, and all RAFT dispersion alternating copolymerizations were conducted at 20% w/w solids in a 50:50 w/w ethanol/MEK mixture (see Scheme 1). These syntheses were much more efficient than the styrene-based methanolic formulations reported in the literature: 47-49 more than 90% conversion was achieved in all cases within 10 h at 70 °C (see Table 1). DMF GPC studies indicated unimodal but relatively broad molecular weight distributions (see Fig. S1, ESI†). However, these Mw/Mn values are comparable with those previously reported for the same PISA formulation, 51 especially given that higher degrees of polymerization are targeted in the present study. The somewhat broader molecular weight distribution is most likely owing to relatively slow activation of the PDMAC48 stabilizer block rather than loss of RAFT end-groups, as previously suggested. 51,53 GPC studies indicate a high blocking efficiency for these diblock copolymers, suggesting minimal homopolymer contamination. In principle, polydisperse diblock copolymers may behave differently to near-monodisperse copolymers in terms of their microphase separation. However, it is well-known that broad copolymer molecular weight distributions do not prevent (and may well aid) block copolymer self-assembly in the solid state. [54][55][56] According to our earlier study, targeting a PDMAC48-P(St-alt-NMI)350 composition produced micrometer-sized ellipsoidal particles (see Fig. 1a). 51 Small-angle X-ray scattering (SAXS) analysis indicated that these relatively large particles comprised oligolamellar vesicles (OLV), comprising two or three stacked lamellae on average. 51 This finding is consistent with the higher-contrast (darker) regions observed by TEM in the present study (see Fig. 1f). Interestingly, targeting a P(St-alt-NMI) DP of 450 overshoots the OLV phase, generating polydisperse ellipsoidal particles with internal morphologies (see Fig. 1b). These ellipsoidal particles are relatively large: laser diffraction studies indicated a 'sphere-equivalent' volume-average diameter of around 9 μm, compared to just 2.6 μm diameter for the previously-reported OLV nano-objects. 51 Internal structure can be clearly observed at higher TEM magnification (see Fig. 1g), with evidence for spherical (or elongated spherical) domains ranging from ca. 30 to 290 nm diameter. As far as we are aware, this morphology has not been reported previously for PISA syntheses, so we suggest the term perforated ellipsoidal lamellae (PEL). Further increasing the target DP of the core-forming block to 550 led to the formation of even larger ellipsoidal particles with a 'sphere-equivalent' diameter of around 12 μm (see Fig. 1c and d). Their internal morphology is difficult to characterize by TEM using an accelerating voltage of 80 kV (see Fig. 1h). However, using a 200 kV TEM instrument increases the penetration depth of the electron beam significantly, revealing a complex bicontinuous internal morphology (see Fig. 1d and i). Finally, ellipsoids with a 'sphere-equivalent' diameter of 14 μm were obtained when targeting a core-forming block DP of 600. However, TEM studies could not reveal the internal morphology of these microparticles (see Fig. 1e, j and S2, ESI†). Each of these dispersions formed turbid pastes at 20% w/w solids.
Given the difficulty in analyzing their internal structure, the larger microparticles were embedded in an epoxy resin and ultramicrotomed to produce thin cross-sections for TEM studies. According to Fig. 2 (see also Fig. S3, ESI†), each ellipsoid is hollow: the darker regions correspond to the P(St-alt-NMI) copolymer, while the lighter regions represent internal voids. In general, increasing the DP of the P(St-alt-NMI) core-forming block produces smaller voids. This is in good agreement with the greater electron opacity observed for such particles in the initial TEM studies (see Fig. 1). The internal structure is illustrated at higher magnification in Fig. 2d-f. At first sight, the internal segregation observed for the PEL (see Fig. 2d) appears to be analogous to the well-known gyroid phase reported for block copolymers in the bulk 1 (see Fig. 2h). This corresponds to an inverted mixed phase comprising worms and lamellae (Fig. 2d). Similarly, the bicontinuous ellipsoid (BE) particles (see Fig. 2e) seem to correspond to an inverse worm phase (see Fig. 2j), with PDMAC stabilizer chains forming the cores surrounded by P(St-alt-NMI) coronal blocks (see Fig. 2i). In this context, we note that Eisenberg's group were the first to describe the formation of inverted worms in dilute solution by self-assembly of highly asymmetric poly(acrylic acid)-polystyrene (PAA-PS) 57 and related polystyrene-based 58,59 diblock copolymers. Such nano-objects exhibit a complex internal structure comprising hexagonally-packed hollow hoops or rods (HHH or HHR) distributed within a PS matrix. Moreover, Pan and co-workers recently reported the preparation of a HHH phase via the RAFT dispersion polymerization of styrene. 47 In this latter case, the highly regular arrangement of the hollow hoops was verified by analysis of TEM cross-sections. 47 On the other hand, it seems that the PDMAC chains are simply randomly packed within a P(St-alt-NMI) matrix for the BE microparticles obtained in the present study, with some spherical microdomains being observed (see Fig. 2e). The PDMAC48-P(St-alt-NMI)546 copolymer chains form large compound micelles (LCM) 60 comprising isolated islands of the PDMAC stabilizer block located within a continuous P(St-alt-NMI) phase (see Fig. 2f). This internal structure resembles the inverted spheres (see Fig. 2l) that are formed by diblock copolymers in the bulk, where the relatively short hydrophilic PDMAC blocks are located within the cores and are surrounded by relatively long hydrophobic P(St-alt-NMI) coronal blocks (see Fig. 2k). The morphology of this series of PDMAC48-P(St-alt-NMI)x microparticles was also studied by SEM (see Fig. 3 and S4-S7, ESI†). Low-magnification images clearly indicate the progressive growth in mean particle size and the evolution in copolymer morphology that occur on systematically increasing the target DP of the P(St-alt-NMI) core-forming block (see Fig. 3). High-resolution SEM images showed that all inverse-phase microparticles are actually enclosed within a perforated surface layer (see Fig. 3b-d, inset). Based on these SEM images, the average surface pore dimensions are estimated to be approximately 100 ± 23 nm, 72 ± 17 nm and 71 ± 12 nm for PEL, BE and LCM, respectively. Conversely, no surface pores were discernible for OLV particles (Fig. 3a, inset). This observation suggests that initial phase separation occurs during the OLV-to-PEL transition (see later).
The internal structure of selected microparticles was also studied by SEM after sectioning the microparticles using a razor blade (see Fig. 4). Their original morphology remains intact under the ultrahigh-vacuum conditions required for SEM analysis. This is primarily owing to the relatively high Tg (219 °C) of the hydrophobic P(St-alt-NMI) block. More interestingly, examination of cross-sections of these fractured microparticles confirmed that their internal structure is bicontinuous and bears a superficial resemblance to a triply periodic minimal surface (see Fig. 4). A minimal surface has its local area minimized, i.e., every point has zero mean curvature. 61 Triply periodic minimal surfaces are periodic in all three coordinate directions. At first sight, the internal structure of PEL (see Fig. 4a) appears to be a Schoen 62 gyroid surface. Increasing the DP of the P(St-alt-NMI) core-forming block to 512 (see Fig. 4b) or 546 (see Fig. 4c) results in an apparent transition from a Schoen gyroid surface to a Schwartz 63 P surface. According to Thomas et al., 64 a high interfacial energy between the two blocks leads to strong segregation during diblock copolymer self-assembly. As microphase separation occurs, an area-minimizing surface is adopted in order to lower the total interfacial energy. Such minimal surfaces best satisfy this geometric constraint and have been observed for block copolymers both in melts [65][66][67] and also in solution. 13,19,68,69 To gain a better understanding of the internal structure of these microparticles, SAXS was used to characterize three copolymer samples (PEL, BE and LCM), both in their powder form and also as 1.0% w/w dispersions in ethanol. A characteristic structural peak at q ≈ 0.02 Å⁻¹ was observed for the dispersions in each case (see Fig. S8, ESI†). This q value corresponds to a length scale of approximately 31 nm, which is attributed to the thickness of the continuous phase. As shown in Fig. 2d-f, TEM studies of cross-sectioned microparticles indicate that the thickness of the continuous copolymer phase is ~46 nm for all three copolymers, regardless of the P(St-alt-NMI) block DP. However, TEM overestimates this characteristic length scale owing to an intrinsic artefact of the sample preparation. 70 The SAXS structure peaks are slightly shifted to higher q for the corresponding dried powders owing to the absence of solvation. In addition, another strong peak was observed at q = 0.0051 Å⁻¹ (corresponding to 123 nm) for PEL. This feature was shifted to q = 0.0063 Å⁻¹ (corresponding to 100 nm) for BE but becomes much less prominent for LCM (see Fig. S8, ESI†). This latter peak is characteristic of the internal porosity within these microparticles, with the mean pore size being in good agreement with TEM images (see Fig. 2d-f). Moreover, its breadth suggests a relatively broad pore size distribution. In summary, these SAXS studies confirm that these microparticles possess internal porosity but do not support the presence of triply periodic minimal surfaces.
Fig. 1. Representative TEM images (a-e) illustrating the evolution in copolymer morphology that occurs for a series of PDMAC48-P(St-alt-NMI)x diblock copolymer microparticles prepared at 70 °C using a 50:50 w/w ethanol/MEK mixture via RAFT alternating dispersion copolymerization at 20% w/w solids. Higher-magnification images (f-j) reveal the internal structure of these micrometer-sized particles. Scale bars correspond to 2 μm (a-e) and 0.50 μm (f-j).
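The real-space length scales quoted in the SAXS analysis above follow from the Bragg relation d = 2π/q; a quick numerical check:

    # Converting the SAXS peak positions quoted above into length scales.
    import math
    for q in (0.02, 0.0063, 0.0051):        # in inverse Angstroms
        d_nm = 2 * math.pi / q / 10         # Angstrom -> nm
        print(f"q = {q} A^-1 -> d = {d_nm:.0f} nm")
    # q = 0.02   -> ~31 nm (continuous-phase thickness)
    # q = 0.0063 -> ~100 nm (BE pore spacing)
    # q = 0.0051 -> ~123 nm (PEL pore spacing)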
For the current PISA formulation, the evolution in morphology from bilayers to various inverted morphologies can be readily rationalized in terms of molecular curvature, 26 which is in turn determined by the relative volume fractions of each block. Initially, the diblock copolymer morphology is OLV. This means that the relative volume fractions of the PDMAC block and the P(St-alt-NMI) block are approximately equal, hence low-curvature interfaces are formed. Increasing the DP of the P(St-alt-NMI) block increases its effective volume fraction, leading to an increase in copolymer curvature. Like the well-established morphology transitions observed for normal phases, 71 this higher copolymer curvature drives a change in morphology from bilayers to inverted cylindrical micelles (i.e. inverted worms) and inverted spheres. The observation of trapped intermediates (see Fig. 5) during the OLV to PEL to BE transitions suggests that the mechanism for formation of these latter morphologies involves three steps: (i) initial OLV phase separation (see Fig. 5a), which results in the formation of worm-like particles and hollow bilayer structures (see Fig. 5d); (ii) fusion/stacking of hollow bilayers to generate large aggregates (see arrows in Fig. 5b); both TEM and SEM studies indicate a significant increase in particle size during the OLV to PEL transition (see Fig. 3); (iii) rearrangement to form more ordered internal structures (see arrow in Fig. 5c). This mechanism is similar to that proposed for the formation of HHH, 47,57 but these earlier studies did not involve intra-particle phase separation in the initial stages. In addition, unlike HHH or porous nanospheres, which were prepared from spherical vesicle precursors, the current inverse structures are generated from self-assembly of micrometer-sized ellipsoidal lamellae (OLV). This explains why (i) the inverse structures obtained in the present study possess an ellipsoidal morphology and (ii) the resulting particles predominantly lie in the μm (rather than nm) size range. Control experiments conducted using a 50:50 w/w ethanol/1,4-dioxane mixture under otherwise identical conditions (see Table S1, ESI†) yielded only conventional copolymer morphologies, e.g. spheres, worms and worm clusters (see Fig. S9a-c, ESI†). Increasing the target core-forming block DP did not afford inverted morphologies, but instead produced only ill-defined, colloidally unstable aggregates (see Fig. S9d, ESI†). As discussed in our previous study, 51 MEK is a significantly better solvent for the structure-directing P(St-alt-NMI) block than 1,4-dioxane. Therefore, these results suggest that the high chain mobility conferred by the former co-solvent is essential to access inverted morphologies in such PISA formulations. This is consistent with various PISA syntheses involving polystyrene-based diblock copolymers reported in the literature. [47][48][49] As styrene is a good solvent for polystyrene, the styrene-rich domains increase the chain mobility and facilitate the evolution in morphology towards inverted phases. In the present study, BET studies conducted using N2 gas as an adsorbate at 77 K indicate that these inverted structures possess significantly higher specific surface areas (42-53 m² g⁻¹) than that of the OLVs (31 m² g⁻¹), despite the larger size of the former particles (see Table 1).
The higher specic surface areas observed for these porous microparticles indicate a reduction in internal void volume, which is consistent with electron microscopy observations (see Fig. 2 and 4). In addition, this relatively high surface area suggests that the nitrogen probe molecules used in the BET measurements can diffuse through surface pores in the perforated outer shell and hence access the internal bicontinuous network. The specic surface areas determined for these microparticles are approximately a factor of two lower than those reported by Kim and coworkers for similar diblock copolymer microparticles prepared via post-polymerization processing. 19 In principle, the ability to conveniently prepare micrometer-sized particles exhibiting an inverse bicontinuous morphology via PISA is an important advantage for potential industrial scale-up. Therefore, we investigated the feasibility of a "one-pot" synthesis of such particles, targeting a diblock composition of PDMAC 60 -P(St-alt-NMI) 650 (entry 8, Table 1). More specically, DMAC (target DP ¼ 60; 40% w/w solids) was rst polymerized to high conversion via RAFT solution polymerization at 70 C in pure ethanol using 4-cyano-4-(2phenylethanesulfanylthiocarbonyl)sulfanylpentanoic acid (PETTC) RAFT agent and AIBN initiator. Aer 2.5 h, this DMAC homopolymerization had attained 89% conversion. At this point, an equimolar mixture of styrene and NMI monomer in an MEK-rich ethanol/MEK mixture was added to the reaction vessel under nitrogen. For this one-pot protocol, the nal composition of ethanol/MEK mixture was 50 : 50 w/w and the target copolymer concentration was 20% w/w solids. The alternating copolymerization was allowed to proceed for a further 10 h under RAFT dispersion polymerization conditions at 70 C. Laser diffraction studies indicated that the nal PDMAC 60 -P(Stalt-NMI) 650 microparticles obtained from this "one-pot" synthesis protocol were somewhat smaller (ca. 4.92 mm) than the PDMAC 48 -stabilized microparticles described above. However, SEM studies conrmed that the former microparticles had a similar perforated surface layer (see Fig. 6, le) and, more importantly, exhibited a bicontinuous internal morphology (see Fig. 6, right). Moreover, 1 H NMR studies indicated an overall comonomer conversion of 96% for this one-pot PISA protocol. This preliminary nding augurs well for the convenient synthesis of micrometer-sized particles with complex internal structures on a larger scale. In principle, these highly porous microparticles could be used as organic opaciers for paint formulations. In this context, more intense light scattering could be achieved by optimizing the mean size of the internal voids so that they are approximately half the wavelength of visible light (e.g. for efficient scattering at a wavelength of 500-600 nm, the void dimensions should be 250-300 nm, which is somewhat larger than the voids currently achieved). A further requirement for this particular application is that the diblock copolymer microparticles must remain intact during solvent evaporation as the wet paint dries. This should be feasible for this particular PISA formulation given the relatively high T g of the structure-directing P(St-alt-NMI) chains. Conclusions In summary, polymerization-induced self-assembly has been exploited to prepare a range of PDMAC 48 Table 1); arrows indicate PEL particles, possibly formed via fusion/stacking of hollow bilayers. 
(c) The PEL to bicontinuous ellipsoid (BE) transition (entry 5, Table 1); the red arrow indicates a relatively compact BE, possibly formed via intraparticle rearrangement from looser, more open structures such as that indicated by the blue arrow. (d) Schematic mechanism proposed for the initial phase separation, which involves expulsion of worms from the initial OLVs.
Conclusions In summary, polymerization-induced self-assembly has been exploited to prepare a range of PDMAC48-P(St-alt-NMI)x diblock copolymer particles via RAFT dispersion alternating copolymerization of styrene with N-phenylmaleimide using a 50:50 w/w ethanol/MEK binary solvent mixture. TEM analysis confirmed that the resulting micrometer-sized particles possess an ellipsoidal morphology and complex internal nanostructures. For a fixed PDMAC stabilizer block DP, increasing the DP of the P(St-alt-NMI) block leads to a gradual evolution in morphology, from inverse gyroids (PEL) to inverse worms (BE) and finally to inverse spheres (LCM). SEM studies indicate that these inverse structures have highly porous surfaces with bicontinuous internal networks. The formation of such structures is most likely driven by minimization of the total interfacial energy. The mechanism for the formation of these inverse morphologies appears to be similar to that proposed by Eisenberg and co-workers, who obtained a HHH morphology via traditional post-polymerization processing of diblock copolymers in dilute solution. 47,57 Control experiments utilizing 1,4-dioxane instead of MEK suggest that sufficiently high chain mobility is essential for achieving such inverse phases; otherwise, only kinetically-trapped morphologies are obtained when increasing the relative volume fraction of the structure-directing block. Such nanostructured micrometer-sized particles can be conveniently synthesized at high solids via a "one-pot" PISA protocol using cheap starting materials and relatively benign solvents, so they may offer some potential as new organic opacifiers. However, such an application would require the mean size of the internal voids to be further optimized in order to maximize the light scattering and hence the hiding power. This refinement is beyond the scope of the current study. Conflicts of interest The authors declare no competing financial interest.
On the Rank and Periodic Rank of Finite Dynamical Systems

A finite dynamical system is a function f : A^n → A^n where A is a finite alphabet, used to model a network of interacting entities. The main feature of a finite dynamical system is its interaction graph, which indicates which local functions depend on which variables; the interaction graph is a qualitative representation of the interactions amongst entities on the network. The rank of a finite dynamical system is the cardinality of its image; the periodic rank is the number of its periodic points. In this paper, we determine the maximum rank and the maximum periodic rank of a finite dynamical system with a given interaction graph over any non-Boolean alphabet. The rank and the maximum rank are both computable in polynomial time. We also obtain a similar result for Boolean finite dynamical systems (also known as Boolean networks) whose interaction graphs are contained in a given digraph. We then prove that the average rank is relatively close (as the size of the alphabet is large) to the maximum. The results mentioned above only deal with the parallel update schedule. We finally determine the maximum rank over all block-sequential update schedules and the supremum periodic rank over all complete update schedules.

Mathematics Subject Classifications: 05C38, 05C50, 15A03, 06E30

The architecture of an FDS f : [q]^n → [q]^n can be represented via its interaction graph IG(f), which indicates which update functions depend on which variables. More formally, IG(f) has {1, . . ., n} as vertex set and there is an arc from u to v if f_v(x) depends on x_u. In different contexts, the interaction graph is known (or at least well approximated), while the actual update functions are not. One main problem of research on FDSs is then to predict their dynamics according to their interaction graphs. However, due to the wide variety of possible local functions, determining properties of an FDS given its interaction graph is in general a difficult problem. For instance, maximising the number of fixed points of an FDS based on its interaction graph was the subject of a lot of work, e.g. in [1,2,6,13,14]. The logarithm of the number of fixed points is notably upper bounded by the transversal number of its interaction graph [2,14]. This upper bound is reached for large classes of graphs (e.g. perfect graphs) but is not tight in general [14]. Moreover, there is a dramatic change depending on whether we assume that the FDS has an interaction graph equal to a certain digraph or only contained in that digraph (this is the distinction between guessing number and strict guessing number in [5]).
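To make the definition concrete, here is a brute-force computation of IG(f) for a toy FDS (an illustrative R sketch, not code from the paper; the local functions are invented for the example):

    # Illustrative sketch: brute-force interaction graph of an FDS
    # f : [q]^n -> [q]^n, for a toy example with q = 2 and n = 3.
    q <- 2; n <- 3
    f <- function(x) c(x[2], (x[1] + x[3]) %% q, x[3])  # invented local functions

    states <- as.matrix(expand.grid(rep(list(0:(q - 1)), n)))  # all q^n states
    IG <- matrix(0, n, n)  # IG[u, v] = 1 iff f_v depends essentially on x_u
    for (u in 1:n) {
      for (s in 1:nrow(states)) {
        x <- states[s, ]
        for (a in setdiff(0:(q - 1), x[u])) {
          y <- x; y[u] <- a          # y differs from x only in coordinate u
          IG[u, f(x) != f(y)] <- 1   # arc u -> v wherever f_v(x) != f_v(y)
        }
      }
    }
    IG  # adjacency matrix of the interaction graph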
In this paper, we are interested in maximising two other very important dynamical parameters of an FDS given its interaction graph. First, the rank of an FDS f is the number of images of f. In particular, determining the maximum rank also determines whether there exists a bijective FDS with a given interaction graph. This is equivalent to the existence of so-called reversible dynamics, where the whole history of the system can be traced back in time. Second, because there is only a finite number of states, all the asymptotic points of f are periodic. The number of periodic points of f is referred to as its periodic rank. In contrast with the situation for fixed points, we derive a bound on these two quantities which is attained for all interaction graphs and all alphabets. In particular, there exists a bijection with interaction graph contained in D if and only if all the vertices of D can be covered by disjoint cycles. Moreover, we prove that our bound is attained for functions whose interaction graph is equal to a given digraph, and not only contained, for all non-Boolean alphabets. We then show that the average rank is relatively close (as D is fixed and q tends to infinity) to the maximum.

These results can be viewed as the discrete analogue of Poljak's matrix theorem in [11], which proves that the maximum rank of M^p, where M is a real matrix with given support D and p ≥ 1, is given by the maximum number of pairwise independent p-walks in D (see the sequel for a precise definition). However, our results extend Poljak's result in the discrete case in three ways (but Poljak's result cannot be viewed as a consequence of our results). Firstly, they hold for all functions, not only linear functions. Secondly, they explicitly determine the maximum periodic rank. Thirdly, the average rank of a real matrix cannot be properly defined, hence our result on the average rank of finite dynamical systems is completely novel.

The results mentioned above hold for the so-called parallel update schedule, where all entities update their local state at the same time, and hence x becomes f(x). We then study complete update schedules, where all entities update their local state at least once, and block-sequential schedules where all entities update their local state exactly once (the parallel schedule being a very particular example of a block-sequential schedule). We then prove that the upper bound on the rank in parallel remains valid for any block-sequential schedule but is no longer valid for all complete schedules. We also determine the maximum periodic rank when considering all possible complete schedules. In particular, there exists a function f with interaction graph D and a complete schedule σ such that f^σ is a bijection if and only if all the vertices of D belong to a cycle.

The rest of the paper is organised as follows. Section 2 introduces some useful notation and describes our results on the maximum (periodic) rank in parallel. Section 3 then proves our result on the average rank. Finally, the maximum rank and periodic rank under different update schedules are investigated in Section 4.

2 Maximum (periodic) rank in parallel

2.1 Background and notation

Let D = (V, E) be a digraph on n vertices; let V = {1, . . ., n} be its set of vertices and E ⊆ V² its set of arcs. The digraph may have loops, but no parallel arcs. The adjacency matrix M ∈ {0, 1}^{n×n} has entries m_{u,v} = 1 if and only if (u, v) ∈ E. We denote the in-neighbourhood of a vertex v in D by N^−(v; D); when there is no confusion, we shall omit the dependence on D.
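As a concrete illustration of these two quantities (again an R sketch with an invented toy system, not code from the paper), the rank is the number of distinct images of f, and the periodic rank can be obtained by iterating f until every state has entered its terminal cycle, since the image of a sufficiently high iterate is exactly the set of periodic points:

    # Illustrative sketch: rank and periodic rank of a toy FDS by enumeration.
    q <- 2; n <- 3
    f <- function(x) c(x[2], (x[1] + x[3]) %% q, x[1])  # invented FDS

    states <- as.matrix(expand.grid(rep(list(0:(q - 1)), n)))
    key <- function(x) paste(x, collapse = ",")         # state -> string id

    rank_f <- length(unique(apply(states, 1, function(x) key(f(x)))))  # |Ima(f)|

    # After q^n iterations every state lies on its terminal cycle, so the
    # image of the iterate is exactly the set of periodic points.
    iter <- states
    for (p in 1:(q^n)) iter <- t(apply(iter, 1, f))
    periodic_rank <- length(unique(apply(iter, 1, key)))               # |Per(f)|

    c(rank = rank_f, periodic_rank = periodic_rank)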
This is extended to sets of vertices: N^−(S) = ∪_{v∈S} N^−(v). The out-neighbourhood is defined similarly. A source is a vertex with empty in-neighbourhood; a sink is a vertex with empty out-neighbourhood. The in-degree of v is the cardinality of its in-neighbourhood and is denoted by d_v.

A walk w = (v_0, . . ., v_p) is a sequence of (not necessarily distinct) vertices such that (v_s, v_{s+1}) ∈ E for all 0 ≤ s ≤ p − 1. A path is a walk where all vertices are distinct. A cycle is a walk where only the first and last vertices are equal. We refer to p as the length of the walk; a p-walk is a walk of length p. We say that two p-walks w = (w_0, . . ., w_p) and w′ = (w′_0, . . ., w′_p) are independent if w_s ≠ w′_s for all 0 ≤ s ≤ p. We denote the maximum number of pairwise independent p-walks as α_p(D).

Edmonds gave a formula for α_1(D) in [3], based on the König-Ore formula. This was greatly generalised by Poljak, who showed that α_p(D) could be computed in polynomial time and who gave a formula for α_p(D) for all p ≥ 1 in [11]. Suppose that C_1, . . ., C_r and P_1, . . ., P_s are vertex-disjoint cycles and paths. The cycle C_i = (c_0, . . ., c_{l−1}) produces l independent p-walks of the form W_a = (c_a, c_{a+1}, . . ., c_{a+p}), where indices are computed mod l and 0 ≤ a ≤ l − 1. The path P_j = (v_0, . . ., v_m) similarly produces |P_j| − p independent p-walks of the form (v_a, . . ., v_{a+p}) for 0 ≤ a ≤ m − p. Poljak's theorem asserts that this is the optimal way of producing pairwise independent p-walks. We denote the number of vertices of a cycle C and of a path P as |C| and |P|, respectively.

Theorem 1 ([11]). For every digraph D and a positive integer p,

α_p(D) = max { Σ_i |C_i| + Σ_j (|P_j| − p) },

where the maximum is taken over all families of pairwise vertex-disjoint cycles and paths C_1, . . ., C_r and P_1, . . ., P_s with |P_j| > p. In particular, for p ≥ n,

α_p(D) = max Σ_i |C_i|,

where the maximum is taken over all families of pairwise vertex-disjoint cycles.

The interaction graph IG(f) of an FDS f has an arc from u to v if and only if f_v depends essentially on u, i.e. there exist x, y ∈ [q]^n which only differ on coordinate u such that f_v(x) ≠ f_v(y). The set of all functions over an alphabet of size q and whose interaction graph is equal to D is denoted as F[D, q]; the set of those whose interaction graph is contained in D is denoted as F(D, q). We consider successive iterations of f; we thus denote f^1(x) = f(x) and f^{k+1}(x) = f(f^k(x)) for all k ≥ 1. Recall that x is an image if there exists y such that x = f(y); x is a periodic point of f if there exists k ∈ N such that f^k(x) = x. We are interested in the following quantities:

1. the rank of f is the number of its images: |Ima(f)|;
2. the periodic rank of f is the number of its periodic points: |Per(f)|.

It will be useful to scale these two quantities using the logarithm in base q: ima(f) = log_q |Ima(f)| and per(f) = log_q |Per(f)|. Moreover, the maximum (periodic) rank over all functions in F[D, q] is denoted as ima[D, q] (respectively per[D, q]), and ima(D, q) and per(D, q) are defined similarly. We finally note that per(f) = ima(f^p) for all p ≥ q^n − 1. Therefore, the main strategy is to maximise the scaled rank of f^p for all p; we thus denote

ima[D, q, p] := max{ima(f^p) : f ∈ F[D, q]}, ima(D, q, p) := max{ima(f^p) : f ∈ F(D, q)}.

We then have per[D, q] = ima[D, q, p] for any p ≥ q^n − 1, and similarly for ima(D, q) and per(D, q).

The case q = 2 is indeed specific, for there exist graphs D such that max{ima(f^p) : f ∈ F[D, 2]} < α_p(D) for all p ≥ 1. We shall investigate this in the next subsection.

We obtain two immediate consequences of Corollary 4. Firstly, we determine which graphs admit so-called reversible dynamics, i.e. for which graphs D we can find a permutation in F[D, q].

Corollary 6 (Reversible dynamics in parallel). For any q ≥ 3, there exists f ∈ F[D, q] which is a permutation of [q]^n if and only if all the vertices of D can be covered by disjoint cycles.
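The "if" direction of the disjoint-cycle condition is easy to see concretely: when the vertices of D are covered by disjoint cycles, letting each vertex copy the state of its predecessor on its cycle yields a bijection (this gives an interaction graph contained in D; over q ≥ 3 the copy maps can be perturbed as in the paper to make it equal to D). A minimal illustrative R sketch for a single 3-cycle, using the plain copy map:

    # Illustrative sketch: on a digraph covered by one 3-cycle (1 -> 2 -> 3 -> 1),
    # letting each vertex copy its predecessor is a bijection of [q]^3.
    q <- 3
    f <- function(x) c(x[3], x[1], x[2])  # each coordinate copies its in-neighbour

    states <- as.matrix(expand.grid(rep(list(0:(q - 1)), 3)))
    images <- apply(states, 1, function(x) paste(f(x), collapse = ","))
    length(unique(images)) == nrow(states)  # TRUE: f is a permutation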
Secondly, Robert's seminal theorem indicates that if the interaction graph of f is acyclic, then f^n is constant (i.e. per(f) = 0) [16]. Since α_n(D) = 0 if and only if D is acyclic, we obtain the following result.

Corollary 7. The graph D is acyclic if and only if f^n is constant for all q and all f ∈ F[D, q].

The rest of this subsection is devoted to the proof of Theorem 3. We begin with the upper bound on the scaled rank, which follows a form of the max-flow min-cut theorem (or at least, the min-cut upper bound). We now review the communication model based on terms from logic introduced by Riis and Gadouleau in [15]. Let {x_1, . . ., x_k} be a set of variables and consider a set of function symbols {f_1, . . ., f_l} with respective arities (numbers of arguments) d_1, . . ., d_l. A term is defined to be an object obtained from applying function symbols to variables recursively. We say that u is a subterm of t if the term u appears in t. Furthermore, u is a direct subterm of t if t = f_j(v_1, . . ., u, . . ., v_{d_j}), and we denote it by u ≺ t.

Let Γ = {t_1, . . ., t_r} be a set of terms built on variables x_1, . . ., x_k and function symbols f_1, . . ., f_l of respective arities d_1, d_2, . . ., d_l. We denote the set of variables that occur in terms in Γ as Γ_var and the collection of subterms of one or more terms in Γ as Γ_sub. To the term set Γ we associate the acyclic digraph G_Γ, whose vertex set is Γ_sub and whose arcs are given by the direct subterm relation ≺. The set of sources in G_Γ is Γ_var and the set of sinks is Γ. The min-cut of Γ is the minimum size of a vertex cut of G_Γ between Γ_var and Γ.

An interpretation for Γ over [q] is an assignment of the function symbols ψ = {f̂_1, . . ., f̂_l}, where f̂_i : [q]^{d_i} → [q] for all 1 ≤ i ≤ l. We note that f̂_i may not depend essentially on all its d_i variables. Once all the function symbols f_i are assigned functions f̂_i, then by composition each term t_j ∈ Γ is assigned a function t̂_j : [q]^k → [q]. We shall abuse notation and also denote the induced mapping of the interpretation as ψ : [q]^k → [q]^r, defined as ψ(a) = (t̂_1(a), . . ., t̂_r(a)).

Intuitively, if S is a vertex cut of G_Γ between Γ_var and Γ, then the terms in Γ "depend on" the terms in S. As such, the scaled rank of any induced mapping ψ cannot be greater than the size of S. This intuition is given formally in Theorem 8, which states that the scaled rank of any induced mapping is at most the min-cut of Γ.

We illustrate the communication model and Theorem 8 by an example over the variables Γ_var = {x_1, x_2, x_3}, in which two subterms u and v form a vertex cut of G_Γ between Γ_var and Γ; in fact, the min-cut is indeed 2. A possible interpretation for Γ over [2] (all operations mod 2) induces a mapping whose scaled rank is log_2 3, which is indeed no more than 2.

Lemma 9. For any p ≥ 1 and f ∈ F(D, q), ima(f^p) ≤ α_p(D).

Proof. For all v ∈ V, denoting N^−(v; D) = {u_1, . . ., u_k} sorted in increasing order, we have f_v(x) = f̂_v(x_{u_1}, . . ., x_{u_k}). By definition, f^p is the induced mapping of an interpretation for Γ^p = {t^p_1, . . ., t^p_n}, where Γ^0 = {t^0_1 = x_1, . . ., t^0_n = x_n} and, for all 1 ≤ s ≤ p, t^s_v = f_v(t^{s−1}_{u_1}, . . ., t^{s−1}_{u_k}). A flow in G_{Γ^p} is a set of vertex-disjoint paths from Γ^0 to Γ^p. Such a path is of the form t_W = (t^0_{w_0}, . . ., t^p_{w_p}) where w_{s−1} ∈ N^−(w_s; D); it naturally induces a walk W = (w_0, . . ., w_p) in D.
Since the paths t_W and t_{W′} are vertex-disjoint, the corresponding walks W and W′ are independent. Therefore, the max-flow of G_{Γ^p} is at most α_p(D). By the max-flow min-cut theorem and Theorem 8, ima(f^p) ≤ α_p(D).

Let W_1, . . ., W_α be α := α_p(D) independent walks of length p, where we denote W_i = (w_{i,0}, . . ., w_{i,p}). According to Theorem 1, those arise from families of disjoint cycles and paths. By construction, if w precedes w′ on one walk and w appears on another walk and has a predecessor there, then w precedes w′ in the other walk as well. For all 0 ≤ s ≤ p, we denote W^s = {w_{i,s} : 1 ≤ i ≤ α}.

We can now construct the finite dynamical systems which attain the upper bound on the scaled rank. The case q = 2 and f ∈ F(D, 2) is easy. We use a finite dynamical system where w_{i,s+1} simply copies the value x_{w_{i,s}}; this will transmit the value x_{w_{i,0}} along the walk W_i.

Lemma 10. The function f ∈ F(D, 2) defined by letting w_{i,s+1} copy the value x_{w_{i,s}} attains the upper bound; it is easy to show, by induction on s, that for all 0 ≤ s ≤ p the values x_{w_{i,0}} are transmitted along the walks W_i.

For q ≥ 3 and f ∈ F[D, q], we use a finite dynamical system where w_{i,s+1} wishes to copy the value x_{w_{i,s}} whenever it can. Each other vertex u ∈ N^−(w_{i,s+1}) has a red light (the value 2). If all lights are red, then w_{i,s+1} cannot copy the value x_{w_{i,s}} any more; instead it flips it from 0 to 1 and vice versa.

Lemma 11. For q ≥ 3, the function f ∈ F[D, q] in which w_{i,s+1} flips the value x_{w_{i,s}} if x restricted to N^−(w_{i,s+1}) \ {w_{i,s}} equals (2, . . ., 2), and copies x_{w_{i,s}} otherwise, attains the upper bound.

Proof. The proof is similar to, albeit more complex than, that of Lemma 10.

Proof of Claim 12. We prove the first assertion. First, suppose there exists w_{i,s} ∈ W^s where x_{w_{i,s}} ≠ 2 and x_{w_{i,s}} ≠ y_{w_{i,s}}. Second, suppose that for any w_{i,s} ∈ W^s such that x_{w_{i,s}} ≠ y_{w_{i,s}}, we have {x_{w_{i,s}}, y_{w_{i,s}}} = {0, 1}. For the second assertion, let v ∈ U^{s+1}; then either v ∈ U or v = w_{i,t+1} with 0 ≤ t ≠ s. If v ∈ U, then f_v(x) ∈ {0, 1} for any x. Suppose that v = w_{i,t+1} is such that f_{w_{i,t+1}}(x) ∉ {0, 1}. Then x_{w_{i,t}} ∉ {0, 1}, which implies w_{i,t} ∈ W^s, say w_{i,t} = w_{j,s}.

Claim 13. For all 0 ≤ s ≤ p, |f^s_{W^s}(X)| = |X| and, for any x ∈ X, f^s_{U^s}(x) ∈ {0, 1}^{|U^s|}.

Proof of Claim 13. The proof is by induction on s; the statement is clear for s = 0. Suppose it holds for all values up to s, and consider any distinct x, y ∈ X.

Maximum rank in the Boolean case

We first exhibit a class of digraphs for which the upper bound on the rank is not reached in the Boolean case.

Proposition 14. Let D be a digraph such that α_1(D) = n and d_v = 2 for all vertices v ∈ V. Then ima(f^p) < α_p(D) for all f ∈ F[D, 2] and all p ≥ 1.

Proof. Suppose f ∈ F[D, 2] is a permutation of {0, 1}^n; then all the local functions f_v must be balanced. Since d_v = 2 and f_v depends essentially on both of its variables, each f_v is affine. Therefore, f(x) = Mx + c, but since every vertex has even in-degree, the sum of all rows in M (in GF(2)) equals zero and M is singular.

For instance, if D is the undirected cycle on n vertices, or the directed cycle on n vertices with a loop on each vertex, then ima(f^p) < α_p(D) for all f ∈ F[D, 2] and all p ≥ 1. It is unknown whether there exist other such examples. On the other hand, we can easily exhibit a class of digraphs which do reach the bound. For instance, let D = K̃_n be the clique with a loop on each vertex (alternatively, E = V²). Then the transposition of (0, . . ., 0) and (1, . . ., 1), which fixes every other state, is a permutation in F[K̃_n, 2]. Less obviously, the clique K_n also admits a permutation of {0, 1}^n.

Proposition 15. For any n ≠ 3, ima[K_n, 2] = n.
Proof. Firstly, let n be even. Then we claim that f(x) = Mx is a permutation, or equivalently that det(M) = 1, for det(M) = d(n) mod 2, where d(n) is the number of derangements (fixed-point-free permutations) of [n]. Enumerating the permutations of [n] according to their number p of fixed points relates n! to the quantities d(n − p). Since n! and n_1, . . ., n_{n−1} are all even, it follows that d(n) is odd, thus det(M) = 1. Secondly, let n ≥ 5 be odd. We prove the result by induction on odd n. Let us settle the case where n = 5: we construct f ∈ F[K_5, 2] explicitly, and it is easy to check that f is a permutation of [2]^5.

The inductive case is similar. Suppose that g ∈ F[K_n, 2] is a permutation; then construct f ∈ F[K_{n+2}, 2] from g. Again, it is easy to check that f is a permutation of [2]^{n+2}.

Problem 16. Find a good lower bound on the maximum rank or maximum periodic rank in F[D, 2].

3 Average rank

Theorem 17. The average scaled rank in F[D, q] tends to α_1(D) as q tends to infinity.

Proof. The case α_1(D) = 0 is trivial, thus let a := α_1(D) ≥ 1 and (u_1, v_1), . . ., (u_a, v_a) be a collection of pairwise independent arcs. Let q be large enough and f be chosen uniformly at random amongst F[D, q]. Let h_0 = (x_{u_1}, . . ., x_{u_a}) : [q]^n → [q]^a and, for any 1 ≤ i ≤ a, let h_i be defined accordingly. Let c_i be defined as c_0 = 1 and c_i = |Ima(h_i)|; all we need is to prove the following claim: with high probability, |Ima(h_i)| is as large as claimed. The proof is by induction on i. The claim clearly holds for i = 0; suppose it holds for i. Let g = (f_{v_1}, . . ., f_{v_i}, x_{u_{i+2}}, . . ., x_{u_a}) : [q]^n → [q]^{a−1} and consider the set Z of images of g which appear frequently in the image of h_i. Now let N be the in-neighbourhood of v_{i+1}; note that u_{i+1} ∈ N. Therefore, for each z ∈ Z, there exist at least (1/2)c_i q values of x_N such that z = g(x_N, y_{V\N}) for some y_{V\N}; denote this set of values as X. On X, f̂_{v_{i+1}}(x_N) is chosen uniformly at random.

Claim 18. With probability exponentially small, |f̂_{v_{i+1}}(X)| ≤ (1/2)|X|. Therefore, with high probability, |f̂_{v_{i+1}}(X)| > (1/2)|X| for all z ∈ Z, and hence the claim holds for i + 1.

Conversely, let us remove all the arcs connecting strong components of D and all the chords of any cycle in D. We obtain a new graph D′ which is the disjoint union of strong chordless graphs; the trivial components T_1, . . ., T_t of D′ are exactly those of D. Let C_1, . . ., C_k be a collection of cycles of D′ which cover all the vertices that do not belong to a trivial component, let σ = (C_1, . . ., C_k), and define f_v(x) = Σ_{u∈N^−(v)} x_u mod q, where an empty sum is equal to zero and the neighbourhood is according to D′. It is easy to check that f^{(C_i)} is a permutation for all 1 ≤ i ≤ k and hence that {x ∈ [q]^n : x_{T_1} = . . . = x_{T_t} = 0} is a set of q^{n−T(D)} periodic points of f^σ.

Next, by a similar argument we prove that per(D, q) actually approaches n − T(D).

Theorem 23. For all D, the supremum of the scaled periodic rank over all q and all complete schedules σ equals n − T(D).

Proof. Let C_1, . . ., C_k be a collection of cycles which cover all vertices belonging to a cycle, let W denote the set of remaining vertices, and let σ = (W, C_1, . . ., C_k). Let q − 1 = 2^m be large enough (m ≥ 2^{n²+1}) and let α be a primitive element of GF(q − 1). Denote the arcs in D as e_1, . . ., e_l. Let A ∈ GF(q − 1)^{n×n} be such that a_{u,v} = α^{2^i} if (u, v) = e_i and a_{u,v} = 0 if (u, v) ∉ E, and let g(x) = Ax. Now f ∈ F[D, q] is given as follows: view [q] = GF(q − 1) ∪ {q − 1} and set

f_w(x) = 0 if x_u ∈ GF(q − 1) for all u ∈ N^−(w), and q − 1 otherwise, for all w ∈ W;
f_v(x) = g_v(x) if x_u ∈ GF(q − 1) for all u ∈ N^−(v), and q − 1 otherwise, for all v ∉ W.

Then f acts like g on the set of states X = {x ∈ GF(q − 1)^n : x_W = (0, . . ., 0)};
in particular, we have f(X) ⊆ X. We can then remove W and instead consider h ∈ F[D \ W, q − 1] such that h_v(x_{V\W}) = g_v(x_{V\W}, 0_W) for all v ∉ W. All we need to prove is that h^{(C_1,...,C_k)} is a permutation of GF(q − 1)^{n−T(D)}.

Denote the square submatrix of A induced by the vertices of C_j as A_j. Then we remark that det(A_j) ≠ 0 for any 1 ≤ j ≤ k. Indeed, let K_1, . . ., K_l denote all the hamiltonian cycles in the subgraph induced by the vertices of C_j (and, without loss of generality, K_1 = C_j). For any 1 ≤ a ≤ l, let S(a) = Σ_{e_i∈K_a} 2^i. We note that S(1), . . ., S(l) are all distinct, hence α^{S(1)}, . . ., α^{S(l)} are all linearly independent (when viewed as vectors over GF(2)), and det(A_j), a signed sum of the α^{S(a)}, is nonzero.

Now h^{(C_j)}(x) = Ã_j x, where

Ã_j = ( A_j  B_j ; 0  I ),

(A_j | B_j) being the rows of A corresponding to C_j and I the identity matrix of order n − T(D) − |C_j|. Since A_j is nonsingular, so is Ã_j. Hence h^{(C_j)} is a permutation of GF(q − 1)^{n−T(D)}, and by composition, so is h^{(C_1,...,C_k)}.

If W is empty, then we can simplify the proof of Theorem 23 and work with GF(q)^n instead of GF(q − 1)^{n−T(D)} (this time q = 2^p), hence we obtain a permutation. This yields the following corollary on the presence of reversible dynamics.

Corollary 24. There exist q, σ and f ∈ F[D, q] such that f^σ is a permutation of [q]^n if and only if all the vertices of D belong to a cycle.

The theorem brings the following natural question.

Problem 25. Is there an analogue of Theorem 23 for the rank?
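Theorem 17 can also be probed numerically. The sketch below (illustrative, not from the paper) draws random functions whose interaction graph is contained in a small assumed digraph D and estimates the average scaled rank; for large q a random function has interaction graph equal to D with high probability, so this approximates the average over F[D, q], and the estimate should creep toward α_1(D) as q grows:

    # Illustrative sketch: Monte Carlo estimate of the average scaled rank
    # of random functions in F(D, q) for a toy 3-vertex digraph (assumed).
    set.seed(1)
    in_nbrs <- list(2, c(1, 3), 3)  # in-neighbourhoods N^-(1), N^-(2), N^-(3)
    n <- 3

    avg_scaled_rank <- function(q, trials = 100) {
      states <- as.matrix(expand.grid(rep(list(0:(q - 1)), n)))
      mean(replicate(trials, {
        # each local function f_v : [q]^{d_v} -> [q] drawn uniformly at random,
        # stored as a lookup table indexed by the values of the in-neighbours
        tabs <- lapply(in_nbrs, function(N) sample(0:(q - 1), q^length(N), TRUE))
        img <- apply(states, 1, function(x) {
          vapply(1:n, function(v) {
            N <- in_nbrs[[v]]
            tabs[[v]][sum(x[N] * q^(seq_along(N) - 1)) + 1]
          }, numeric(1))
        })
        log(length(unique(apply(img, 2, paste, collapse = ","))), base = q)
      }))
    }
    sapply(c(2, 3, 5, 8), avg_scaled_rank)  # average scaled ranks for growing q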
ANALYSIS OF METROLOGICAL PROPERTIES OF FIBER BRAGG GRATINGS WITH A CONSTANT AND VARIABLE PERIOD

The paper presents periodic structures in terms of their metrological properties, distinguishing between fiber Bragg gratings (FBG) with a constant and with a variable period. The process of their formation, their characteristic features and their applications in many areas are described. On the basis of the literature, results of research and measurements of quantities such as temperature and strain performed with periodic structures written into optical fibers are presented. Analysis of the presented measurements made it possible to outline the ranges and accuracies of measurement for the individual applications.

Keywords: fiber Bragg grating, optical sensors, uniform fiber Bragg grating, chirped fiber Bragg grating

Introduction

The phenomenon of photosensitivity in optical fiber is the key phenomenon underlying the fabrication of Bragg gratings in the fiber core, first demonstrated by K. O. Hill et al. in 1978 [14]. However, only eleven years later (1989), a pioneering paper on the fabrication of Bragg gratings published by G. Meltz and colleagues [39] became a milestone for fiber optic sensors. It describes a method of producing Bragg gratings using two intersecting UV beams directed at the side of an optical fiber, incident on its cladding. This method enabled the creation of gratings with a chosen period and modulation depth, while being much more efficient than earlier methods, which offered no such control over the grating parameters [16].

These publications initiated the dynamic development of optical fiber technology around the world. In a short time many new methods for producing Bragg gratings arose and, as a result, their quality and the number of potential applications increased drastically.
The replacement of the first methods of producing periodic structures, such as internal writing [14] and the holographic technique [39], by the phase mask technique [2,15] allowed a significant reduction in manufacturing costs through the use of cheaper equipment, while increasing product quality. The disadvantage of the phase mask technique is the need to use a separate mask for each different Bragg wavelength. However, it is possible to tune the wavelength by tensioning the fiber during the writing process; the Bragg wavelength of the subsequently relaxed fiber can vary by 2 nm [16].

The phase mask technique not only ensures high-quality periodic structures but is also very flexible: it can be used to produce gratings with controlled spectral characteristics. For example, the typical spectral response of a uniform Bragg grating, one that has a constant refractive index period and a constant modulation depth along the length of the fiber, exhibits sidebands (secondary maxima) on both sides of the main reflection peak. In applications such as wavelength division multiplexing this type of response is not desirable [16]. With the phase mask technique it is possible to suppress the sidebands by the apodization procedure, i.e. a change in modulation depth along the grating [1,37].

The phase mask technique has also been adapted to produce periodic structures with variable periods, i.e. those that have a varying refractive index period along the length of the grating to extend their spectral response.

Another method that enabled the development of fiber optic sensors is the point-by-point (PbP) method. Despite the first presentation of a structure made by the PbP method in 1993 [36], it did not arouse much interest until light sources with femtosecond pulse lengths came into use [46]. This technique allows the writing of Bragg gratings in photonic fibers [28]. A further advantage of this technique is the possibility of using the produced periodic structures at temperatures close to 1000°C, owing to structural modifications of the glass during their fabrication [38].

The use of fiber-optic periodic structures as sensors is very popular because they can be used in flammable and chemically aggressive environments; their great advantage is that they are insensitive to changes in the electromagnetic field. Their negligible weight and size make it possible in most cases to neglect their influence on the object under study [26].

Thanks to continuous research on the use of fiber Bragg gratings as measuring sensors, many methods for their application have been developed. Most of the methods presented in published works are based on the use of multiple FBG sensors in a single application.

Simple periodic structures such as uniform Bragg gratings, thanks to the linear mapping of the measured quantity onto the Bragg wavelength shift, are excellent transducers of physical quantities such as temperature and strain [24,35], displacement [48] or force [10]. These structures are also widely used in the simultaneous measurement of several physical quantities, e.g. elongation and temperature [25], or force and temperature [22].
Uniform fiber Bragg sensors are also being tested for measuring the strains occurring in the structures of aircraft wings and the masts of sea-going ships, and for monitoring the condition of bridge structures, constructions particularly exposed to external forces. Noteworthy is the use of this structure in the field of medicine. The optical sensor makes it possible to monitor vibrations of the body caused by basic vital activities, such as the heartbeat and breathing. This use of the periodic structure offers many possibilities for monitoring the physiological state of the examined person without direct contact with the patient's skin (gel or dry electrodes) [7].

In addition to the use of uniform Bragg gratings as sensors, research is also being carried out on their use as optical switches based on optical bistability. The first works describing optical bistability phenomena using a single Bragg grating appeared in 1995. They reported an optical switching threshold at an input signal power of 200 W, a much higher value than practical optical switching power levels [43].

An article from 2015 proposes an optically bistable system that uses two uniform Bragg gratings and an erbium-doped optical fiber. It was shown that the switching power of the system can be reduced to 12.5 mW by increasing the length of the erbium fiber, which is characterized by a high value of the nonlinear refractive index; it was also noted that reducing the grating length from 5 mm to 4 mm causes an almost twofold increase in switching power [21].

Bragg gratings with a variable period have many applications. In particular, linearly chirped periodic structures of this type have found a special place in optics as devices for dispersion correction and compensation. This application has also resulted in the production of very long, high-quality wide-band gratings intended for high-speed transmission over long distances [27,33] and for WDM transmission [8,11,40]. Other applications include higher-order fiber dispersion compensation, ASE attenuation, amplifier gain flattening, bandpass filters [20] and real-time Fourier transformation [3].

A very interesting application of the CFBG is its use as a replacement for an optical spectrum analyzer. An article from 2017 presents a CFBG interrogation system that can simultaneously measure positive/negative strains and temperature changes with a resolution of about 1 με (thanks to a photodiode with a sensitivity of 0.3-0.4 nW). A chirp of 5 nm can provide a strain measurement range of around ±4000 με [34].

Chirped Bragg gratings have also found application as measuring probes used to determine impact velocity, detonation velocity, shock wave profile or pressure profile in inert and energetic materials. The diameter of a measuring probe using the chirped structure does not exceed 150 μm, which allows it to be placed directly in the material without disturbing the physical phenomena. A sensor placed in this way enables shock waves to be traced inside the material by means of the Bragg wavelength. In this application, the velocity (several km/s) and the shock wave profile are measured by recording the spectrum reflected from the CFBG [4].
A combined application of a sensor using Bragg gratings with constant and variable period is presented in a publication from 2012. The author presented a method enabling the simultaneous measurement of strain and temperature using a single uniform Bragg grating with a properly selected chirped zone. Providing the same temperature sensitivity and different strain sensitivity for the two parts of the sensor, together with experimental measurements of the quality of the proposed system, made it possible to state that the presented application is fully functional. The sensor grating was placed in such a way that one half was in a zone of variable axial strain induced by changes in the cross-section of the sample, while the other half was in a zone with a constant cross-section and a constant strain value. The author also presented the nonlinearity errors obtained in the processing characteristics for measuring strain and temperature, amounting to 2.7% and 1.5% respectively, with the sensitivity coefficients for strain and temperature being 0.77× m/ε and 4.13× m/K respectively. The maximum differences between the values obtained from the indirect measurement and the set values were 110 με for strain and 3.8°C for temperature, over measurement ranges of 2,500 με and 40°C respectively [23].

While discussing Bragg gratings with constant and variable period, it is worth mentioning the possibility of transforming a uniform FBG into a chirped one using, for example, a strain or temperature gradient along the length of the uniform FBG. The strain or temperature gradient can be produced by various methods:
 combining the FBG with the base using a soft adhesive, which gradually relieves the strain of the grating [12],
 narrowing the external diameter of the FBG using acid [42],
 using a cantilevered beam with a non-uniform cross-section [13],
 depositing a retaining film of varying thickness on the surface of a constant-period FBG [9].

The paper presents issues concerning the optical parameters of uniform and variable-period periodic structures, and their use as transducers for temperature and strain measurement.

Optical parameters of periodic structures with a uniform and variable period

A fiber Bragg grating consists of regions of different refractive index written into the core of a single-mode optical fiber.

Fig. 1. Schematic of an optical fiber with a Bragg grating written in it [20]

These periodic changes have a sinusoidal character, characterized by their period Λ and their amplitude Δn. When a beam of light propagates in the fiber in the direction perpendicular to the Bragg grating planes, a specific wavelength is reflected from the incident beam. The structure is capable of backward reflection of the wavelength satisfying the Bragg condition

m·λ_B = 2·n_eff·Λ (1)

where m is a natural number defining the order of reflection of the radiation, λ_B is the Bragg wavelength, n_eff is the average (effective) value of the refractive index modulation in the Bragg structure, and Λ is the grating period. Rays reflected from successive grating planes add in phase, creating a beam reflected from the Bragg structure by constructive interference, while the remaining part of the radiation, which does not satisfy condition (1), continues to propagate without loss [20].

Considering the period and the distribution of the refractive index inside the periodic structure, different types of Bragg gratings can be distinguished. In the remainder of this work, two types of fiber Bragg gratings, uniform and chirped, are discussed from the metrological standpoint.
The uniform Bragg grating is the basic type of such structures. It is characterized by a constant modulation depth and a constant refractive index period along the fiber axis, as shown in Fig. 2.

Fig. 2. Diagram of a single FBG with its corresponding spectrum [20]

A structure with a non-uniform period along its length is called chirped. The chirp can take many different forms: the period can vary symmetrically, or increase or decrease along the grating; the chirp may be linear or quadratic, or may contain jumps in the period; the grating may also have a period that changes randomly along its length. Figure 3 shows the diagram of a chirped Bragg grating and its corresponding spectrum [20].

Proper analysis of metrological properties should be based on quality indicators of fiber optic periodic structures. The assessment is made by analyzing the spectra obtained as a result of the tests; these may be transmission or reflection spectra.

One of the quality indicators is the reflectance of the periodic structure. In Figure 4 the reflectance value R is indicated; it is the ratio of the difference between the reference power, taken from the part of the characteristic outside the transmission peak, and the minimum power at the tip of the transmission peak, to the reference power. In a real measurement of the spectral characteristics, variations in the power level of the spectrum result from the light source used for the measurement. The way to minimize the impact of the shape of the spectral characteristic of the source is to take the quotient of two measured spectra: the spectrum of the optical fiber without the written periodic structure and the spectral characteristic of the Bragg structure [20].

Fig. 4. Determination of FBG reflectance based on its reflection characteristics

The actual spectral characteristics of Bragg structures differ from the ideal ones. The distortion consists of the occurrence of sidebands on both sides of the reflection peak and a finite slope of the edges of the reflection peak, as shown in Fig. 5. The sideband reflectance RL is an important quality indicator affecting the applicability and accuracy of the Bragg structure used. It is defined as the ratio of the reflection amplitude of the first-order sideband to that of the main peak. Obtaining minimal reflectance of the first-order sidebands (those with the highest amplitude) in the fabrication process indicates a high quality of the obtained structure.

The reason for the formation of the sidebands in the spectrum is the extreme regions of the Bragg structure, where the refractive index profile has a steep slope. The method for levelling the sidebands is to change the shape of the refractive index profile along the optical fiber axis (apodization) during the fabrication of the Bragg structure.
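Both the reflectance defined above and the FWHM discussed below can be extracted mechanically from a sampled spectrum. The following R sketch is purely illustrative; the Gaussian peak shape and all numerical values are assumed, not taken from the cited studies:

    # Illustrative sketch: reflectance and FWHM from a sampled reflection spectrum.
    lambda <- seq(1549, 1551, by = 0.001)             # wavelength grid [nm], assumed
    R_spec <- 0.9 * exp(-((lambda - 1550) / 0.05)^2)  # synthetic peak, R_max = 0.9

    peak <- max(R_spec)                       # reflectance of the main peak
    at_half <- range(lambda[R_spec >= peak / 2])
    fwhm <- diff(at_half)                     # spectral width at half maximum [nm]
    c(reflectance = peak, FWHM_nm = fwhm)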
Fig. 5. Spectral characteristics of the Bragg structure with the slope line and sidebands marked, and determination of their reflectance

The research results presented in the literature show that for a Bragg structure with a length of L = 5 mm the maximum reflectance is about 60%, while for a structure with a length of L = 25 mm the maximum reflectance reaches 99.98%. However, increasing the length of the grating, despite yielding high reflectance, causes a drastic growth of the sidebands [17]. After applying apodization, a grating with a length of L = 10 mm attained a reflectance of about 60%, while a reflectance of 99.99% was obtained for a Bragg structure of length L = 45 mm. Despite the need to increase the length of apodized Bragg structures in order to achieve the highest possible reflectance, nearly ideal spectra free of sidebands were obtained [17].

The half width FWHM (Full Width at Half Maximum) is a quality indicator defining the spectral width of the reflection of the periodic structure. Figure 6 shows a reflection spectrum with the FWHM indicated; it is defined as the spectral distance between the two points on the main reflection peak at which the response falls to half of the peak value [20].

The studies presented in the literature define a range of typical FWHM values, starting at 10 pm for Bragg structures with a constant period [6] and reaching 100 nm for CFBG gratings [38].

The half width measured in constant-period structures is a very important parameter for the use of fiber Bragg gratings in sensor applications. Narrowing the FWHM improves the detection resolution, enabling the measurement of very small strains. As an example it is worth citing a special type of grating, the πFBG, whose spectrum reflects the discontinuity caused by a π phase shift in the central part of the grating. Thanks to the application of the πFBG in a sensor, a half width of FWHM = 10 pm can be achieved, and the use of the π phase shift region allows the effective length of the sensor to be reduced, making it particularly suitable for detecting high-frequency ultrasound [6].

Each of the quality parameters described above and marked in Fig. 4, Fig. 5 and Fig. 6, presented for reflection spectra of a Bragg structure with a constant period, can be directly related to a structure with a variable period, as shown in Fig. 7.

Uniform and chirped gratings as transducers for temperature and strain measurement

Analysis of the published research reveals great interest in sensors based on fiber Bragg gratings. Among the many applications, FBG sensors are particularly useful for measuring strain and temperature, since both strain and temperature information are encoded in the optical fiber as a wavelength shift.

An important parameter conditioning the use of a Bragg structure as a measuring transducer is its typical measuring range. In this part of the article the typical measuring range for temperature is described for the different types of Bragg structure.

Type I uniform fiber Bragg gratings are characterized by a monotonic increase in the depth of the refractive index modulation as a function of the amount of energy delivered to the fiber in the writing process. They are usually used at temperatures up to 300°C because of their durability [30].
Type II Bragg gratings are created by increasing the radiation energy during the writing process, which can lead to physical damage in the fiber core or at the core-cladding boundary. A characteristic feature of this type of grating is high temperature resistance, exceeding 800°C and in some cases reaching 1200°C. This high temperature resistance is achieved by fabricating the structures with laser pulses of femtosecond duration [39,40]. In 2002 the type IA structure was described for the first time; it is characterized by a higher temperature resistance than type I gratings, reaching the limit of 500°C [5,32].

Another group of FBG structures are type IIA gratings. Their temperature resistance is around 500°C, yet they are characterized by the highest temperature sensitivity of all grating types when strain is taken into account [19,41].

Analysis of publications from recent years reveals further types of Bragg structures. Table 1 lists the types of fiber Bragg gratings with respect to the maximum temperature at which they can be used.

The fact that both temperature and strain data are signalled by a wavelength shift in the optical fiber forces the construction of a system that can measure the strain of the examined element while distinguishing the influence of the ambient temperature from that of the tested element. The Bragg wavelength of the FBG sensor depends mainly on strain, but it also varies with temperature: at a temperature change of 1°C, the measured strain typically has an error of 11 με [45].

Tab. 1. Types of fiber optic Bragg gratings and their thermal durability [5]: type I: up to about 300°C; type IA: up to about 500°C; type IIA: about 500°C; type II: above 800°C (in some cases up to 1200°C).

To achieve higher accuracy, FBG sensors require temperature compensation, and many methods of strain measurement with temperature compensation have been developed. One of the first methods, published in 1995, uses a pre-strained grating in a package containing two materials with different coefficients of thermal expansion. As the temperature rises, the strain is gradually released, compensating the temperature dependence of the Bragg wavelength. A fiber Bragg grating mounted in such a package, 50 mm long and 5 mm in diameter, showed a total variation of the Bragg wavelength of 0.07 nm over a temperature range of 100°C, compared with 0.92 nm for an uncompensated grating, as shown in Figure 8 [18].

Fig. 8. Bragg wavelength as a function of temperature, with values marked for a Bragg grating without compensation and with temperature compensation [47]

The strong interest in fiber-optic temperature and strain sensors translates into published works presenting many methods and applications of these periodic structures. Results from publications that outline the accuracy and measurement ranges of given structures are presented below.

One of the first publications is an article from 1996 describing a technique for simultaneous, independent temperature and strain measurement using fiber Bragg sensors. Two structures with closely spaced centre wavelengths are written on both sides of a splice between two fibers of different diameters (Corning PMF-38, 80 μm, and Spectran FS SMC-A0780B, 125 μm). The two gratings exhibit similar temperature sensitivity but different strain responses to applied loads. The maximum error is ±17 με and ±1°C for a measuring range of 2,500 με and 120°C. The test results are presented in the graphs of Fig. 9 and Fig. 10 [18].
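Numerically, the dual-grating scheme amounts to solving a 2×2 linear system relating the two measured Bragg wavelength shifts to strain and temperature through a sensitivity matrix. The R sketch below uses the strain and temperature gradients quoted for this pair of gratings in the next paragraph; the measured shifts themselves are invented for illustration:

    # Illustrative sketch: simultaneous strain/temperature recovery from two FBGs.
    # Rows: gratings; columns: strain sensitivity [pm/ue], temp sensitivity [pm/K].
    K <- matrix(c(0.42, 7.0,   # Corning-fiber grating
                  0.81, 5.7),  # Spectran-fiber grating
                nrow = 2, byrow = TRUE)

    d_lambda <- c(550, 700)    # measured Bragg shifts [pm] (invented values)

    x <- solve(K, d_lambda)    # solve d_lambda = K %*% c(strain, dT)
    c(strain_ue = x[1], dT_K = x[2])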
Analyzing the graphs in Fig. 9 and Fig. 10, we can see that the strain responses of the two periodic structures differ from each other to a large extent. The gradient of the strain plot is 0.42±5× pm/μstrain for the Bragg grating written on the Corning fiber and 0.81±7.8× pm/μstrain for the Bragg grating written on the Spectran fiber [18].

Fig. 10. Graph showing the temperature response of a pair of periodic structures [18]

In the case of the temperature response, the responses of the individual periodic structures are very similar to each other. The gradient of the temperature response is 7.0±0.1 pm/°C for the Bragg grating written on the Corning fiber and 5.7±0.1 pm/°C for the Bragg grating written on the Spectran fiber [18].

Temperature and strain can also be measured with a single fiber Bragg grating. A publication from 2010 presents a method for the simultaneous measurement of both quantities using a single periodic structure written on a tapered fiber. Writing a uniform periodic structure on a tapered optical fiber yields a non-uniform (chirped) structure once strain is applied. The existence of the strain-induced chirp allowed the authors to collect information encoded not only in the Bragg wavelength but also in the FWHM of the grating. An important feature of the periodic structure, the insensitivity of the FWHM to temperature changes, allowed temperature and strain to be measured with uncertainties of ±1.9°C and 15.3 με respectively. The measurement was carried out at a constant strain of 1500 με with the temperature varied from 20 to 65°C, and at a constant temperature of 40°C with varying strain [31]. Figure 11 shows the response of the sensor to different values of the applied strain, revealing the linear dependence of the FWHM on the applied strain. The peak wavelength response is also linear with applied strain [31]. Figure 12 shows the temperature response of the sensor head. It can be seen that as the temperature rises the peak wavelength increases linearly, while the FWHM remains roughly constant at around 0.13 nm [31].

Conclusion

The literature examples cited in this paper clearly demonstrate the highly accurate use of fiber-optic periodic structures in the measurement of non-electrical quantities such as temperature and strain. Presenting fiber Bragg gratings from the metrological standpoint made it possible to outline the measurement ranges, to present the features affecting measurement accuracy, and to describe the various types of periodic structures themselves, i.e. gratings with constant and variable period. Taking into account the research results drawn from the literature, it is clearly visible that periodic structures perform well as measuring transducers of temperature and strain. Thanks to the use of fiber-optic periodic structures in various configurations, the described applications can also be adapted to other scientific fields.

Fig. 9. Graph showing the strain response of a pair of periodic structures [18]
Fig. 11. Graph showing the response of the sensor to different strain values [31]
Fig. 12. Graph showing the response of the sensor to different temperature values [31]
The Effect of Cluster Size Variability on Statistical Power in Cluster-Randomized Trials

The frequency of cluster-randomized trials (CRTs) in peer-reviewed literature has increased exponentially over the past two decades. CRTs are a valuable tool for studying interventions that cannot be effectively implemented or randomized at the individual level. However, some aspects of the design and analysis of data from CRTs are more complex than those for individually randomized controlled trials. One of the key components to designing a successful CRT is calculating the proper sample size (i.e. number of clusters) needed to attain an acceptable level of statistical power. In order to do this, a researcher must make assumptions about the value of several variables, including a fixed mean cluster size. In practice, cluster size can often vary dramatically. Few studies account for the effect of cluster size variation when assessing the statistical power for a given trial. We conducted a simulation study to investigate how the statistical power of CRTs changes with variable cluster sizes. In general, we observed that increases in cluster size variability lead to a decrease in power.

Introduction

The cluster-randomized trial (CRT) is a common study design in public health research, in which interventions are administered to groups rather than to individuals. In situations where dividing a group of individuals into treatment and controls is unethical or impossible, a CRT design retains many of the strengths of an individually randomized study design [1]. By comparing the outcomes of small populations (clusters), CRTs can observe the impacts of interventions on a community as a whole. The number of published articles utilizing CRTs has increased every year since 1997 (see Fig. 1). Due to its rising popularity, this relatively complex study design is facing greater scrutiny from the scientific community. The Consolidated Standards of Reporting Trials (CONSORT) Group issued guidelines for conducting CRTs in 2004 [2], with an update published in 2012 [3]. One important component of CRT design is the sample size calculation, in which researchers must find the correct number of clusters to achieve sufficient statistical power. In the CONSORT 2010 statement extension to cluster randomised trials, there was an added focus on sample size reporting, which included a note about accounting for varying cluster sizes. This has been a subject of interest since Donner and Klar's seminal paper Randomization by cluster: sample size requirements and analysis in 1981 [4]. As detailed in Unequal cluster sizes for trials in English and Welsh general practice: implications for sample size calculations [5], Kerry and Bland derived a formula to calculate the design effect of a CRT based on the number of participants in each cluster. The design effect is the ratio of the sample size required for a CRT over that of an individually randomized trial with the same power. Eldridge et al. built upon this formula so that the design effect can be calculated knowing only the mean cluster size and the coefficient of cluster size variation [6,7]. Another approach to accounting for cluster size variation is to use a measure of relative efficiency [8][9][10][11]. Relative efficiency is mathematically derived and easy to implement once computed. However, there are several ways to calculate relative efficiency, all of which use complicated methods that may create obstacles to their use in practice.
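For reference, the mean-and-cv adjustment described above can be written in a few lines of R. The sketch below reflects our reading of the Eldridge et al. formula [6,7], in which the usual design effect 1 + (m̄ − 1)·ICC is inflated by replacing m̄ with (cv² + 1)·m̄; the example inputs are assumed:

    # Illustrative sketch: design effect with variable cluster sizes,
    # following our reading of the Eldridge et al. adjustment [6,7].
    design_effect <- function(m_bar, icc, cv = 0) {
      1 + ((cv^2 + 1) * m_bar - 1) * icc
    }
    design_effect(m_bar = 20, icc = 0.05)            # equal cluster sizes
    design_effect(m_bar = 20, icc = 0.05, cv = 0.8)  # variable cluster sizes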
We seek to simplify the process of estimating the needed sample size for a CRT in the presence of variable cluster sizes. We hope this effort will encourage continued improvement in the efficient implementation of CRTs within the medical and social science communities. Using the statistical programming language R v3.02 [12] and the package clusterPower [13,14], we designed a controlled simulation experiment in which we simulated hundreds of thousands of hypothetical CRTs. Using the results from these simulations, we examined the effect of variability in cluster sizes on the statistical power of CRTs and developed simple and concrete quantitative guidelines for researchers who design CRTs with high variability in cluster sizes. This manuscript file was typeset reproducibly using the R package knitr [15,16], which we used to call files of pre-processed results from the simulation studies. All code and data for this project is available at https://bitbucket.org/nickreich/clustersizepaper.

Project Overview

We developed a framework that allowed us to measure the impact that variability in cluster size has on the power of a CRT. This framework was built on the foundation of the clusterPower package in R [13]. Statistical power is defined as the probability of rejecting the null hypothesis given that the null hypothesis is not true. In brief, to estimate statistical power for a given CRT design, we stochastically simulated data (i.e. results from a trial) from a hypothetical CRT design with a known, non-zero intervention effect size. Among all simulated trials, the percentage of time the null hypothesis is rejected is therefore an accurate estimate of the power for this design. We have leveraged this simple simulation framework to answer a complex question about CRT study design: how does variability in cluster size impact the power of a CRT?

We designed a simulation study to systematically gather data on how cluster size variability impacts power and sample size requirements for a CRT. To provide a focused study of the effect of cluster size variability on CRT sample size, we limited the current investigation to a common CRT design. Namely, we focused on equal-armed CRTs that had a continuous outcome measure (as opposed to binary or count data) and assumed that the design did not incorporate a "controlled comparison" (e.g. a crossover, baseline comparison, or matching). We used a data generating model similar to that given by Reich et al. [13]:

Y_jk = Δ·X_k + η_k + ε_jk,

where Y_jk is the observed outcome for person j (j = 1, . . ., J_k) in cluster k (k = 1, . . ., K; K = total number of clusters), X_k is a binary variable indicating whether cluster k was assigned to the treatment (X_k = 1) or control (X_k = 0) group, Δ is the non-standardized treatment effect size, η_k is the effect due to variation between clusters, and ε_jk is the effect due to variation between the individuals in each cluster.

Simulation study

The following list provides a brief description of each step in our simulation study. We provide a more detailed explanation of each step in the next section.

1. Defined the parameter sets (θ_i). Each θ_i is a vector of variables used to calculate the statistical power (P) of a theoretical CRT. The following components make up θ_i and are also listed (with the values we assumed for each) in Table 1:
i. Type I error (α, fixed at 0.05 for all experiments)
ii. Mean cluster size (μ)
iii. Intraclass correlation coefficient (ICC)
iv. Between-cluster variation (BCV)
v. Number of clusters needed to reach 80% power with fixed cluster sizes (C_80)
vi.
vi. Effect size (Δ, calibrated on the combination of the above variables)

Using parameters (i-v) and a type II error (β) of 0.2 in Murray's effect size equation (defined in Methods), we calculated the minimum effect size required to find a significant result in a properly powered CRT, Δ. By jointly varying parameters (i-v) over realistic ranges of values, we created carefully calibrated hypothetical CRT settings. Then, when we simulated data from CRTs in these settings to make power estimates with clusterPower, we allowed the actual number of clusters in the study (C_A) to be different from C_80.

2. Estimated the statistical power when cluster sizes were fixed at μ. We refer to these estimates as our fixed cluster size power estimates, P̂^F_θi(C_A). To do this, we simulated hypothesis tests on 5000 unique datasets for each (θ_i, C_A) pair.

3. Estimated the statistical power when cluster sizes have variability, defined by several coefficient of variation (cv) levels. We refer to these estimates as variable cluster size power estimates, P̂^cv_θi(C_A). To do this, we generated S = 2000 variable cluster size sets for each (θ_i, C_A, cv) combination and ran a simulation on each variable cluster size set, which output a dataset. Hypothesis testing was performed on each dataset and produced a P̂^cv estimate.

4. Calculated the required number of clusters for each combination of θ_i and cv. The vectors P̂^F_θi and P̂^cv_θi form power curves for each θ_i along the range of C_A. Using the values just above and below P = 0.8, we interpolated the point where C_A is equal to the number of clusters required to achieve a statistical power of 80% for fixed-size cluster sets (Ĉ^F_θi, which approximates C_80) and variable cluster sets (Ĉ^cv_θi).

5. Using our results, we observed the effect of cv on the required number of clusters (Ĉ^cv_θi).

Step 1: Parameter Selection

To test the impact of cluster size variability on P, we simulated from 420 parameter sets (θ_i). This is the number of unique combinations of μ values (5), ICC values (7), BCV values (3), and C_80 values (4). Each θ_i was simulated across a range of C_A values (6 to 9 of them, depending on C_80) to create a total of 3,255 data points. For each θ_i parameter set, we calculated a Δ value based on α, β, μ, ICC, BCV, and C_80 according to the equations of Murray [17].

Parameter selection required balancing the desire for well-spaced values across a meaningful range for each parameter with the computational burden of the simulations. (Over 68 million simulations took approximately two weeks running in parallel in 21 threads on a 12-core Mac Pro desktop.) The values for α and β were set at the standard type I and type II error rates of 0.05 and 0.2, respectively. The five values of the mean cluster size, μ, were distributed between 20 and 125. We limited the maximum μ to 125 because simulations with large cluster sizes take considerably more computational time. The ICC quantities were seven evenly-spaced, rounded log-linear values between 0.001 and 0.2. The three BCV values were equally spread out on a log scale, spanning from 0.01 to 1. The four C_80 values were chosen based on the range of popular numbers of clusters for CRTs. A CRT with 10 clusters would constitute a small trial and a CRT with 60 clusters is decidedly larger. We set the maximum C_80 value at 60 because larger quantities of clusters take significantly longer to simulate. The C_A values ensured that the power curves for each CRT ranged from near 0 to 1.
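The core of the power-by-simulation approach can be sketched in base R. This is illustrative only: the paper used the clusterPower package, whereas the sketch below analyzes each simulated trial with a simple t-test on cluster means, and all inputs are example values:

    # Illustrative sketch: empirical power of an equal-armed CRT with a continuous
    # outcome, simulated under the model Y_jk = Delta*X_k + eta_k + eps_jk.
    simulate_power <- function(n_clusters, mu_size, icc, delta,
                               sigma2 = 1, n_sims = 1000, alpha = 0.05) {
      sigma2_b <- icc * sigma2         # between-cluster variance
      sigma2_w <- (1 - icc) * sigma2   # within-cluster variance
      arm <- rep(0:1, each = n_clusters / 2)
      rejections <- replicate(n_sims, {
        cluster_means <- sapply(seq_along(arm), function(k) {
          eta_k <- rnorm(1, 0, sqrt(sigma2_b))
          mean(delta * arm[k] + eta_k + rnorm(mu_size, 0, sqrt(sigma2_w)))
        })
        t.test(cluster_means[arm == 1], cluster_means[arm == 0])$p.value < alpha
      })
      mean(rejections)                 # proportion of rejected nulls = power
    }
    simulate_power(n_clusters = 20, mu_size = 50, icc = 0.05, delta = 0.3)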
Step 2: Generate fixed cluster size power estimates

Prior to investigating the effect of variable cluster sizes on statistical power, we calibrated the empirical estimates of power from the clusterPower package against the formula-based estimates of power (Equation 2). Since existing CRT formulas assume equal cluster sizes, we ran one 5000-simulation fixed cluster size CRT for each ($\theta_i$, $C_A$) pair. This produced fixed cluster size power estimates, $\hat{P}^F_{\theta_i}(C_A)$, where every cluster size in each CRT was fixed at $\mu$. In Step 4, these estimates are used to compare the difference between the estimated number of clusters needed based on formulaic and simulated power calculations.

Step 3: Generate variable cluster size power estimates

Next, we generated variable cluster size sets: sets of $C_A$ randomly-drawn cluster sizes from a negative binomial distribution with mean $\mu$. The negative binomial distribution was chosen because cluster sizes from trials that the study investigators have worked on have shown over-dispersion similar to that of these distributions. We fixed the coefficient of variation (cv) of the negative binomial distribution to roughly match observed skewness in cluster sizes from published CRTs. We used three levels of variability in cluster sizes: low variance (cv = 0.5), mid variance (cv = 1.0), and high variance (cv = 1.5). In our high variance cluster size set, the cv was fixed at 1.5, which is about twice the size of the largest cv in other CRT papers [8-10]. This ensured that our high variance cluster size power estimates represented a near-upper bound on the number of variable-sized clusters required in practice. By holding the cv constant, the size parameter of the negative binomial distribution ($r$), and consequently the variance, are functions of $\mu$ [18]:

$r = \frac{\mu}{cv^2\mu - 1}, \qquad \sigma^2 = cv^2\mu^2$

When cv = 1.5, draws from a negative binomial distribution with a mean of 20 yield a value of zero 18% of the time, a value of fifty or greater 12% of the time, and a value of 140 or greater 1% of the time. Because a cluster cannot consist of zero participants, we set the minimum cluster size to 3. Specifically, to obtain a mean cluster size of 20, we drew from a negative binomial distribution with a mean of 17 before adding 3 to all cluster sizes. The probability mass functions that these distributions are drawn from are shown in Fig. 2. The number of participants in the variable cluster size sets fluctuated substantially. Since this has an effect on the estimated power of a CRT, many variable cluster size sets ($S = 2000$) were created for each ($\theta_i$, $C_A$, cv) combination, so that the mean cluster size across all sets would converge to $\mu$. We ran one clusterPower hypothesis test simulation for every variable cluster size set. This generated a collection of binary outcomes (1 if the null hypothesis was correctly rejected, 0 if not), $P^{cv}_{\theta_i,s}(C_A)$ for $s = 1, \ldots, S$. These outcomes were averaged to create one point:

$\hat{P}^{cv}_{\theta_i}(C_A) = \frac{1}{S}\sum_{s=1}^{S} P^{cv}_{\theta_i,s}(C_A)$
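A small sketch of this generator follows, under the reconstruction of $r$ above. Whether the cv is fixed on the unshifted draw or on the shifted sizes is an implementation detail not specified in the text; here it is fixed on the unshifted draw. The final lines numerically check the stated 18% zero-probability, which is consistent with that reconstruction.

# Sketch of the variable-cluster-size generator (assumption: the cv is
# fixed on the unshifted negative binomial draw, before adding 3).
draw_cluster_sizes <- function(C_A, mu, cv) {
  mu_nb <- mu - 3                         # draw with mean mu - 3 ...
  r     <- mu_nb / (cv^2 * mu_nb - 1)     # size parameter implied by fixed cv
  rnbinom(C_A, size = r, mu = mu_nb) + 3  # ... then shift so the minimum size is 3
}

# Check of the zero-mass claim: an unshifted negative binomial with
# mean 20 and cv = 1.5 yields zero about 18% of the time.
r20 <- 20 / (1.5^2 * 20 - 1)
dnbinom(0, size = r20, mu = 20)           # approximately 0.18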
Step 4: Estimate number of clusters needed with and without variability

Let $\hat{P}$ denote any $\hat{P}^F_{\theta_i}$ or $\hat{P}^{cv}_{\theta_i}$, a vector of power values for each of the $\theta_i$ across $C_A$, with or without variance. This vector creates a power curve that allows us to estimate the number of clusters required for that parameter set at a certain level of variability. For each $\hat{P}$, we found the $\hat{P}(C_A)$ and $C_A$ at the points just below and above $P = 0.8$. $\hat{P}_{0.8-}$ and $C_{0.8-}$ are the power and number of clusters below our point of interest; $\hat{P}_{0.8+}$ and $C_{0.8+}$ are the power and number of clusters above our point of interest. $\hat{P}_{0.8-}$ and $\hat{P}_{0.8+}$ were placed in a vector and $C_{0.8-}$ and $C_{0.8+}$ into a matrix as follows:

$\begin{pmatrix} \hat{P}_{0.8-} \\ \hat{P}_{0.8+} \end{pmatrix} = \begin{pmatrix} C_{0.8-} & 1 \\ C_{0.8+} & 1 \end{pmatrix} \begin{pmatrix} \hat{a} \\ \hat{b} \end{pmatrix}$

In solving this equation, we find the slope ($\hat{a}$) and intercept ($\hat{b}$) of a line that passes between the points ($C_{0.8-}$, $\hat{P}_{0.8-}$) and ($C_{0.8+}$, $\hat{P}_{0.8+}$). From there, we set $P = 0.8$ to find the number of clusters required to achieve sufficient power, $\hat{C}$:

$\hat{C} = \frac{0.8 - \hat{b}}{\hat{a}}$

where $\hat{C}$ is the estimated number of required clusters to achieve a statistical power of 0.8. When using $\hat{P}^F_{\theta_i}$, $\hat{C} = \hat{C}^F_{\theta_i}$, which is a simulated estimate of $C_{80}$. For $\hat{P}^{cv}_{\theta_i}$, $\hat{C} = \hat{C}^{cv}_{\theta_i}$, which is a value for which Murray's formulas (Equation 2) cannot be used. For the sake of simpler notation, we will refer to these as $\hat{C}^F$ and $\hat{C}^{cv}$. A graphical representation of this process is shown in Fig. 3. Additionally, we defined and compared the percentage change in the number of clusters needed to achieve 80% power between those required by the Murray equation and the variable cluster size sets as:

$\hat{C}^{cv}\% = \frac{\hat{C}^{cv} - C_{80}}{C_{80}} \qquad (7)$

Step 5: Analyze effects of cluster size variance

With $\hat{C}^{cv}$, $\hat{C}^F$, and $C_{80}$, we observed the effect of cluster size variance on the required number of clusters. We compared and contrasted $\hat{C}^F$ and $C_{80}$. If the numbers were similar, then the clusterPower simulations approximate the Murray equations and $\hat{C}^{cv}$ could be used with confidence. If they were substantially different, further analysis would be required before evaluating $\hat{C}^{cv}$. $\hat{C}^{cv}$ was analyzed with respect to $C_{80}$. From this analysis, we observed the effects of cluster size variation on the required number of clusters.
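In R, this interpolation reduces to a few lines. The sketch below uses illustrative power-curve values rather than the study's outputs, and the helper name is our own.

# Sketch of Step 4: linear interpolation of a power curve at P = 0.8.
interpolate_c_hat <- function(clusters, power, target = 0.8) {
  i <- max(which(power < target))   # index of the point just below 0.8
  a <- (power[i + 1] - power[i]) / (clusters[i + 1] - clusters[i])  # slope
  b <- power[i] - a * clusters[i]                                   # intercept
  (target - b) / a                  # C-hat: clusters needed for 80% power
}

clusters <- c(40, 60, 80, 100)         # illustrative C_A grid
power    <- c(0.62, 0.79, 0.88, 0.94)  # illustrative power estimates
c_hat    <- interpolate_c_hat(clusters, power)  # about 62.2 with these values

# Equation 7: percent change relative to the formula-based C_80.
c80 <- 60
100 * (c_hat - c80) / c80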
Without cluster size variability, simulations approximate the formula

The first clusterPower simulation used each unique parameter set ($\theta_i$) across a range of sample sizes (actual number of clusters, $C_A$) to generate fixed cluster size power estimates ($\hat{P}^F = \hat{P}^F_{\theta_i}(C_A)$). These estimates were plotted to form the power curves represented by the black lines in Fig. 4. Using these results and the technique described in Methods Step 4, we calculated the number of clusters required to achieve a statistical power of 80% for fixed-size cluster sets ($\hat{C}^F$). To validate clusterPower, $\hat{C}^F$ was compared to Murray's formula-based estimate of the clusters required to reach 80% power ($C_{80}$, from Equation 2). The clusterPower simulations produced accurate, though conservative, estimates. One example of this is the result from the lower left corner of Fig. 4. In this scenario, the first five $\theta_i$ parameters (type I error ($\alpha$) = 0.05, mean cluster size ($\mu$) = 75, ICC = 0.006, BCV = 0.1, $C_{80}$ = 60) require an effect size ($\Delta$) of 0.417 when the cluster sizes are fixed. Using these parameters and $C_A = 60$, we should find that the statistical power is near 80%. The average statistical power across 5,000 simulations, $\hat{P}^F$, was 79.04%. Since this is smaller than 80%, from this point and the one at $C_A = 80$ we can interpolate to find that $\hat{C}^F = 61.74$. This is a slightly conservative result, in that the statistical power was less than one percentage point lower than expected and the required number of clusters was about two greater than expected. More than half of our simulated $\hat{C}^F$ values were closer to Murray's $C_{80}$ than in this example. In situations where $C_A = C_{80}$, $\hat{P}^F$ ranged from 75.04% to 81.44% with a median of 79.52%. The difference between $\hat{C}^F$ and $C_{80}$ ranged from -1.55 to 8.28, with a median difference of 0.51 (negative values indicating $\hat{C}^F < C_{80}$). $\hat{C}^F$ and $C_{80}$ were highly correlated (R = 0.9982). A paired t-test showed that $\hat{C}^F$ was significantly larger than $C_{80}$ and that the average difference is within one cluster (estimate: 0.81, CI: (0.69, 0.92)). While $\hat{C}^F$ did not perfectly reflect $C_{80}$, the high correlation and generally conservative estimates suggest that clusterPower can be used to simulate variable cluster size CRTs.

With cluster size variability, power decreases

The subsequent clusterPower simulations used $\theta_i$ with variable cluster size sets of $C_A$ clusters and three levels of the coefficient of variation (cv = 0.5, 1.0, 1.5) to generate variable cluster size power estimates ($\hat{P}^{cv} = \hat{P}^{cv}_{\theta_i}(C_A)$). These estimates were plotted to form the power curves shown as the blue, green, and orange lines in Fig. 4 (representing cv = 0.5, 1.0, and 1.5, respectively). By performing the same process used for calculating $\hat{C}^F$, we derived the number of clusters required to achieve a statistical power of 80% for variable cluster size sets, $\hat{C}^{cv}$. Using the lower left plot again as an example, one can see how power decreases and the required number of clusters increases with greater cluster size variation. Looking below the $\hat{P}^F$ point at (60, 0.79), we see that the power decreases as variance increases: $\hat{P}^{0.5} = 0.77$; $\hat{P}^{1.0} = 0.73$; and $\hat{P}^{1.5} = 0.69$. Looking to the right of the $\hat{C}^F$ point at (61.74, 0.8), we see that more clusters are required in the presence of variation: $\hat{C}^{0.5} = 64.55$; $\hat{C}^{1.0} = 72.6$; and $\hat{C}^{1.5} = 79.82$. This is merely one example of a consistent pattern of decreasing power as cluster size variability increased. Using paired t-tests, we observed that the difference between $\hat{C}^{0.5}$ and $C_{80}$ was significantly greater than zero (estimate: 2.19, CI: (2.05, 2.34)); that $\hat{C}^{1.0}$ was significantly greater than $\hat{C}^{0.5}$ (estimate: 4.09, CI: (3.82, 4.36)); and that $\hat{C}^{1.5}$ was significantly greater than $\hat{C}^{1.0}$ (estimate: 4.91, CI: (4.59, 5.23)). Within each cv level, there is substantial variability in statistical power and required number of clusters due to the large diversity of parameter sets. We can observe this by looking at the percent increase in the number of clusters required by variable cluster size sets over those from the Murray equation ($\hat{C}^{cv}\%$, from Equation 7). The range for $\hat{C}^{0.5}\%$ spans -2.03% to 30.83%, with a median of 7.47%; $\hat{C}^{1.0}\%$ covers 2.8% to 43.27%, with a median of 21.49%; and $\hat{C}^{1.5}\%$ ranges from 13.04% to 63.72%, with a median of 38.38%. In 21 cases (5%), $\hat{C}^{0.5}$ was less than both the equivalent $C_{80}$ and $\hat{C}^F$ values. However, there does not seem to be a common link between these scenarios, and thus their occurrence may be due to sampling variation from the stochastic simulations.
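The paired comparisons above are straightforward to reproduce once per-parameter-set estimates are in hand. The sketch below uses simulated stand-in vectors, since the study's actual estimates are not reproduced here; the key point is that each $\theta_i$ contributes one paired observation per cv level.

# Sketch of the paired comparisons: each of the 420 parameter sets
# yields one C-hat per cv level, so estimates are paired within theta_i.
set.seed(1)
c80      <- runif(420, 10, 60)                  # stand-in C_80 values
c_hat_05 <- c80 * (1 + rnorm(420, 0.07, 0.05))  # stand-in C-hat at cv = 0.5
c_hat_10 <- c80 * (1 + rnorm(420, 0.21, 0.08))  # stand-in C-hat at cv = 1.0

t.test(c_hat_05, c80,      paired = TRUE)  # C-hat(0.5) vs C_80
t.test(c_hat_10, c_hat_05, paired = TRUE)  # C-hat(1.0) vs C-hat(0.5)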
Discussion

This study demonstrates that variability in cluster sizes reduces the power of a cluster-randomized trial when compared to a trial with no variation in cluster sizes. We observed between a 2% decrease and a 64% increase in the number of clusters needed across all scenarios studied. As the variability in cluster sizes increases, additional clusters are needed to maintain 80% power. This phenomenon has been described before [4,6,8,10]. Our simulation study has confirmed these results and allowed us to quantify the expected loss of efficiency across many different possible study design scenarios. These results may only hold for equal-armed cluster-randomized trials that have a continuous outcome measure and no controlled comparison.

A key feature of this paper is demonstrating the utility of the clusterPower package for R as a tool for conducting controlled simulation experiments to answer questions about CRT design. In this paper, we focused on estimating power in CRT designs with varying cluster sizes, but we have provided a template of a simulation study that could be used to answer many other types of questions. For example, different designs or analysis methods could be compared to determine the most efficient strategies for implementing CRTs. Our simulations do not fix the overall sample size of a given study with variable cluster sizes, so the estimated powers within a given parameter set reflect, to some extent, different total sample sizes of the studies. However, these differences are averaged out across the many simulations. In general, this setting mimics situations where the cluster sizes cannot be controlled by investigators. This might be the case when, for example, health care workers at different-sized clinics will be enrolled in a study, or students in different-sized classes will be enrolled in a study. In these situations, the investigator will not know the exact number of participants who will be enrolled in the study, just how many clinics and the average number per clinic.

[Figure 4 caption: The CRT power curves for two different parameter sets over four levels of cluster size variance. When $C_A = 60$, both fixed cluster size power estimates ($\hat{P}^F_{\theta_i}$, the solid black lines) should equal 0.8. Looking below the point (60, 0.8), one observes that power is lost as the cluster size variance increases in both scenarios. Looking to the right of the point, one sees that more clusters are required to attain a statistical power of 0.8 with increased variability.]

Another difficulty that we did not confront in this paper is the effect of purposefully distributing different clusters into the treatment and control groups. All group assignment was done in a random fashion. The impact of putting large clusters into one group and small clusters into the other is still unknown. Using some controlled comparison technique (such as matching on cluster size) may increase the efficiency of studies, but these methods were not examined in the present study. Due to the experimental design, we cannot observe the impact that changing any one of the CRT parameters may have on the required number of clusters, because all of them are used to calibrate the effect size. As the use of cluster-randomized trials continues to expand in many scientific disciplines, it is vital that we continue to build our knowledge about how to design these trials efficiently. The results presented in this paper demonstrate the value of a new method for the efficient design of cluster-randomized trials in the presence of cluster size variability.
Change detection analyses using simulated and actual ExoMars TGO-CaSSIS images: A case study based on past and present Gasa Crater gully activity Vidhya Ganesh Rangarajan, Livio L. Tornabene, Gordon R. Osinski, Frank P. Seelos, Susan J. Conway, Manish R. Patel, Nicolas Thomas, Gabriele Cremonese, Maurizio Pajola, Giovanni Munaretto, Alice Lucchetti, and the CaSSIS Team Institute for Earth and Space Exploration/ Department of Earth Sciences, University of Western Ontario, London, ON, Canada<EMAIL_ADDRESS>Applied Physics Laboratory, Johns Hopkins University, Laurel, MD, USA CNRS Laboratoire de Planétologie et Géodynamique de Nantes, Université de Nantes, 2 rue de la Houssiniére, 44322, Nantes, France School of Physical Sciences, STEM, The Open University, Milton Keynes, UK Physikalisches Institut, University of Bern, Sidlerstr. 5, 3012 Bern, Switzerland INAF-Osservatorio Astronomico di Padova, Padova, Italy Department of Physics and Astronomy, University of Padova, Padova, Italy

Introduction: The Martian surface hosts a variety of active surface processes [1-3] whose regular monitoring is key to providing us insights into past and present-day surface, geologic and climatic conditions [4]. Most change detection studies on Mars utilize time-series image acquisitions from the Mars Reconnaissance Orbiter (MRO) High Resolution Imaging Science Experiment (HiRISE; 25-50 cm/px) [5] and the Context Camera (CTX; 5-6 m/px) [6]. However, the relatively narrow HiRISE colour swath (~20% of the image swath) results in a lower probability of observing surface changes with multiple wavelengths. The Colour and Stereo Surface Imaging System (CaSSIS) onboard the ExoMars Trace Gas Orbiter (TGO) [7] permits 4-band VNIR colour coverage at 4.6 m/px and an image swath >6 km. Furthermore, TGO/CaSSIS is able to observe Mars at multiple times of day, permitting detection and monitoring of diurnal processes. While TGO has only been in operation for a short period of time (MY34-35), the development of simulated CaSSIS images from MRO datasets [8] permits the monitoring of long-term surface changes with CaSSIS from as early as MY28 to the present. This work assesses the change detection capabilities of CaSSIS by using a combination of simulated and actual CaSSIS images of one of the most active Martian gully sites to date: Gasa Crater [9-12].

Methods: We initially restricted the simulated CaSSIS images for this study to pre-2012 acquisitions, as our ability to fully photometrically correct CRISM targeted observations for along-track variations in emission/phase is compromised by the loss of the full gimbal range of CRISM in late 2012 [8]. Three coordinated CRISM/CTX pairs were selected for production into simulated CaSSIS cubes based on a combination of favourable geometries, coverage, estimated atmospheric dust opacities and notable changes. Simulated CaSSIS images were generated using the procedures in [8], where spectrally and spatially resampled CaSSIS-compatible CRISM and CTX products are combined into a rigorous fully-simulated CaSSIS image using a Gram-Schmidt spectral pansharpening algorithm, which retains I/F information and minimises colour/spectral distortions [13]. To reduce atmospheric contributions, a dark-object-subtraction technique [14] was applied to both simulated and actual images. All images, including the first actual CaSSIS image acquired at Ls 350, MY34, were overlain and compared to one another to identify visible colour and/or morphologic changes. Notable changes were then compared with previously documented activity.
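As an illustration of the dark-object-subtraction step, the following is a minimal R sketch, not the pipeline of [14]. It assumes the image is held as a rows x cols x bands numeric array of I/F values, and it takes a low per-band quantile (rather than the strict minimum, a choice made here for robustness to bad pixels) as the additive atmospheric contribution.

# Hedged sketch of dark-object subtraction (DOS); `img` is assumed to be
# a rows x cols x bands array of I/F values.
dark_object_subtract <- function(img, dark_quantile = 1e-4) {
  for (b in seq_len(dim(img)[3])) {
    band <- img[, , b]
    # the darkest pixels approximate the additive atmospheric (path) signal
    dark <- quantile(band, probs = dark_quantile, na.rm = TRUE)
    img[, , b] <- pmax(band - dark, 0)   # subtract and clamp negatives to zero
  }
  img
}

# Example with a synthetic 4-band cube (CaSSIS has four VNIR bands):
cube <- array(runif(100 * 100 * 4, 0.02, 0.3), dim = c(100, 100, 4))
corrected <- dark_object_subtract(cube)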
Results and Discussion: We observe 28 possible changes between the simulated and actual CaSSIS image cubes spanning MY28 to MY34 (Fig. 1). Of these, 20 are previously undocumented changes, including 8 putative new changes and 12 fading flows (black arrows in Fig. 1). All new/previously unrecognized changes are currently under active investigation with associated HiRISE coverage to verify that the observed physical changes are not a manifestation of variable illumination conditions. Prominent changes previously observed between MY28 and MY30 by [9,10] are all on the northern and north-eastern crater walls (red arrows in Figs. 1a-c). While one prominent physical change (orange arrow in Fig. 1c) was previously identified with a simulated CaSSIS image by [8], we note that five meter-scale physical changes noted by [9,10] are unresolved in the CaSSIS products. Conversely, some changes evident in CaSSIS colour data are not readily visible with HiRISE due to its lack of colour coverage over these areas. One prominent example includes a bright-bluish deposit in the eastern part of the crater (Figs. 1b-c). These deposits have an NIR signature that suggests they are possibly ferrous-bearing materials sourced from the gully alcoves [8,12,15]. The most notable recent putative change, based on our first CaSSIS acquisition of Gasa (Ls 350, MY34), is a bright-bluish deposit on the eastern crater wall that spills partially onto the crater floor (Figs. 1d, 2). Although previous HiRISE acquisitions between MY31 and MY34 seem to show possible morphological changes, the lack of HiRISE colour coverage over this deposit makes it difficult to verify the activity. A new, optimally-positioned HiRISE acquisition later this year will enable us to verify this surface change (if it has not since faded).

Conclusions: This study demonstrates how both simulated and actual CaSSIS cubes are useful for detecting both previously documented and potentially new gully activity at Gasa Crater. While CaSSIS may not capture all the small meter-scale physical changes that HiRISE does, it offers a much-improved colour-change detection capability over HiRISE. However, continuing to monitor with both instruments is pivotal, as CaSSIS detections of prominent colour changes can be used to reposition HiRISE to better target colour coverage to validate and characterise meter-scale surface changes. Despite the anticipated photometric complications that post-2012 CRISM targeted observations present, future work will also include an assessment of simulated CaSSIS products generated from post-2012 CRISM and CTX coordinated data to assess the identification of both new and previously documented changes.
Widespread Natural Occurrence of Hydroxyurea in Animals

Here we report the widespread natural occurrence of a known antibiotic and antineoplastic compound, hydroxyurea, in animals from many taxonomic groups. Hydroxyurea occurs in all the organisms we have examined, including invertebrates (molluscs and crustaceans), fishes from several major groups, amphibians and mammals. The species with the highest concentrations was an elasmobranch (sharks, skates and rays), the little skate Leucoraja erinacea, with levels up to 250 μM, high enough to have antiviral, antimicrobial and antineoplastic effects based on in vitro studies. Embryos of L. erinacea showed increasing levels of hydroxyurea with development, indicating the capacity for hydroxyurea synthesis. Certain tissues of other organisms (e.g. skin of the frog (64 μM), intestine of the lobster (138 μM), gills of the surf clam (100 μM)) had levels high enough to have antiviral effects based on in vitro studies. Hydroxyurea is widely used clinically in the treatment of certain human cancers, sickle cell anemia, psoriasis and myeloproliferative diseases, and has been investigated as a potential treatment of HIV infection; its presence at high levels in tissues of elasmobranchs and other organisms suggests a novel mechanism for fighting disease that may explain the disease resistance of some groups. In light of the known production of nitric oxide from exogenously applied hydroxyurea, endogenous hydroxyurea may play a hitherto unknown role in nitric oxide dynamics.

Introduction

Hydroxyurea is a remarkable compound that has been known to science since 1869, when it was first synthesized [1]. Various studies show it has antiviral, antibacterial and antineoplastic properties [2]. Its mechanism of action involves inhibition of ribonucleotide reductase (EC 1.17.4.1), which inhibits DNA synthesis [3] in a variety of organisms. It is, or has been, used in the treatment of a variety of neoplastic diseases, sickle cell anemia, psoriasis, myeloproliferative diseases and infectious diseases such as HIV [2]. It is listed as an "essential medicine" by the World Health Organization [4]. Hydroxyurea, however, is virtually unknown in nature, with a record of its presence in the bacterium Streptomyces garyphalus as an intermediate in cycloserine synthesis [5] and a report in human plasma at levels close to the limits of detection (2.6 μM) [6]. We examined the levels of hydroxyurea in tissues of representatives of invertebrate and vertebrate groups.

Materials and Methods

Animals

Animals collected in Passamaquoddy Bay, New Brunswick, Canada were collected under Department of Fisheries and Oceans Canada permit number 323401. Euthanasia procedures for this specific study were approved by University of Guelph Animal Care Committee Protocol number 11R014. Surf clams (Spisula solidissima) were collected at low tide at Bar Road, St. Andrews, New Brunswick, Canada. Lobsters (Homarus americanus) were purchased from a local (Guelph, Ontario, Canada) seafood retailer. Hagfish (Eptatretus stouti) tissues were donated by D. Fudge, Department of Integrative Biology, University of Guelph. Little skates (L. erinacea Mitchill 1825) of either sex were collected by otter trawl in Passamaquoddy Bay (New Brunswick, Canada) before transport to holding facilities in the Hagen Aqualab at the University of Guelph (Guelph, Ontario), where they were maintained for several months to several years. Skate eggs were obtained from this colony.
African lungfish (Protopterus dolloi) were held and sampled as previously described [7]. Adult rainbow trout (Oncorhynchus mykiss Walbaum 1792) of either sex were purchased from a local fish farm (Belleville, Ontario) and transported to holding facilities at the University of Guelph. Trout were held as previously described [8,9]. Frog (Lithobates pipiens) tissues were donated by P. Smith, Department of Integrative Biology, University of Guelph. Sheep (Ovis aries) tissues were obtained from a local slaughterhouse (Guelph, Ontario, Canada).

Sampling

Fish were euthanized by cervical section. Tissues were rapidly excised, frozen in liquid nitrogen and stored at -80°C until used. Blood was drawn by cardiac (skates) or caudal (other fish) puncture using heparinized syringes. Erythrocytes were separated from plasma by centrifuging blood at 2,430 g for 10 minutes at 4°C. Sheep tissues were collected from a federally regulated abattoir at the University of Guelph.

Preparation of tissues for use in hydroxyurea and urea assays

Tissues were homogenized in a small volume of ddH2O using a Polytron PT1200 homogenizer set at high speed (25,000 rpm) for three 10-second bursts, with a cooling period of 30 seconds between each burst. Homogenized samples were then spun at 9,700 g in a Sorvall SA-600 rotor at 4°C for 10 minutes to remove cellular debris. The resulting supernatants and diluted plasma samples were collected and deproteinized with 60% perchloric acid (PCA) to a final concentration of 0.5 M PCA. Acidified samples were then spun at 22,000 g for 20 minutes in a Sorvall SA-600 rotor at 4°C. The supernatants were collected for use in hydroxyurea and urea assays and the pellets discarded.

Measurement of hydroxyurea and urea in biological samples

Determination of hydroxyurea content in deproteinized plasma and tissue samples followed the colorimetric assay of Fabricius and Rajewsky [10]. Absorbance of hydroxyurea samples was measured at 540 nm using a Cary 300 UV/Vis spectrophotometer (Agilent Technologies). Urea was measured according to the protocol originally described by Rahmatullah and Boyde [11] at 525 nm. In our study, hydroxyurea was measured chemically by the method of Fabricius and Rajewsky [10] with analyte addition. Its identity was confirmed by gas chromatography-mass spectrometry using the method of Scott et al. [12] in plasma and liver samples from L. erinacea (Fig 1).

Gas Chromatography-Mass Spectrometry (GC/MS)

Samples were derivatized as detailed in Scott et al. [12]. Once derivatized, tubes were cooled to room temperature and the contents transferred to autosampler vials containing tapered inserts before being placed in the autoinjector of the GC/MS and run. Injections of 1 µl were used for GC/MS analysis. GC/MS operating conditions were adapted from those detailed in Table 26.3 of Scott et al. [12]. The GC/MS was operated in selected-ion mode after electron impact fragmentation, selective for ion m/z 277 (hydroxyurea tri-TMS).

Statistics

ANOVA with a Tukey post hoc test was conducted to identify significant differences (P < 0.05) in hydroxyurea or urea concentrations between tissues.
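For readers unfamiliar with this analysis, a minimal R sketch follows. The data frame, column names and concentration values are illustrative stand-ins (loosely echoing the reported tissue means), not the study's data.

# Sketch of the statistical analysis: one-way ANOVA of hydroxyurea
# concentration by tissue, followed by Tukey's HSD at P < 0.05.
set.seed(42)
dat <- data.frame(
  tissue = rep(c("plasma", "liver", "spleen", "spiral_valve"), each = 6),
  hu_uM  = c(rnorm(6, 87, 10),  rnorm(6, 40, 8),
             rnorm(6, 90, 15),  rnorm(6, 250, 30))  # illustrative values only
)

fit <- aov(hu_uM ~ tissue, data = dat)
summary(fit)                        # overall F-test across tissues
TukeyHSD(fit, conf.level = 0.95)    # pairwise comparisons between tissues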
Results and Discussion

We initially found tissue-specific accumulation of hydroxyurea in the elasmobranch L. erinacea, with values up to 250 μM in the spiral valve (intestine) (Fig 2a). The extensive literature on the antibiotic and antineoplastic effects of hydroxyurea leads to the obvious conclusion that its biological role in animals is as part of the innate immune system to combat viral and other infections. The nominal concentrations we report for the little skate are in the range of concentrations causing 50% inhibition (ED50) of a variety of processes, including DNA synthesis, ribonucleotide reductase activity in viruses and growth of some cell types (Table 1). The use of the values presented in Table 1 in comparison to the values we report here has several caveats that must be considered. Firstly, the times used for determination of the ED50 values reported in Table 1 range from 10 minutes to several days in vitro. However, according to Haber's law, the severity of a toxic effect depends on the total exposure (i.e., exposure concentration multiplied by the duration of exposure) [13]. Maintenance of chronically high levels of hydroxyurea in vivo would thus reduce the concentration needed for a given effect. Thus, the hydroxyurea levels we report would be even more effective in vivo than the values in Table 1 would predict. Secondly, our hydroxyurea tissue concentrations are likely to be underestimates in the vertebrates we examined, since hydroxyurea reacts with hemoglobin and some would be destroyed during preparation of tissues with substantial blood supplies [14]. Concentrations in the spleen especially may be higher than measured since, as a storage site for erythrocytes, the spleen has the highest concentration of hemoglobin of any tissue. The values we report for L. erinacea are in the range that would affect some viral and bacterial processes (Fig 2a and Table 1). Generally, viral processes are more susceptible to inhibition by hydroxyurea than bacterial or mammalian cell lines (Table 1) [31]. Interestingly, our plasma concentration for L. erinacea (87 μM) corresponds to the range maintained to treat human HIV type 1 patients (10-130 μM): the range that inhibits HIV in vitro [32]. Elasmobranchs are an ancient vertebrate group with unusual physiological and biochemical characteristics [33]. They are the earliest known vertebrate group to have an adaptive immune system using antibodies. Among the features of their biology that has attracted public interest is their anecdotal resistance to disease, especially cancer. There is little hard science to validate such claims, but the available literature records few viral and bacterial diseases in this group in spite of considerable interest [34]. Among vertebrates, the incidence of neoplasia is lowest in elasmobranchs [35,36]. Reports of the unusual occurrence of bacteria in plasma [37] and tissues [38] of apparently healthy elasmobranchs may also be due to the effects of hydroxyurea in preventing bacterial growth. Although the mechanism of synthesis of hydroxyurea in vivo is not currently known, the presence of hydroxyurea in embryos from eggs of L. erinacea, and the increase in concentration as the embryo grows (Fig 1b), provides evidence of the capacity for hydroxyurea synthesis in L. erinacea. Levels of hydroxyurea in tissues of other species, including invertebrates and vertebrates, are for the most part lower than those of the little skate (Fig 3a-3g). Similar to the situation in the skate, the distribution is tissue specific. In the surf clam, S. solidissima, levels were high in the mantle (100 μM), and in the lobster, H. americanus, high levels were found in the intestine (138 μM).
In the non-elasmobranch vertebrates, levels were generally low, with the highest levels being found in the skin (64 μM) of the frog L. pipiens. In the lungfish, P. annectens, the highest levels were found in the gills (38 μM). In the trout, O. mykiss, the highest levels were found in the pyloric caeca (32 μM). In the sheep, O. aries, the highest levels were in the kidney (25 μM). If one assumes that endogenous hydroxyurea confers some defense against viral or other infection, significantly higher levels in some tissues may mean that these are the sites that most need to be defended. The tissue-specific distribution of hydroxyurea indicates that it is either synthesized locally or transported and concentrated. Its structural similarity to urea (both are polar, with low molecular weight, differing only in the presence of a hydroxyl group) suggests it could be transported by the same carriers as urea, but the tissue distributions of these two compounds in L. erinacea are very different (Fig 2a). The main organic osmolyte of marine elasmobranchs is urea, which is accumulated to levels more than one thousand times those of hydroxyurea [33] (Fig 2a). In general, tissue-specific differences in urea content are small (~20%). Hydroxyurea concentrations, on the other hand, can vary by as much as 25-fold between tissues. Thus there must be transporters that can distinguish urea and hydroxyurea, and these are known in mammals [39]. In vitro studies show hydroxyurea is far less permeant than urea in mouse erythrocytes due to the capacity of urea transporter B (UT-B) to distinguish between them [39]. Several carriers for hydroxyurea have been identified, including the organic anion transporting polypeptides (OATPs) OATP1A2, OATP1B1 and OATP1B3, the organic cation transporters (OCTs) OCTN1 and OCTN2, and urea transporters A and B [40]. Active transport of hydroxyurea by OATP1B3 has also been suggested [40]. An important consideration in understanding the impact of retaining high levels of hydroxyurea in tissues is its inhibitory effect on ribonucleotide reductase (RR), an enzyme needed by all cells for DNA synthesis. Mammalian RR is less susceptible to inhibition by hydroxyurea than viral or bacterial RR (Table 1). This would be important for inhibiting viral and bacterial replication without affecting mammalian cell growth. Hydroxyurea thus could provide a level of protection against viral and other challenges as part of the innate immune defense mechanism. Our finding of hydroxyurea in a mammal, although at levels 5-10 fold lower than in the elasmobranch, may be particularly important for understanding mammalian disease resistance.

[Figure caption: Values are means ± standard error (SE) of the mean. Values with the same letter above the bar are not significantly different from each other. Tissues of the clam were not included in the statistical analysis due to low n values. There were no differences between tissues of the Pacific hagfish or trout. Due to the low n values for the plasma, gill and eggs of the lungfish, these tissues were not included in the analysis and have no letter above the bar.]

Exogenously applied hydroxyurea has been shown to stimulate nitric oxide production in mammalian systems [41,42]. Nitric oxide synthase plays a key role in the killing of pathogenic organisms by phagocytes [43], although the mechanism is not known [44]. We suggest a role for naturally occurring hydroxyurea in phagocyte function via the following mechanism.
Nitric oxide is produced from arginine by nitric oxide synthase (NOS) via the intermediate hydroxyarginine. Arginase is known to react with hydroxyarginine in vitro to produce hydroxyurea instead of urea [45]. We propose that in vivo some hydroxyarginine is diverted to hydroxyurea synthesis by arginase and the hydroxyurea converted to NO, as depicted in Fig 4. This mechanism helps explain the paradoxical colocalization of arginase and NOS in cells such as human endothelial cells [46]. In endothelial cells, arginine metabolism is highly compartmentalized [47], and arginase is known to compete with NOS for arginine [48].

Conclusions

Our finding that hydroxyurea occurs in many animal groups at levels that could act as a defense against viral and other challenges implies: a) a new component of the innate immune system of animals that may explain the superior disease resistance of some groups; and b) a new intermediate in the pathway for NO production in animals.
Mobile Applications in Mastering Mining Engineers' Competences

Being one of the leading coal mining regions in the world, Kuzbass (Russia) demands that its regional higher educational institutions develop a range of competences in their graduates, namely mining engineers. Foreign language competence is considered to be among the key ones. The article reveals the concept of this competence and its relevance for mining engineers. We also analyze existing mobile applications from the point of view of their educational potential and present the results of an experiment conducted to assess the effectiveness of mobile applications in mastering the foreign language competence of mining engineering undergraduates. Our methods included interviews with students, classroom observations and surveys of students. The results suggest that integrating mobile applications into the educational process is likely to have a positive impact on foreign language competence and to increase students' motivation and satisfaction with foreign language learning.

Introduction

The notion of competence appeared in the second half of the 20th century, when a number of studies on the theory of management were conducted, dealing with the personal traits and characteristics of an employee that contribute to his/her efficient fulfillment of professional duties and requirements [1]. Furthermore, the term competence was introduced into education by David McClelland [2]. The 21st century higher education standards worldwide describe the results of graduates' training in terms of competences. In the USA the concept of competence is connected with personal characteristics that determine one's advanced achievements in a particular professional activity. In the UK the dominant opinion is that competence means the concordance of the results of any educational or professional activity with authorized standards. Along with European countries, Russia adheres to both behavioral and functional notions of competence [3]. According to Russian state standards of higher education, mining engineers are supposed to have both key competences, which are universal for every educated person, and professional competences specific to a particular profession. The list of key competences contains foreign language competence, defined by the European Council as "communicating in a foreign language: ability to express and interpret concepts, thoughts, feelings, facts and opinions both orally and in writing, also includes mediation skills (i.e. summarizing, paraphrasing, interpreting or translating) and intercultural understanding" [4]. The importance for mining engineers of mastering foreign language competence (mainly in English) is evident. Over the past fifty years, English has acquired the status of a lingua franca in different spheres: medicine, education, culture, business, science, industry, etc. The existence of a single universal language offers new opportunities and prospects in these areas, simplifying and increasing the speed of exchange and dissemination of information, expanding the potential audience, promoting the realization of professional purposes, etc. According to the data of Research Trends, this fact has driven growth in the ratio of articles in English to articles in national languages in some countries since the beginning of the 21st century. For example, this ratio in Russia was equal to 5:1 in the 1990s, had increased five-fold by 2011, and is still growing now [5].
In spite of the fact that such a situation, according to Joe Lo Bianco, reflects "the progressive deterioration of competence in high-level discourses", or a weakening in particular fields of knowledge (the author uses the term "domain collapse" in relation to a language losing its competitive position) [6], the current indexation of scientific articles in English in Scopus accounts for more than 80% compared with other languages. This represents over 20,000 scientific works from more than one hundred countries [7]. According to Google Scholar, among over 70,000 publications only a little more than 30% are publications not in English [8]. Such a difference in numbers is caused by the fact that scientists seek to inform a bigger audience about the results of their research. Publication in English is a priority for leading experts in mining regions around the world who are interested in presenting their achievements, advanced technologies and developments through international scientific discourse. Consequently, foreign language competence allows a mining engineer from Russia who aspires to a high level of proficiency to follow cutting-edge scientific and technological advances while sharing his or her own experience with colleagues worldwide by means of scientific publications. Although foreign language competence is recognized as essential for mining engineers, there are some problems that hinder its mastering. Some of them, being of a general character, have been disclosed in [9]. Furthermore, in debates with English-speaking experts in the same area of research (e.g. while giving presentations at international conferences and symposiums), authors from non-English-speaking countries are constantly forced to prove the relevance of their scientific research, a difficulty caused by their lack of foreign language competence. Moreover, it has to be admitted that to some degree there is an inevitable language barrier that causes difficulties in contact with the editors of English-language journals. The authors face the following responses to their articles after preliminary consideration: "the paper is unreadable"; "poor language"; "the paper needs to be fully readable"; "a lot is unclear", etc. The reason is that in some cases publications are initially prepared in the author's native language and then translated into English. As a result, possible spelling errors, language interference leading to mistakes in grammar, and overly complex sentences determine the poor readability of some papers. Besides, the authors often use machine translation, which may lead to some language embarrassment, the above-mentioned mistakes, misuse of terminology, etc. Machine translation can facilitate and accelerate work on a text, but it is still far from perfection, not always providing a positive result and constantly demanding strict control from the user.
Nevertheless, information technologies (machine translation being one of them) have proved to be useful in education. The technological level of development of communications in society has created new opportunities for information exchange in recent years. Smartphones, tablets and computers have become a part of the everyday life of an ordinary person. Electronic devices are used for access to a broad variety of resources, including educational ones. In this regard we speak of "mobile learning" or "m-learning" in the educational environment [10], which is understood as an educational activity realized by means of mobile portable devices equipped with technologies for the reception and transmission of information. Mobile learning modifies the educational process, makes it more productive, creates new ways to present material, and meets the specific demands of students. The above-mentioned problems predetermined the purpose of this research, viz. to seek opportunities for applying mobile technologies in mastering the foreign language competence of students majoring in mining engineering. We consider this form of education to be a regular and relevant one, because the tendency to increase the number of hours for students' independent work and to reduce the amount of classroom hours in higher education institutions makes methods that enhance the ability of any student to navigate the information space and build one's own foreign language competence essential. It is obvious that key changes in the field of teaching a foreign language in a higher education institution are closely connected with the integration of mobile devices with proper educational applications into the educational process. This is confirmed both by the technical readiness of students and by their desire for independent work via mobile devices, which allows them to use information technology resources intensively in foreign language learning to master their professional competence, to continue their education abroad, and to succeed in professional communication in a foreign language with colleagues from other countries after graduation from a higher education institution [11,12].

Materials and methods

A mobile application is a program installed and run on a smartphone or a tablet computer. We have analyzed and assessed the following mobile applications with the purpose of defining their potential in the process of foreign language learning (see Table 1): Fun Easy Learn, Babbel, Duolingo, LinguaLeo and Quizlet. The criteria for the choice of these products were the following:
• cross-platform availability: versions of the applications for Android and iOS;
• feedback: general analysis of positive reviews by users;
• number of installations: more than 1,000,000;
• praise and awards from mass media over the last 5 years in categories such as "Best application", "The application of the year", "The best developer", "The best startup", "The most downloaded application", etc.;
• ranking: 4.5-5 "stars" (on the basis of votes from not less than 20,000 users in Google Play and the App Store).
Thus, the potential educational characteristics of these mobile applications prove to be quite promising. They are a good choice for anyone who learns foreign languages independently.
We conducted an experiment to estimate the effectiveness of studying professional English with the help of mobile applications. Moreover, our goal was to define the degree of students' satisfaction with their level of professional English after the use of mobile applications. Forty-six students learning English as a foreign language participated in the experiment in 2016-2017. Their English levels ranged from beginner to intermediate. We consider tablet PCs and smartphones to be the most promising mobile devices for foreign language learning among the wide range of mobile devices available (iPhones, netbooks, laptops, iPads). Their advantages are: 1) multitasking, as they allow listening to music and podcasts, watching videos, reading books, writing, searching for information via the Internet, playing games, etc.; 2) small size, portability and a touch screen; 3) the ability to download mobile applications for learning foreign languages (dictionaries, word simulators, language courses, podcasts, etc.). To start with, we conducted a survey among the students to reveal whether they use tablet computers and smartphones for learning. The vast majority (98% of respondents) responded positively. First place by popularity belonged to language translation programs (87% of respondents). Watching videos from YouTube was in second place (35% of respondents). Only 9% of students used special mobile applications for learning foreign languages. Moreover, students were interviewed to identify their attitude to the possibility of using mobile technologies in teaching a foreign language in university classes. The results showed that almost all respondents (95%) welcomed this opportunity in the classroom. When arguing their answers, the respondents noted such positive aspects of mobile applications as increased interest in the subject and the possibility of permanent access to the subject "on the go". At the same time, some students did not know which mobile applications could help them learn a foreign language. Another significant problem was expensive or unavailable Internet access and the lack of free Wi-Fi. Considering that the students are to study English for professional purposes (namely, mining engineering), they should master a certain professional vocabulary. However, many students face the problem of memorizing words in a foreign language. A survey on the methods students use to study foreign language vocabulary showed that the majority of students (92%) used the drilling method. To solve this problem, we suggested that they use the mobile application Quizlet, which is a service for creating training cards with words. It should be noted that this application focuses on vocabulary and its enlargement, not on an in-depth study of the language. The application aims to help a learner remember as many words as possible. This service allows users to:
• create their own word sets consisting of word cards (flashcards), adding pictures and audio files to them, or search for existing word sets on a specific topic;
• do various online exercises with word cards and even play online;
• embed word cards on various websites and share them on social network sites;
• listen to foreign words;
• print out word cards;
• search for word sets created by other teachers;
• study word cards even without registration;
• work with word cards in an off-line mode.
In addition to the traditional two-way cards (flashcards) mode, the following modes are also available:
• LEARN (students have to type the answer);
• WRITE AND SPELL (spelling training);
• TEST (different types of tasks are allowed: write the answer, multiple choice answer, true/false answer);
• MATCH (a game where students need to match a word and its definition to make the word cards disappear);
• GRAVITY (a game in which students need to type a word whose definition is floating on the screen);
• QUIZLET LIVE (an online game in which students are divided into teams and compete with each other by choosing the desired translation of a word).
Students were encouraged to work with this application both in the classroom and at home in the course of out-of-class independent work. The most popular mode in the classroom proved to be the online game mode.

Results and discussion

To assess the effectiveness of learning professional English vocabulary with the mobile application Quizlet we used the following criteria: 1) the results of vocabulary tests; 2) the results of the students' survey and interviews; 3) the results of determining the students' degree of satisfaction. In the course of the experiment, we conducted one initial and three intermediate vocabulary tests, having created word sets on the covered topics for them. The evolution of students' test results is presented in Table 2. It should be noted that Quizlet provides an opportunity for teachers to track how many students use the application. The data in the table revealed an improvement in students' vocabulary test results of 67.4%. The results of the experiment showed that the use of the mobile application Quizlet in out-of-class independent work enhanced the efficiency of students' memorization of foreign words. However, according to the table, not all students used the app at home. Among the reasons, they named a lack of confidence in the effectiveness of the application and a preference for traditional ways of learning. The survey of students about their preferred Quizlet modes is shown in Table 3. According to the survey, the most popular mode (98%) among students is the Quizlet Live mode (team play online), due to its competitive and game-like character. The application allows students to connect to an online game using a special digital code demonstrated on the screen of the teacher's device. Second place is shared between the flashcards mode (78%), as the easiest to use, and the test mode (72%), which allows students to immediately assess the number of words learnt (in percent). The match mode is assessed by the students as nearly as significant as the two previous modes (63%). The least popular mode is the write and spell mode, due to the limited opportunity to use it when language classes are held only once a week. The degree of the students' satisfaction with the Quizlet application in foreign language lessons was measured in accordance with the approach defining the degree of satisfaction as the difference between the anticipated and acquired results [13,14]. The survey that followed the experiment showed that many students (94%) highly appreciate Quizlet and its capabilities for mastering foreign language competence and are going to use this mobile application in the future. Nevertheless, some students (6%) still prefer traditional methods of memorizing words.
Conclusion

The advantages of mobile applications for the process of mastering foreign language competence prove to be the following: the adaptability of mobile technologies to each student's personal style and tempo; feedback for the student through the interactive interface of mobile applications; game-like training; the portability of mobile technologies (optimal time and place for learning); the wide range of options in mobile applications (flashcards, games, including competitive ones, online testing, etc.); and the opportunity to self-assess one's progress on any topic an unlimited number of times. The above-mentioned advantages of mobile applications, in combination with traditional training techniques, increase students' motivation and satisfaction with foreign language learning and promote the efficiency of mastering the foreign language competence of mining engineers.

Here are some examples of journals, based on CiteScore metrics in Scopus and refined by the word "mining": Archives of Mining Sciences, Australian Mining, Canadian Mining Journal, Engineering and Mining Journal, Journal of Mining and Safety Engineering, International Journal of Rock Mechanics and Mining Sciences [URL: journalmetrics.scopus.com].

Table 1. The comparative table of features of mobile applications.
Table 2. The evolution of students' vocabulary test results.
Table 3. Preferable Quizlet modes used by students.
Geological Study of Unusual Tsunami Deposits in the Kuril Subduction Zone for Mitigation of Tsunami Disasters

Submarine earthquakes, submarine slides and impacts may set large water volumes in motion, characterized by very long wavelengths and a very high speed of lateral displacement; when reaching shallower water, the wave breaks in over land, often with disastrous effects. This natural phenomenon is known as a tsunami event. With the event of December 26, 2004 in the Indian Ocean, this word suddenly became known to the public. The effects were indeed disastrous, and 227,898 people were killed. Tsunami events are a natural part of the Earth's geophysical system. There have been numerous events in the past and they will continue to be a threat to humanity; even more so today, when the coastal zone is occupied by so much more human activity and many more people. Therefore, tsunamis pose a very serious threat to humanity. The only way for us to face this threat is through increased knowledge, so that we can meet future events with efficient warning systems and aid organizations. This book offers extensive and new information on tsunamis: their origin, history, effects, monitoring, hazard assessment and proposed handling with respect to precaution. Only through knowledge do we know how to behave in a wise manner. This book should be a well of tsunami knowledge for a long time, we hope.

Introduction

On 26 December 2004 a magnitude M 9.3 earthquake deformed the ocean floor 160 km off the coast of Sumatra, generating the Indian Ocean tsunami and thus causing large sediment transfers due to tsunami run-up in coastal lowlands around the Indian Ocean (e.g., Goff et al., 2006; Moore et al., 2006; Hori et al., 2007; Hawkes et al., 2007; Choowong et al., 2007, 2010). Sediment transfers of this scale are rare events historically. Only when an unusual tsunami strikes coastal lowlands does a large-scale sediment transfer occur, leaving a sedimentary record, that is, tsunami deposits, in the geological strata on shore (Dawson & Stewart, 2007). In this chapter, we seek to understand the run-up process of past unusual tsunamis by examining a series of tsunami deposits on the Pacific coast of eastern Hokkaido, northern Japan, and we estimate the average recurrence interval of such tsunamis from the geological record. Large earthquakes with M > ~8 in the Kuril subduction zone have historically generated tsunamis that caused damage in eastern Hokkaido between Nemuro and the Tokachi coast (Satake et al., 2005; Fig. 1). Most recently, the 1952 Tokachi-oki, the 1960 Chilean, the 1973 Nemuro-oki, and the 2003 Tokachi-oki tsunamis caused considerable damage and great loss of life in this district. Therefore, it is very important to estimate the likely timing and size of the next large, earthquake-generated tsunami. Information about historical earthquakes in the Kuril subduction zone is limited, however, and no documents from before the 19th century that might refer to tsunami events are available.
The earliest written records from eastern Hokkaido are the "Nikkanki" series of documents from Kokutai-ji Temple, which was built by the Edo government at Akkeshi in 1805 (Soeda et al., 2004; Fig. 1). In the hope of finding traces of past giant tsunamis to use to evaluate the frequency and extent of past tsunami inundation in eastern Hokkaido, late Holocene coastal sediments such as peat beds and lagoon sediments have been studied since 1998 by our research group and other researchers (e.g., Hirakawa et al., 2000; Nishimura et al., 2000; Sawai, 2002; Nanayama et al., 2003; Soeda et al., 2004). Nanayama et al. (2003, 2007) and Sawai et al. (2009) have reported the general stratigraphy of unusual tsunami deposits due to "500-year" earthquakes.

Tectonic setting of the Pacific coast of eastern Hokkaido

Eastern Hokkaido is situated on a continental plate, the Okhotsk plate, under which the Pacific plate is being subducted at a rate of 8 cm/year, and many earthquakes with M > ~8 have occurred in the Kuril subduction area (Satake et al., 2005, 2008; Fig. 1). The most recent, the 2003 Tokachi-oki earthquake (M 8.0), produced a tsunami with a height of less than 3-4 m (Tanioka et al., 2004). This region has been steadily subsiding at a rate of 1 cm/year since the 19th century, but previously it may have been uplifted, either by about 0.5-1 m or by 1-2 m, by repeated great earthquakes (probably M 8.6) resulting from multi-segment interplate ruptures linking the Tokachi-oki and Nemuro-oki segments (Fig. 1; Satake et al., 2005, 2008), with an average recurrence interval of 400-500 years (Nanayama et al., 2003, 2007). The last great earthquake tsunami occurred in this area in the 17th century and left widespread tsunami deposits (Nanayama et al., 2003). Because of the complex history of seismic uplift and interseismic subsidence, sea-level changes during the late Holocene are not well understood in this study area.

Geomorphic setting of Nanbuto marsh and Gakkara-hama beach

The Nemuro coastal lowland is on the Nemuro Peninsula, the easternmost part of eastern Hokkaido (Fig. 1). The southern Kuril area, including the Nemuro Peninsula, is an active seismic area. The population of Nemuro City, which is the second largest city along this coast, is about 30,000. In 1973, the Nemuro-oki earthquake tsunami (M 7.9) struck Hanasaki Port; its measured tsunami height was 2-3 m, and it caused heavy damage. Earlier earthquake tsunamis, the 1960 Chilean (M 9.5), the 1952 Tokachi-oki (M 8.2), and the 1894 Nemuro-oki (M 7.4) tsunamis, also struck Hanasaki Port (Satake et al., 2005). In addition, a Tokachi-oki earthquake tsunami (M 8.0) occurring in 1843 is described in the "Nikkanki", the earliest written records from this area (Soeda et al., 2004). We investigated tsunami deposits at two sites along this coast: Nanbuto marsh and Gakkara-hama beach. Nanbuto is a small marsh along the coast near the Nemuro urban district (Fig. 2). The marsh is on a flat coastal plain ranging from 1 to 4 m in elevation, with an area of about 8 km². Its maximum extent is about 2 km from north to south and about 4 km from east to west. The plain is surrounded on the north, west, and east by marine terraces, 60-80 m in elevation, of Pleistocene age, formed during marine oxygen isotope stage (MIS) 9 (ca. 300 ka; Okumura, 1996) (Fig. 2).
Geomorphic setting of Nanbuto marsh and Gakkara-hama beach
The Nemuro coastal lowland is on the Nemuro Peninsula, the easternmost part of eastern Hokkaido (Fig. 1). The southern Kuril area, including the Nemuro Peninsula, is an active seismic area. The population of Nemuro City, the second largest city along this coast, is about 30,000. In 1973, the Nemuro-oki earthquake tsunami (M 7.9) struck Hanasaki Port; its measured tsunami height was 2-3 m, and it caused heavy damage. Earlier earthquake tsunamis, the 1960 Chilean (M 9.5), the 1952 Tokachi-oki (M 8.2), and the 1894 Nemuro-oki (M 7.4) tsunamis, also struck Hanasaki Port (Satake et al., 2005). In addition, a Tokachi-oki earthquake tsunami (M 8.0) that occurred in 1843 is described in the "Nikkanki," the earliest written records from this area (Soeda et al., 2004). We investigated tsunami deposits at two sites along this coast: Nanbuto marsh and Gakkara-hama beach. Nanbuto is a small marsh along the coast near the Nemuro urban district (Fig. 2). The marsh is on a flat coastal plain, 1-4 m in elevation, with an area of about 8 km²; its maximum extent is about 2 km from north to south and about 4 km from east to west. The plain is surrounded on the north, west, and east by marine terraces, 60-80 m in elevation, of Pleistocene age, formed during marine oxygen isotope stage (MIS) 9 (ca. 300 ka; Okumura, 1996) (Fig. 2).
No large streams are present that might bring sandy sediments to the Nemuro lowland. Nanbuto marsh is one of a group of marshes developed on low-lying coastal plains and valley floors of eastern Hokkaido that were inundated by seawater during the Jomon transgression, which began about 10,000 years BP, as shown by the presence of an abandoned sea cliff on the north side of the marsh that is estimated to date to 6000-5500 years BP (Nanayama et al., 2003; Sawai et al., 2009) (Fig. 2). Aerial photographs show that Nanbuto marsh is a typical strand plain, which probably formed as the sea retreated gradually from the plain, leaving up to three beach ridges along its southern edge (Fig. 2). These ridges may have formed during forced regressions associated with seismic uplift or subsidence. Thus, these marshes developed, and peat deposition began, only after the sea retreated from the area, after 6000-5500 years BP. At Hanasaki Port (Fig. 1), the spring-tide range is 1.2 m and the neap-tide range is 0.9 m. The corresponding ranges at Kushiro (Fig. 1) are 1.3 and 1.0 m, and the extreme tidal range there, between the highest and lowest astronomical tides, is 1.7 m. We thus estimated the corresponding ranges at Nanbuto to be 1.2-1.3 m and 0.9-1.0 m (Fig. 3). During the last 200 years, typhoons and heavy storms have rarely struck this coast; thus, no large seawater flooding events or sand movements associated with huge storms or typhoons have occurred. We estimated the landward limit of the annual storm run-up from the distribution of flotsam washed up on the present beach (Fig. 2). Hanasaki Port suffered damage from the 1973 Nemuro-oki earthquake tsunami, which had a wave height of 2-3 m. Figures 2 and 3 show the area inundated by the 1973 tsunami and the flooding elevation. Our other study site, Gakkara-hama beach, is situated on the western margin of Nemuro City, 30 km west of Nanbuto marsh (Fig. 1), at the edge of a marine terrace formed during MIS 9 (Okumura, 1996). The land there is a private ranch, and there is no settlement. Behind the rocky Gakkara-hama beach is a small marsh, 2-6 m in elevation. The marsh deposits are exposed in the sea cliff that extends along the shoreline.

Field survey and sampling
Line NB is 1000 m long and traverses the western part of Nanbuto marsh along the eastern side of Lake Nanbuto (Fig. 2); it is approximately perpendicular to the present shoreline. We measured elevation and distance from the present shoreline with a tape measure, a leveling instrument, and a handheld GPS system (Leica System 1200 GNSS). Most of the marsh is part of a private ranch; it is therefore partially cultivated and also used by domestic animals. We looked for regional tephra layers and sand beds of possible tsunami origin within the peat beds and other marsh deposits and traced every sand bed that we found, taking samples with a scoop or a peat sampler at 10-m intervals along the survey line (Fig. 2). In addition, we used heavy equipment to dig seven trenches, up to 2.7 m wide, 14.4 m long, and 2.7 m deep, and described the trench walls. Finally, we obtained a wide, oriented sample, 300 cm long by 30 cm thick, with a Geo-slicer soil sampler (Atwater et al., 2001) at Nb-GS-1 (Fig. 3).

Sedimentary description
We made three-dimensional oriented peels of the trench walls and described major sedimentary structures in the field.
We also used a large plastic box (21 cm × 30 cm) to take oriented samples of sand beds for radiographic observation from each trench wall and from the Geo-slicer sample. In our laboratory at the Geological Survey of Japan, National Institute of Advanced Industrial Science and Technology (AIST), we took radiographs of typical sand beds in this area to infer their depositional processes. We also described the sedimentary structures of each sand bed, including bed form, grain size, and current directions, and we recorded the color of each bed with a digital soil color reader (Minolta SPAD-503). Finally, we examined each sand sample under a binocular microscope for marine components such as rounded beach sand grains and marine microfossils. We then integrated all of this information and used it to identify tsunami deposits in the Nemuro lowland.

Tephra study
We described the thickness, color, and grain size of tephra layers and collected samples in the field. In our laboratory, the tephra samples were mounted on slides with resin and double-polished, and their petrographic features, including glass shard morphology and phenocryst assemblage, were examined under a polarizing microscope. Chemical analyses were performed with a JEOL JXA-8900R electron probe microanalyzer at the Geological Survey of Japan. Nine major elements (SiO₂, TiO₂, Al₂O₃, FeO, MnO, MgO, CaO, Na₂O, and K₂O) were analyzed with an accelerating voltage of 15 kV and a beam current of 12 nA. The narrow beam scanned within a 10-µm grid, with counting times of 20 and 10 s for peak and background, respectively. At least 30 volcanic glass shards from each tephra sample were analyzed to identify the origin of the volcanic ash (Furuta et al., 1986). We compared our results with the known chemical compositions of regional tephra layers as reported by Furukawa et al. (1997), Furukawa & Nanayama (2006), and Yamamoto et al. (2010).
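The chapter does not state which matching statistic, if any, was used to compare shard analyses with the published reference compositions; a widely used option in tephrochronology is the similarity coefficient of Borchardt et al. (1972), sketched below. The oxide values are placeholders for illustration, not real analyses of Ta-a, Ko-c2, or any other tephra named here.

```python
# Illustrative sketch of tephra correlation by similarity coefficient
# (Borchardt et al., 1972): the mean of min/max ratios over shared oxides,
# where 1.0 means identical compositions. An assumed, generic approach;
# the numbers below are invented for the example.

OXIDES = ["SiO2", "TiO2", "Al2O3", "FeO", "MnO", "MgO", "CaO", "Na2O", "K2O"]

def similarity_coefficient(sample: dict, reference: dict) -> float:
    """Mean of min/max ratios over oxides present (and nonzero) in both analyses."""
    ratios = [min(sample[o], reference[o]) / max(sample[o], reference[o])
              for o in OXIDES if sample.get(o) and reference.get(o)]
    return sum(ratios) / len(ratios)

# Hypothetical averaged glass-shard analysis (wt%) and a placeholder reference:
unknown = {"SiO2": 77.1, "TiO2": 0.3, "Al2O3": 12.8, "FeO": 1.5, "MnO": 0.1,
           "MgO": 0.3, "CaO": 1.6, "Na2O": 3.9, "K2O": 2.4}
reference = {"SiO2": 77.5, "TiO2": 0.3, "Al2O3": 12.6, "FeO": 1.4, "MnO": 0.1,
             "MgO": 0.3, "CaO": 1.5, "Na2O": 4.0, "K2O": 2.3}
print(f"SC = {similarity_coefficient(unknown, reference):.3f}")  # ~0.98; values near 1 suggest a match
```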
Radiocarbon dating
Plant materials and one bone sample were selected for 14C dating from each peat bed and from the basal beach sediment. In all, 20 samples were dated at the AMS facilities of Geo-Science Laboratory Co. in Nagoya, Japan, and the Institute of Accelerator Analysis Ltd. in Kawasaki, Japan. The dates were converted to calendar years before present (cal yBP) and calendar thousands of years before present (cal ka BP) with the IntCal04 calibration data set (Reimer et al., 2004), where BP is relative to the year AD 1950. We refer only to the 2-sigma range of calendar years in this chapter.

OSL dating
Dose estimation in optically stimulated luminescence (OSL) dating has improved greatly over the last 10 years, with the result that OSL has been used increasingly to date late Quaternary sediments. However, the age range over which OSL dating of quartz, feldspar, and other minerals can be applied, which depends on both the saturation dose and the dose rate, is limited (Tsukamoto & Iwata, 2005). Nevertheless, it is a very useful method because it can be used to obtain age data directly from tsunami deposits. We collected 14 samples from 12 sand beds interpreted as tsunami deposits (NS1-NS12) for OSL dating by the radiation laboratory of Nara University of Education. These samples, which contained quartz and feldspar grains together with other mineral grains, were dated by the infrared stimulated luminescence method (Nanayama et al., 2009).
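For readers unfamiliar with the method, luminescence ages follow the standard age equation: burial age equals the equivalent dose accumulated since the grains were last exposed to light, divided by the environmental dose rate. The relation itself is standard; the numbers in the sketch below are invented for illustration and are not the chapter's measured values.

```python
# Minimal sketch of the luminescence age equation underlying OSL/IRSL dating:
# age (ka) = equivalent dose De (Gy) / environmental dose rate (Gy/ka).
# Illustrative values only, not results from this study.

def luminescence_age_ka(equivalent_dose_gy: float, dose_rate_gy_per_ka: float) -> float:
    """Burial age in thousands of years (ka)."""
    return equivalent_dose_gy / dose_rate_gy_per_ka

# Hypothetical example: De = 6.0 Gy at a dose rate of 3.0 Gy/ka -> 2.0 ka.
print(f"{luminescence_age_ka(6.0, 3.0):.1f} ka")
```

Note that incomplete zeroing (bleaching) of the signal before burial inflates the equivalent dose and therefore the apparent age, which is exactly the direction of the OSL errors discussed later in this chapter.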
General stratigraphy
In Nanbuto marsh, the most seaward trench (Na) was dug about 490 m from the shoreline, and the most inland trench (Ishi) approximately 970 m from the shoreline. The survey found a peat layer about 2.2 m thick in the deepest part of the study area. We identified 7 regional tephra layers and 16 tsunami deposits (NS1-NS16) (Figs. 4 and 5). On the basis of macroscopic examination and the known stratigraphy of volcanic ashes in the region (Furukawa & Nanayama, 2006; Yamamoto et al., 2010; Fig. 1), we identified, in descending order, the ashes Ta-a (erupted from Mt. Tarumai in 1739), Ko-c2 (Mt. Komagatake, 1694), Ma-b (Mt. Mashu, 10th century), B-Tm (Mt. Baitousan, about AD 937-938), Ta-c (Mt. Tarumai, 2.5-2.7 ka), Ma-d (Mt. Mashu, 3.6-3.9 ka), and Ma-f (Mt. Mashu, ca. 7.5 ka). In addition, we obtained AMS 14C dates on plant materials from peat beds (Table 1). Of 18 samples dated by OSL from NS1 to NS12 (Fig. 5), we considered six to yield reasonable dates for the tsunami deposits. The other luminescence dates were apparently too old, by comparison with the tephrochronology and AMS 14C data, as discussed below (Table 1). On the basis of the tephra age of the deepest peat horizon immediately overlying beach sediment at each trench site, we estimated past shoreline positions at the 17th century, 1.0 ka(?), 2.6 ka, 3.7 ka, and 5.5 ka (Fig. 3), which we used to estimate the run-up distance between the observation site of each tsunami deposit and the corresponding shoreline. The estimated run-up distances exceeded several hundred meters, an important consideration in our attribution of these deposits to unusual tsunamis. We also described 12 tsunami deposits, NS1-NS12 (Nanayama et al., 2009; Fig. 7), in the sea cliff at Gakkara-hama beach. The stratigraphic sequence of tephra layers and tsunami deposits there is the same as at Nanbuto, indicating that these unusual tsunami deposits are distributed regionally in the Nemuro lowland.

Sedimentary structures in unusual tsunami sand beds
When the 1973 Nemuro-oki and 2003 Tokachi-oki tsunamis struck the Nemuro lowland, no large sediment transfer occurred, nor was there much coastal erosion. These tsunamis were too small to leave tsunami deposits in this environment. Giant tsunamis on the scale of the 2004 Indian Ocean tsunami very likely struck this coast in the past, generating the large-scale sediment transfers that formed the tsunami deposits we observed in the marsh environment of the Nemuro lowland. The major component of each of the 16 tsunami deposits (NS1 to NS16) is very well sorted fine sand. These sands were mainly scoured from beach and dune sand after each tsunami hit the coast. In marsh environments, tsunami deposits are usually interbedded with peat (Dawson & Stewart, 2007). No shell fragments or carbonate microfossils were observed, presumably because they were dissolved after burial. At the Shig trench, some sand beds were covered by mud layers (Fig. 3). We interpreted this mud as having been deposited in a trough between two sand ridges, possibly during seawater flooding. The thickness of the tsunami sand layers is important information because it indicates the magnitude of local topographic depressions in the marsh environment. For example, NS2 ranges in thickness from a few centimeters to tens of centimeters; where it reaches its maximum thickness of 95 cm, it clearly displays parallel lamination and resembles beach sediment (Fig. 5). NS2 also shows clear landward thinning. Although we did not observe any clear graded bedding, the sand beds included internal sedimentary structures such as plane beds, dunes, and current ripples, suggesting bedload transport (Figs. 5 and 6). Moreover, within each bed, dune forms and current ripples indicate two flow directions, thus recording both the tsunami inflow and its outflow (Nanayama & Shigeno, 2006). The gradational upper boundary and the erosional base of each sand bed are characteristic features of tsunami sedimentation. We inferred that the tsunami run-up eroded the underlying stratum, and that the gradational upper boundary reflects the regrowth of marsh vegetation in the years following deposition of the tsunami sand. The erosional lower boundaries, and the associated peat blocks or clasts, that characterize the tsunami sand layers in this area constitute important evidence of past tsunami deposition in a marsh environment (Bondevik et al., 2003; Gelfenbaum & Jaffe, 2003). Without the application of large stress, it is difficult to detach peat clasts from a peat bed, because of the fibrous nature of peat. According to our radiographic observations, the deposits contain accretionary structures generated by flowing water, such as plane beds and current ripples. We also observed convolute lamination, reflecting rapid sedimentation and water drainage (Fig. 5), both of which occur during tsunami run-up.

Fig. 6. Collection of samples for OSL dating at the Shig trench (left). Photograph of the trench wall showing the OSL sampling horizons and dating results (right). AMS 14C dating results, tephra layers and dates, and sand beds NS1 to NS9 are indicated to the right of the photograph.

Fig. 7. View of Gakkara-hama beach looking southwest (a), and photograph of the sampled outcrop (b). Photograph of a large peel sample (c) showing the locations of tsunami sand beds NS1-NS12, tephra layers and dates, OSL sampling horizons and dating results, and AMS 14C dating results.

Estimation of the recurrence interval of unusual tsunamis
We identified 16 tsunami sands (NS1 to NS16) within peat beds in the Nemuro lowland and ascertained the chronology of their deposition using tephrochronology, AMS 14C dating, and OSL dating. The date of each peat bed can be measured by AMS 14C dating to within a 2-sigma range of several hundred years. Because the base of each sand layer is usually erosional, it is not possible to estimate the exact age of each tsunami deposit from the AMS 14C ages of the peat horizons. We thus inferred the average recurrence interval of unusual tsunamis, to within about 100 years, by using the regional tephra ages together with the AMS 14C ages from certain important peat horizons, as follows (Table 1); a worked numerical sketch of the same arithmetic follows the list.
1. Ko-c2 (AD 1694) overlies tsunami deposits NS1 and NS2, which overlie both B-Tm (AD 937-938; Fukusawa et al., 1998) and Ma-b (10th century). We correlated NS1 with the 17th-century tsunami and NS2 with a 13th-century tsunami. They are estimated to be separated by 379 years, obtained by dividing the interval between the two time markers by two.
2. Underlying B-Tm and Ma-b, and overlying Ta-c (2.5-2.7 ka), are six tsunami deposits (NS3 to NS8). Their estimated recurrence interval is thus 250-283 years.
3. Underlying Ta-c and overlying Ma-d (3.6-3.9 ka) are three tsunami deposits (NS9 to NS11), and the estimated recurrence interval is 200-367 years.
4. Underlying Ma-d and overlying the lowest peat horizon (4.8-5.0 ka) are five tsunami deposits (NS12 to NS16), for an estimated recurrence interval of 220-320 years.
Therefore, the estimated average recurrence interval of unusual tsunami events in the Nemuro coastal area is 200-379 years (Table 1).
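The sketch below makes the arithmetic of the list explicit: each tephra-bracketed time span is divided by the number of intervening tsunami deposits. The bracket ages are those quoted in the text (with B-Tm/Ma-b taken as roughly 1.0 ka BP); the exact rounding and interval-counting conventions of Table 1 may differ slightly, so this is illustrative rather than a reproduction of the table.

```python
# Worked sketch of the recurrence-interval arithmetic used in the list above:
# (time spanned by two bracketing markers) / (number of deposits between them).
# Bracket ages as quoted in the text; illustrative, not a copy of Table 1.

def interval_per_event(span_min_yr: float, span_max_yr: float, n_deposits: int):
    """Average years between events for a tephra-bracketed interval."""
    return span_min_yr / n_deposits, span_max_yr / n_deposits

# 1. Ko-c2 (AD 1694) to B-Tm (AD 937): 757 years bracketing NS1 and NS2.
print(interval_per_event(1694 - 937, 1694 - 937, 2))    # (378.5, 378.5) ~ 379 years
# 2. B-Tm/Ma-b (~1.0 ka BP) to Ta-c (2.5-2.7 ka BP): six deposits, NS3-NS8.
print(interval_per_event(2500 - 1000, 2700 - 1000, 6))  # (250.0, 283.3) ~ 250-283 years
```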
However, the recurrence interval was not estimated from dates obtained directly from the tsunami sands, so this value should be understood as a maximum. The number of tsunami deposits in the Nemuro area between the regional tephra layers Ko-c2, B-Tm, and Ta-c is greater than the number in the Tokachi-Kiritappu area, suggesting a possible additional tsunami source off Habomai, Shikotan, Kunashiri, and Etorofu Islands (southern Kuril Islands), in addition to tsunami-generating multi-segment interplate ruptures of the Tokachi-oki and Nemuro-oki segments (Fig. 1). We also attempted to date the tsunami deposits directly by using the OSL dating technique (Fig. 6; Table 1). However, our luminescence results yielded numerous ages that were erroneously old compared with the ages ascertained by tephrochronology and AMS 14C dating. These erroneous ages may be attributable mainly to (1) insufficient zeroing (incomplete bleaching of the luminescence signal before burial), (2) the mixing of deposits of different ages by scouring during tsunami run-up, or both. We hope that future advances in OSL dating technology will make it possible to determine the formation ages of individual tsunami deposits more exactly. We plan to investigate the sources of the tsunami sand deposits by using the OSL technique and to conduct detailed basic research on erosional and depositional processes during tsunami run-up in a new study area.

Open trench demonstration and donation of large peel samples
Unusual tsunami deposits can be traced as high as 18 m above current sea level and as far as 1-4 km inland from the Pacific shoreline of eastern Hokkaido, and such unusual tsunamis have recurred at intervals of several hundred years, with the most recent event in the 17th century (Satake et al., 2008). The results of this study have thus improved the unusual-tsunami hazard maps produced, in accordance with government guidelines, by the municipalities of eastern Hokkaido, including Nemuro City. Because Nemuro is in an active seismic area, it is important for citizens to be informed about tsunami hazards. Therefore, on 15 October 2005, we conducted an open trench demonstration for the people of Nemuro at the Ishi trench. The Geological Survey of Japan, Hokkaido University, the Nemuro City Museum of History and Nature, and the Historical Museum of Hokkaido cosponsored this outreach event. About 200 residents of Nemuro City and eastern Hokkaido participated, giving us a good opportunity to explain the importance of our research results directly to local citizens. After our investigation, we donated some of the large peel samples to the Nemuro City Museum of History and Nature, the Historical Museum of Hokkaido, and Hokkaido University, for use as educational materials for tsunami disaster mitigation (Fig. 8).