Pecuniary Value of Disability-Adjusted Life-Years in the Arab Maghreb Union in 2015

This study bridges the extant information gap on the pecuniary value of disability-adjusted life-years (DALYs) lost in the Arab Maghreb Union (AMU). The DALYs lost in 2015 are converted into money using the human capital (lost output) approach. The AMU total value of DALYs lost from all causes is the sum of each of the five countries' pecuniary value of DALYs (PVD) lost from all causes. The PVD associated with DALYs lost due to the j-th disease among persons of a specific age group is the product of the per capita non-health GDP in international dollars (Int$) and the total DALYs lost. The 27,175,610 DALYs lost in the AMU in 2015 had a pecuniary value of Int$ 289,033,271,814, which is equivalent to 25.6% of the sub-region's 2015 GDP. The average pecuniary value per DALY lost was Int$ 10,636, which ranged from a minimum of Int$ 4,226 in Mauritania to a maximum of Int$ 13,852 in Algeria. The pecuniary value of DALYs lost from all causes in the AMU sub-region annually is substantial.

Introduction

The Arab Maghreb Union (AMU) consists of five member countries: Algeria, Libya, Mauritania, Morocco and Tunisia. The five countries had a total population of 95.423 million in 2015 [1], distributed as follows: 42% in Algeria, 7% in Libya, 4% in Mauritania, 36% in Morocco, and 12% in Tunisia. Algeria and Libya are upper-middle-income countries; the remaining three are lower-middle-income countries. Life expectancy at birth was 75.6 years in Algeria, 72.7 years in Libya, 63.1 years in Mauritania, 74.3 years in Morocco and 75.3 years in Tunisia [1]. The life expectancies, except for Mauritania's, were higher than the global average of 71.4 years.

The physician and pharmaceutical personnel densities per 10,000 population of AMU countries are lower than the global averages (see Table 1) [3] [4]. Likewise, the densities of health infrastructure and technologies (e.g. psychiatric beds, radiotherapy units) in the AMU are lower than the global averages [3]. The global per capita total expenditure on health is about four-fold that of AMU countries. The per capita total expenditure on health for AMU countries is between US$ 49 and US$ 372 [4], which falls short of the US$ 146 (lower-middle income) to US$ 536 (upper-middle income) per person per year health systems investment recommended for achieving the health sustainable development goal (SDG) 3 [5]. Consequently, there is a need for AMU sub-region economic burden of disease estimates for use in sensitizing Ministries of Finance, the private sector and development partners to increase health development investments to the levels recommended for achievement of SDG 3. Such studies are routinely conducted in economically developed countries to raise public and whole-of-government awareness of the potential economic returns from health development investments [6]-[17]. Economic burden of disease studies have been conducted in Southeast Asia [18]-[23] and the Western Pacific [24]-[28], and some have also been conducted in Latin America [29]-[36].
A number of studies in Africa have attempted to estimate the economic burden of premature mortality from neglected tropical diseases [37], childhood diseases [38], cholera [39], diabetes mellitus [40] and disasters [41], among others. In such studies, the burden is valued as "the sum of the present value of future years of life time lost through premature mortality, and the present value of years of future life time adjusted for the average severity (frequency and intensity) of any mental or physical disability caused by a disease or injury (p. 326)." This study contributes to bridging the existing knowledge gap on the pecuniary value of DALYs lost in the AMU in 2015. This paper answers the question: what is the total pecuniary value of DALYs lost from all causes in the AMU? The specific objective was to estimate the total pecuniary value of DALYs lost from all causes in the AMU in 2015.

Study Area and Population

The study focuses on DALYs lost from all causes amongst seven age groups in the AMU in 2015. The causes comprise all communicable diseases, maternal conditions, neonatal conditions and nutritional deficiencies; all non-communicable diseases (NCDs), covering malignant neoplasms, mental and substance-use disorders, neurological conditions, sense-organ diseases, cardiovascular diseases, respiratory diseases, digestive diseases, genitourinary diseases, musculoskeletal diseases, congenital anomalies, oral conditions and sudden infant death; and intentional and unintentional injuries [60]. The choice of study area was guided by [49] and by the availability of data on GDP per capita, total health expenditure per capita and DALYs for the AMU.

The GDP of any country consists of four components: personal consumption expenditures, investment, government expenditure and net exports. The AMU GDP per capita equals total expenditure divided by total population. The methods of calculating DALYs are contained in Murray [63] and WHO [60]. We hypothesize that DALY losses erode the incomes and consumption of households and firms, savings and investment, taxes and service fees, and net exports.

The AMU total pecuniary value of total DALYs (TPVD) lost from all causes is the sum of each of the five countries' pecuniary value of DALYs (CPVD) lost from all causes. Each country's CPVD is, in turn, the sum of the values across the seven age groups, from the youngest up to 60-69 years (CPVD_60-69) and 70 years and above (CPVD_>=70). The CPVD associated with the j-th disease DALYs lost among people of a specific age group is the product of the per capita non-health GDP in purchasing power parity (PPP) terms and the total j-th disease DALYs lost within that age group [64]. Each i-th country's discounted total CPVD attributable to the j-th disease DALYs was estimated using equations (2) through (9) of Kirigia and Mwabu [64]. The DALY estimates published by the WHO in the Global Health Observatory are discounted at a 3% rate [64]; therefore, we did not introduce a discount factor into equations (3) to (9), to avoid double discounting.
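Concretely, the valuation reduces to a sum of products: per capita non-health GDP times DALYs lost, summed over age groups, causes and countries. The sketch below illustrates this in Python; the two country records and all figures in them are placeholders, not the study's data.

```python
# A minimal sketch of the TPVD/CPVD computation described above.
def non_health_gdp_per_capita(gdp_pc: float, the_pc: float) -> float:
    """Per capita non-health GDP (Int$): GDP per capita minus total
    health expenditure per capita."""
    return gdp_pc - the_pc

def country_pvd(gdp_pc: float, the_pc: float, dalys_by_age: dict) -> float:
    """CPVD: non-health GDP per capita times DALYs lost, summed over age
    groups; WHO DALY estimates are already discounted at 3%, so no extra
    discount factor is applied here."""
    nh_gdp = non_health_gdp_per_capita(gdp_pc, the_pc)
    return sum(nh_gdp * dalys for dalys in dalys_by_age.values())

# Hypothetical inputs for two of the five AMU countries:
countries = {
    "country_A": dict(gdp_pc=14000.0, the_pc=400.0,
                      dalys_by_age={"0-4": 1.2e6, "70+": 0.9e6}),
    "country_B": dict(gdp_pc=8000.0, the_pc=150.0,
                      dalys_by_age={"0-4": 0.8e6, "70+": 0.4e6}),
}
tpvd = sum(country_pvd(**c) for c in countries.values())
print(f"TPVD = Int$ {tpvd:,.0f}")
```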
Estimation of the Reductions in Pecuniary Value of DALY Losses in AMU Assuming SDG 3 Related Targets Are Achieved

Table 2 reproduces the United Nations Sustainable Development Goal 3 targets considered: by 2030, end the preventable deaths of newborns and children under 5 years of age and reduce neonatal mortality to 12 per 1000 live births or lower and under-5 mortality to 25 per 1000 live births or lower in all countries [65]; and by 2030, end the epidemics of AIDS, tuberculosis, malaria and neglected tropical diseases and reduce hepatitis, water-borne diseases and other communicable diseases [65]. The associated global milestones are: (a) global HIV-related deaths will be reduced to below 500,000 by 2020 [66], from a 2015 baseline of 1,062,352 [67], i.e. a target reduction of 52.93%; (b) malaria mortality rates will be reduced globally by at least 90% from 2015 to 2030 [68]; (c) the number of TB deaths will be reduced by 90% from 2015 to 2030 [69]; and (d) mortality due to vector-borne diseases will be reduced globally by at least 75% from 2016 to 2030 [70]. In addition, SDG Target 3.4 is: by 2030, reduce premature mortality due to NCDs by one third through prevention and treatment and promote mental health and well-being [65]; and SDG Target 3.6 is: by 2020, halve the number of global deaths and injuries due to road traffic accidents [65]. Sources: the targets in Table 2 were obtained from UN [65] and WHO [66]-[70].

The reductions in AMU pecuniary values of DALYs lost, assuming the SDG3 targets for the maternal mortality ratio (Target 3.1), neonatal mortality (Target 3.2), under-5 mortality (Target 3.2) and HIV/AIDS deaths (Target 3.3) are achieved, were estimated using the following formula:

PVD_HCj2030 = PVD_HCj2015 × SDG_jHCT

where PVD_HCj2030 is the total pecuniary value of DALYs expected to be lost in the AMU from the j-th health condition in 2030, assuming the related target is fully achieved; PVD_HCj2015 is the total pecuniary value of DALYs actually lost in the AMU from the j-th health condition in the baseline year 2015; and SDG_jHCT is the SDG target for the j-th health condition, expressed as the proportion of baseline mortality that would remain once the target is met. The annual saving is the difference PVD_HCj2015 − PVD_HCj2030. The reductions assuming the SDG3 targets for deaths associated with tuberculosis and malaria (Target 3.3), NCDs (Target 3.4) and road traffic injuries (Target 3.6) are achieved were estimated with analogous algorithms. The detailed elucidation of those algorithms can be found in the Kirigia and Mwabu [64] study on the monetary value of DALYs lost in the East African Community.

Data Source and Software

The nine equations in subsection 2.2 were estimated using per capita total health expenditure data from the WHO Global Health Expenditure Database [4], per capita GDP data from the International Monetary Fund World Economic Outlook database [2], and DALYs data from the WHO Global Health Observatory [67]. The nine equations were estimated in Microsoft Excel (Microsoft Corporation, Redmond, WA).
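As a consistency check on the formula above, the sketch below applies it to the maternal-conditions figures reported in the results; the remaining-mortality share is inferred from the paper's own numbers, and the function name is ours.

```python
# Sketch of the SDG3 savings calculation; the maternal figures are the
# study's reported values, reused here only as a consistency check.
def pvd_2030(pvd_2015: float, remaining_share: float) -> float:
    """Expected PVD in 2030 if the target is met: baseline PVD scaled
    by the share of baseline mortality remaining under the target."""
    return pvd_2015 * remaining_share

pvd_maternal_2015 = 1_472_716_293            # Int$, reported 2015 baseline
share = 551_874_414 / pvd_maternal_2015      # implied remaining share (~0.375)
saving = pvd_maternal_2015 - pvd_2030(pvd_maternal_2015, share)
print(f"Annual saving: Int$ {saving:,.0f}")  # Int$ 920,841,879, as reported
```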
Estimates of Pecuniary Value of DALYs Lost in the AMU in 2015 without SDGs

In 2015, the AMU lost a total of 27,175,610 disability-adjusted life-years (DALYs) from all causes. Of the total DALY loss, 40% was borne by Algeria, 6% by Libya, 8% by Mauritania, 35% by Morocco, and 11% by Tunisia (Table 3). The DALY losses in the AMU translated into a total pecuniary value loss of Int$ 289,033,271,814, which is equivalent to 25.6% of the region's 2015 GDP; 52% of this was borne by Algeria, 8% by Libya, 3% by Mauritania, 26% by Morocco, and 11% by Tunisia. The average pecuniary value per DALY lost was Int$ 10,636, ranging from a minimum of Int$ 4,226 in Mauritania to a maximum of Int$ 13,852 in Algeria. The average pecuniary value per person in the population was Int$ 3,022, varying between Int$ 2,199 in Morocco and Int$ 3,755 in Algeria.

Nearly 24.3% of the pecuniary value of the NCD DALY loss resulted from cardiovascular diseases; 12.3% from mental and substance use disorders; 11.8% from malignant neoplasms; 9.0% from diabetes mellitus; 8.0% from musculoskeletal diseases; 6.9% from neurological conditions; 5.6% from congenital anomalies; 4.8% from genitourinary diseases; 4.7% from sense organ diseases; 3.9% from digestive diseases; 3.8% from respiratory diseases; 1.6% from skin diseases; 1.6% from endocrine, blood and immune disorders; 1.3% from oral conditions; 0.5% from other neoplasms; and 0.1% from sudden infant death syndrome (Figure 1). Cardiovascular diseases, mental and substance use disorders, malignant neoplasms, diabetes mellitus, and musculoskeletal diseases alone accounted for 65.3% of the total pecuniary value of DALYs lost in the AMU.

Approximately 43.8% of the pecuniary value of the CMN (communicable, maternal, neonatal and nutritional) DALY loss was from neonatal conditions (preterm birth complications, birth asphyxia and birth trauma, neonatal sepsis and infections, and other neonatal conditions); 23.4% from infectious and parasitic diseases; 20.1% from respiratory infectious diseases (lower respiratory infections, upper respiratory infections, and otitis media); 10.3% from nutritional deficiencies (e.g. protein-energy malnutrition, iodine deficiency, vitamin A deficiency, iron-deficiency anaemia, and other nutritional deficiencies); and 2.3% from maternal conditions (Figure 2). Neonatal conditions, respiratory infectious diseases and nutritional deficiencies were responsible for 74.3% of CMN pecuniary losses.

Almost 82% of the pecuniary value of injury-related DALY loss stemmed from unintentional injuries and 18% from intentional injuries. The three leading causes of the pecuniary value of DALYs lost from unintentional injuries were road injuries (46.8%), falls (9.7%) and exposure to mechanical forces (6.3%) (Figure 3). The majority of the intentional injuries' pecuniary value of Int$ 5,621,933,544 was from self-harm (35.7%) and interpersonal violence (35.1%).

Pecuniary Value of DALY Losses from Five SDG 3 Related Targets

Approximately Int$ 240,663,156,194 (83.3%) of the total pecuniary value of DALYs lost in the AMU in 2015 resulted from the SDG3 health conditions listed in Table 2 (see Table 4).

Estimates of Reductions in Pecuniary Value of DALY Losses in AMU if the Five SDG 3 Related Targets Are Achieved

As shown in Table 5, if all five SDG3 targets in Table 2 were fully achieved, the total pecuniary value of DALYs lost in the AMU would fall by about 35%.

SDG Target 3.1: Maternal Health Conditions

The AMU lost DALYs worth Int$ 1,472,716,293 in 2015 from maternal conditions. If SDG target 3.1 is fully achieved, the pecuniary value of DALY losses in 2030 would be Int$ 551,874,414, implying a saving of Int$ 920,841,879 per year. The reduction in maternal-condition-related pecuniary losses may be realized if AMU states implement the UN Commission on the Status of Women resolution, which calls upon government authorities and international leaders at all levels to generate the requisite political will, increased resources, commitment, international cooperation and technical assistance to strengthen health systems with a view to guaranteeing all women and girls universal access to comprehensive health services, so as to decrease maternal mortality and morbidity and improve maternal and newborn health [71].
All such efforts should be guided by UN Human Rights Council resolution A/HRC/RES/33/18, which urges states, and encourages other relevant stakeholders, to take action at all levels, using a human-rights-based approach, to address the interlinked causes of maternal mortality and morbidity, such as inaccessibility of affordable and appropriate health-care services; lack of information and education; poverty; food insecurity; harmful cultural practices (including child marriage, wife inheritance and female genital mutilation); early childbearing; gender inequalities; and discrimination and domestic violence against women [72].

SDG Target 3.2: Neonatal Health Conditions

Preterm birth complications, birth asphyxia and birth trauma, neonatal sepsis and infections, and other neonatal conditions led to a loss of DALYs valued at Int$ 27,787,753,001 in 2015. If SDG target 3.2 is fully attained, the pecuniary value of DALYs lost in 2030 would be Int$ 20,457,241,473, denoting a saving of Int$ 7,330,511,528 per year. The saving can be made by adapting and implementing the African Union Maputo plan of action 2016-2030, which contains nine strategic areas of focus and priority interventions (plus indicators for monitoring progress) for assuring the realization of sexual and reproductive health and rights and, ultimately, improving maternal, newborn, child and adolescent health [73].

SDG Target 3.3: HIV/AIDS, Tuberculosis, Malaria and Neglected Tropical Diseases

The savings under this target can be realized by implementing several global commitments. First are the commitments agreed in the political declaration on HIV and AIDS, which calls for increasing and front-loading investments from domestic and external sources and for promoting laws, policies and practices that ensure universal access to high-quality, affordable and comprehensive sexual and reproductive health-care and HIV services, information and commodities, with a view to ending the AIDS epidemic by 2030 [74] [75]. Second are the commitments encapsulated in the political declaration on antimicrobial resistance, which urges member states to develop and adequately fund multi-sectoral One Health national policies, programmes and action plans to combat resistance of bacterial, viral, parasitic and fungal microorganisms to antimicrobial medicines [76]. Third, on 26 September 2018 the UNGA adopted a political declaration on the fight against tuberculosis entitled "United to End Tuberculosis: An Urgent Global Response to a Global Epidemic". In that declaration, member states committed to: provide diagnosis and treatment; address tuberculosis prevention, diagnosis, treatment and care in the context of child health and survival; prevent tuberculosis among those most at risk of falling ill through the rapid scale-up of access to testing for tuberculosis infection; develop national antimicrobial resistance strategies, capacities and plans; find the missing people with tuberculosis; systematically screen relevant risk groups; rapidly adapt and implement the global End TB Strategy; develop community-based health services; explore how digital technologies could be optimally used for effective tuberculosis prevention, treatment and care; pursue multi-sectoral collaboration at all levels; foster cooperation between public and private sector entities; create an environment conducive to research and development of new tools for tuberculosis; and mobilize sufficient and sustainable financing, from all sources, for universal access to quality prevention, diagnosis, treatment and care of tuberculosis [77].
Fourth, UNGA resolution A/RES/72/309 calls upon countries and multilateral and bilateral development partners to substantially increase funding to countries to provide universal access to existing life-saving tools for the prevention, diagnosis and treatment of malaria [78]. Lastly, UN Commission on Population and Development resolution 2010/1 encourages member states and international organizations to scale up actions aimed at ensuring universal access to the prevention and treatment of neglected tropical diseases, and access to affordable safe water and sanitation [79]. In the London declaration on NTDs, pharmaceutical companies and international development partners committed to sustain, expand and extend drug access programmes to ensure the necessary supply of drugs and other interventions to help end the NTD epidemic [80].

Conclusions

The study has estimated the pecuniary value of DALYs lost in the AMU in 2015, and the reductions in the pecuniary value of DALY losses if five SDG3 targets are achieved. The findings could be used by health development stakeholders to advocate for increased domestic and external investments towards the achievement of SDG 3. Non-communicable diseases, communicable diseases and injuries resulted in DALY losses valued at Int$ 289 billion in the AMU. Approximately 83% of the total pecuniary value of DALYs lost in the AMU is from SDG-related health conditions. Full attainment of the five communicable disease, NCD and injury-related SDG3 targets would reduce the total pecuniary value of DALYs lost in the AMU by 35%. To significantly reduce the SDG3-related DALY losses, the AMU countries should intensify their whole-of-government and whole-of-society efforts to fully implement their past health-related commitments, such as the declarations and resolutions discussed above.

Universal access to health services will not be sufficient for AMU states to attain SDG 3 of ensuring healthy lives and promoting well-being for people at all ages. Simultaneous policy actions are needed to revamp systems that address related SDGs, such as SDG 1 on ending poverty in all its forms, SDG 2 on ending hunger through food security, SDG 4 on equitable education and lifelong learning, SDG 5 on gender equality, SDG 6 on availability and sustainable management of water and sanitation, SDG 11 on inclusive, safe, resilient and sustainable human shelter (housing), SDG 13 on combating the negative health impacts of climate change, and SDG 16 on promoting peaceful and inclusive societies [65]. This will require strong and efficiently coordinated collaboration across multiple sectors in individual member states. Cultivating and nurturing solidarity and closer cooperation and partnership between the AMU states is bound to accelerate progress towards the attainment of SDG 3 and other related SDGs. Significant reductions in the burden of disease will have a substantive social and economic impact on the AMU. All along, the AMU states (public and private sector leaders) and development partners should remember that health is the wealth of the AMU.
Machine Learning (ML)-Based Model to Characterize the Line Edge Roughness (LER)-Induced Random Variation in FinFET

A machine learning (ML)-based artificial neural network (ANN) model is proposed to estimate the LER (line edge roughness)-induced performance variation in the Fin-shaped Field Effect Transistor (FinFET). For given LER features, such as the rms amplitude (Δ), the correlation length along the x-direction (Λx), and the correlation length along the y-direction (Λy), metrics of device performance such as the on-state drive current, off-state leakage current, threshold voltage, and subthreshold swing can be estimated computation-efficiently with the ANN model.

I. INTRODUCTION

For the last few decades, complementary metal oxide semiconductor (CMOS) technology has evolved successfully through the adoption of new techniques such as stress engineering at the 90 nm technology node and beyond [1], high-k/metal-gate at the 45 nm node and beyond [2], and 3-D advanced device structures at the 22 nm node and beyond [3]. In every new CMOS technology platform, the physical dimensions of the metal oxide semiconductor field effect transistor (MOSFET) have been scaled down not only to increase the density of devices in an integrated circuit (IC) but also to improve the functionality of ICs per cost. However, process-induced random variations (i.e., random fluctuations of transistors' electrical characteristics, such as threshold voltage, on-state drive current, and off-state leakage current, introduced while fabricating transistors in the fab) have negatively affected the manufacturability of CMOS devices and thereby significantly hinder the evolution of CMOS technology [4]. The root causes of process-induced random variation are classified as (i) line edge roughness (LER), (ii) random dopant fluctuation (RDF), and (iii) work function variation (WFV) [5]. LER in particular not only degrades device performance but also indirectly affects the other random variation sources (i.e., RDF and WFV), because it induces structural variations in the device [6]. With the most radical shift in device structure in 2011, i.e., from the planar bulk MOSFET to the 3-D MOSFET (i.e., FinFET), these process-induced technical issues became much more severe [7]. Therefore, as device architectures become more complicated (in reality, multiple bridge channel field effect transistors (MBCFET), stacked nano-wire FETs, stacked nano-slab FETs, etc. for the 3 nm CMOS technology node [8] and beyond), understanding the impact of LER on device performance is urgently required for developing variation-robust silicon devices at the 3 nm technology node and beyond [9]. A few studies have attempted to understand, quantify, and analyze the impacts of LER on device characteristics [10]-[12]. A TCAD (Technology Computer-Aided Design)-based method has been adopted to build models that finely and accurately predict the impact of LER [13]. However, the TCAD-based approach is fundamentally time-consuming and computationally inefficient when predicting thousands of LER-induced transfer characteristics of MOSFETs in an integrated circuit.
Thus, a few studies [14], [15] have tried to compactly model the impact of LER on device performance. Nevertheless, due to the many technical barriers in developing a new compact model, a compact model for analyzing the impact of LER [14], [15] is unlikely to be developed in a timely manner, even though the LER on the fin sidewall of the FinFET should be modeled two-dimensionally to characterize/understand the sidewall surface [7], [13]. Therefore, using a machine learning (ML) technique, a simple yet novel approach with reasonable accuracy is proposed in this work to provide an alternative solution for predicting process-induced variation.

II. DEVICE DESIGN AND DATA GENERATION

A. LINE EDGE ROUGHNESS PARAMETERS

Generally, 2 or 3 parameters (e.g., Δ, Λ, and α) are used to describe the LER profile in planar MOSFETs, and 3 or 4 parameters (e.g., Δ, Λx, Λy, and α) are used in 3-D MOSFETs. The impact of each parameter on the LER profile is comparatively described in Fig. 2. The details of each parameter used in Fig. 2 are as follows [16]: (i) Amplitude (Δ): the root-mean-square (rms) value of the roughness amplitude; the smaller Δ is, the smoother the surface. (ii) Correlation length (Λ): how closely an edge is correlated with its neighboring edge; the larger Λ is, the smoother the surface. (iii) Roughness exponent (α): governs the high-frequency components of the roughness; the larger α is, the smoother the surface.

B. DEVICE DESIGN WITH LINE EDGE ROUGHNESS

A three-dimensional (3-D) bird's-eye view of a FinFET with 3-D LER on its fin sidewall is shown in Fig. 1. The design parameters of the nominal FinFET device are summarized in Table 1. To reproduce the surface roughness on the fin sidewall of the FinFET, the quasi-atomistic model [13] was used. The steps to generate a rough surface are as follows. Step I: define the key parameters, such as Δ, Λx, Λy, α and the cross-correlation parameter. Step II: obtain the 2-D power spectrum by taking the fast Fourier transform (FFT) of the autocovariance function. Step III: obtain the amplitude spectrum by taking the square root of the result of Step II. Step IV: obtain the 2-D impulse response by taking the inverse fast Fourier transform (IFFT) of the result of Step III. Step V: generate white Gaussian noise (wgn) and take the 2-D convolution of the result of Step IV with the wgn. Step VI: once the steps above are done, import the generated surface coordinates into TCAD with Sentaurus Structure Editor. The autocovariance function ACVF(x, y) of the surface is given in (1), where Λx and Λy are the correlation lengths along the x- and y-directions of the surface, respectively, and the cross-correlation parameter determines the relation between the x- and y-directions.

C. DATA GENERATION

To build and verify the artificial neural network (ANN) model, 100 different data sets (note that each data set consists of 50 different FinFETs with identical LER parameters) were generated around the baseline of Λx = 20 nm, Λy = 50 nm, α = 1, and a cross-correlation of 0. The values of the three LER parameters (Δ, Λx, Λy) are then randomly chosen from a given range for each parameter, as follows: Δ from 0.2 nm to 0.8 nm, Λx from 10 nm to 100 nm, and Λy from 20 nm to 200 nm. Each LER parameter within its range follows a uniform distribution. Note that α is set to 1 and the cross-correlation parameter is set to 0. In fact, taking the impact of α on an LER profile into account would require a very small sampling distance, which would cause a tremendous amount of computational work in the TCAD simulation runs; indeed, α is usually left out of scope in many other studies on LER [11], [14], [15], [20]. Regarding the cross-correlation parameter, we set it to 0 for simplicity, which means that the roughness along the x-direction is independent of that along the y-direction.
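Steps I through V can be sketched compactly with NumPy FFTs. The following is a minimal illustration that assumes a Gaussian form for the autocovariance in (1) (the paper's exact form is not reproduced here); the grid sizes and spacings are arbitrary choices, not values from the paper.

```python
import numpy as np

def generate_ler_surface(nx, ny, dx, dy, delta, lam_x, lam_y, seed=0):
    """Generate a correlated rough surface by shaping white Gaussian
    noise with a filter derived from an assumed Gaussian ACVF."""
    rng = np.random.default_rng(seed)
    x = (np.arange(nx) - nx // 2) * dx
    y = (np.arange(ny) - ny // 2) * dy
    X, Y = np.meshgrid(x, y, indexing="ij")
    # Steps I-II: parameters and the 2-D power spectrum of the ACVF
    acvf = delta**2 * np.exp(-((X / lam_x) ** 2 + (Y / lam_y) ** 2))
    power = np.abs(np.fft.fft2(np.fft.ifftshift(acvf)))
    # Step III: amplitude spectrum
    amplitude = np.sqrt(power)
    # Step IV: 2-D impulse response of the shaping filter
    impulse = np.real(np.fft.ifft2(amplitude))
    # Step V: convolve white Gaussian noise with the filter
    # (FFT product = circular convolution)
    wgn = rng.standard_normal((nx, ny))
    surface = np.real(np.fft.ifft2(np.fft.fft2(wgn) * np.fft.fft2(impulse)))
    # Rescale so the rms roughness matches the target Delta
    return surface * (delta / surface.std())

rough = generate_ler_surface(nx=128, ny=256, dx=0.5, dy=0.5,
                             delta=0.5, lam_x=20.0, lam_y=50.0)
print(rough.std())  # the requested 0.5 nm rms amplitude
```

Step VI (importing the coordinates into Sentaurus Structure Editor) is tool-specific and omitted here.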
Then, the Id-vs.-Vg characteristics of all FinFETs in the 100 data sets were simulated using TCAD, and the performance metrics (e.g., Ioff, Vt, Ion, SS) were extracted [see Table 2]. The data sets were separated into three groups: training data sets, validation data sets, and test data sets. The training data sets are used to update the ANN model components, such as the weight matrices and bias vectors. The validation data sets are used to monitor whether the ANN model is being well trained or over-fitted during the training process. After the training process is finished, the test data sets are used to verify whether the ANN model is well trained [see Fig. 3].

III. ARTIFICIAL NEURAL NETWORK MODELING

A. FULLY CONNECTED LAYERS

The ANN model has 1 input layer, 1 output layer and 3 hidden layers with 3 activation functions (ϕ) [see Fig. 4]. The hyperbolic tangent (tanh) is used as the activation function; it is defined as tanh(x) = (e^x − e^−x) / (e^x + e^−x) (2). The weight matrices (W1, W2, ..., W4) and bias vectors (b1, b2, ..., b4) of the ANN model determine its outputs. When the ANN model is trained, these matrices and vectors are updated to fit the training data sets over a specified number of iterations.

B. GRAFTING PROBABILITY DISTRIBUTION

In this study, we assumed that the distribution of the performance metrics follows a multivariate Gaussian distribution in order to soundly build a model for estimating the LER-induced performance variation of the device; it is known that the LER-induced variations of Vt, Ion, SS, and log10 Ioff approximately follow Gaussian distributions in various devices [11], [21], [22]. To train the ANN model with a probabilistic layer, we used the maximum likelihood estimation (MLE) method. Based on the observations Y, the MLE method estimates the parameters θ given the input X; in other words, the goal is to find the θ that maximizes P(Y|X; θ), i.e. θ* = argmax_θ P(Y|X; θ) (3). The quantities X, Y, and θ are defined in our model as follows: X: Δ, Λx, and Λy (the LER parameters); Y: {y1, y2, ..., y50}, where yi is an observed tuple of Ioff, Vt, Ion, and SS; θ: the mean vector and covariance matrix. To train the probability-grafted ANN, we used the negative log likelihood (negloglik) as the loss function, which quantifies how much the predicted distribution deviates from the observed one. Using the Adam optimizer [23], the training process was executed for 200,000 epochs (776 s) with a learning rate of 10^-5. The model was trained without overfitting [see Fig. 5].
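A minimal sketch of such a probability-grafted network, using the TensorFlow 2 and TensorFlow Probability libraries that the paper reports employing, is given below. The hidden-layer widths and the training-data shapes are assumptions, since the paper does not state them.

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfpl = tfp.layers
event_size = 4  # (Ioff, Vt, Ion, SS)

# 3 LER parameters in; a 4-D multivariate Gaussian (mean vector plus
# covariance via a lower-triangular scale) out.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="tanh", input_shape=(3,)),  # hidden 1
    tf.keras.layers.Dense(32, activation="tanh"),                    # hidden 2
    tf.keras.layers.Dense(32, activation="tanh"),                    # hidden 3
    tf.keras.layers.Dense(tfpl.MultivariateNormalTriL.params_size(event_size)),
    tfpl.MultivariateNormalTriL(event_size),
])

# "Negative log likelihood" loss: MLE of theta = (mean, covariance)
negloglik = lambda y, predicted_dist: -predicted_dist.log_prob(y)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss=negloglik)
# model.fit(X_train, Y_train, validation_data=(X_val, Y_val), epochs=200_000)
```

Samples drawn from the predicted distribution (model(x).sample()) play the role of the "randomly" extracted prediction data described in the results below.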
IV. RESULTS AND EVALUATION

Fig. 6 shows how Ion varies as the LER parameters are modified. Table 4 and Fig. 7 compare the TCAD data (= the test data sets) with the predictions of the ANN model. The prediction data were "randomly" extracted from the probability density function determined by the mean vector and covariance matrix; hence, they differ slightly from the TCAD samples and can never be identical to them. The accuracy of the prediction data was therefore evaluated using confidence intervals calculated from the standard error of the mean and of the standard deviation [24]:

standard error of the mean ≈ σ / √n, (4)
standard error of the standard deviation ≈ σ / √(2(n − 1)), (5)

where n is the number of samples in one data set. Herein, the population mean and standard deviation predicted by the ANN model are taken as the true population mean and standard deviation. Table 3 compares the simulation time of TCAD vs. the ANN. Notably, the advantage of the ANN model becomes conspicuous when the number of data points is 10,000 or more. The ANN model was built using TensorFlow 2.0 and the TensorFlow Probability python library [25], [26].

V. CONCLUSION

Line edge roughness (LER) is one of the key sources of undesirable variation in transistor performance. These undesirable fluctuations affect circuit operation and can thereby cause unexpected errors, so it is important to understand the factors causing the random variation accurately and within a reasonable time. In the FinFET, the structural deformation caused by LER appears not as a line but as a plane; thus, the compact modeling method is not the right option for a problem of such increased complexity. To avoid these difficulties, we used an ANN model and suggest it as an alternative for predicting process-induced random fluctuations. With accurate predictions (meeting a 99% confidence interval), our method is expected to help analyze the effects of LER in the fabrication process and to evaluate the yield of integrated circuits (ICs).
Computational analyses of eukaryotic promoters

Computational analysis of eukaryotic promoters is one of the most difficult problems in computational genomics and is essential for understanding gene expression profiles and reverse-engineering gene regulation network circuits. Here I give a basic introduction to the problem and a recent update on both experimental and computational approaches. More details may be found in the extended references. This review is based on a summer lecture given at the Max Planck Institute in Berlin in 2005.

Background

The promoter of a gene is defined as the cis-regulatory DNA region at a specific location (the transcription start site, or TSS) that can drive the transcription of its target gene in response to environmental signals. Computationally, it is often conveniently divided into three regions: the core-promoter (~80-100 bp surrounding the TSS), the proximal-promoter (~250-1000 bp upstream of the core-promoter) and the distal-promoter (further upstream, normally excluding enhancers or other regulatory regions whose influences are position/orientation independent). The core-promoter is minimally required for the assembly of the preinitiation complex (PIC) and can drive a reporter gene at a basal level from the TSS. The proximal-promoter often contains the major cis-regulatory elements for driving activated reporter gene expression with some tissue-specificity. However, the distal-promoter, together with distal enhancers/silencers and insulators, is often necessary for accurately reproducing endogenous gene expression patterns in vivo, especially for early developmental genes. Distal cis-regulatory elements also occur in introns and downstream regions; computational studies of these regions have therefore been difficult and often limited to conserved sub-regions and/or regions in which functional cis-regulatory elements form clusters. Most of our work has been focused on 1 kb proximal-promoters (defined as -700 to +300 with respect to the TSS). We have shown that DNA motifs in this region can predict tissue-specific gene expression [1]. Computational promoter analyses usually face two related problems: the localization of the core-promoter (TSS prediction) and the identification of cis-regulatory elements (motif discovery). Basic computational methods have been reviewed previously [2]; here I emphasize some recent developments.

New experimental developments

One recent surprise, revealed by more detailed biochemical studies of promoter activation, is that people have underestimated the diversity and complexity of core-promoter architecture and regulation. I refer readers to the recent comprehensive review on "the general transcription machinery and general cofactors" [3]. Although several core-promoter elements have been identified (Figure 1), with each element being short and degenerate and not every element occurring in a given core-promoter, the combinatorial regulatory code within core-promoters remains elusive. Their predictive value has also been very limited, despite some weak statistical correlations among certain subsets of the elements uncovered recently [4,5]. Further biochemical characterization of core-promoter binding factors under various functional conditions is necessary before a reliable computational classification of core-promoters becomes possible.
An example of the type of question that must be answered is how CK2 phosphorylation of TAF1 may switch TFIID binding specificity from a DCE to a DPE function [6] (Figure 1). The most significant advance comes from the new sequencing and microarray technologies that, for the first time, can provide ample and accurate 5'UTR sequence and core-promoter/TFBS location data. In particular, large-scale 5'RACE technology at Tokyo University and 5'CAGE tag technology at Riken have provided DBTSS (Database of Transcriptional Start Sites, mainly human) [7] and Fantom (Functional Annotation of Mouse) [8,9] with an order of magnitude more promoter sequences derived from full-length 5'UTRs/cDNAs than were present in the traditional part of EPD (Eukaryotic Promoter Database) [10]. These sequences serve as the best training data for all current computational studies in promoter recognition. Many of the surprising new statistical features of the core-promoter have come from recent analyses of such data (see [11] for a nice updated summary). One particularly interesting point made in this reference is that "Contrary to expectations, only a small fraction of RNAP II promoters appear to contain a TATA box. In contrast, a large proportion of RNAP II promoters in metazoan genomes appear to contain an INR element. Finally, about 25% of human promoters appear to lack known core promoter elements. This may point to the existence of additional core promoter sequence elements that remain to be identified and functionally characterized." More mammalian promoter statistics are discussed in [12], which presents a comprehensive study of Fantom3 data. In addition to sequence data, ChIP-chip technologies (e.g. see review [13]) provide genome-wide in vivo mapping of protein-DNA binding regions, which provides the best experimental data for all current computational studies in cis-regulatory motif discovery. Most of the important data for promoter prediction has come from the ChIP-chip localization of the PIC at active core-promoters across the whole genome at sub-100 bp resolution [14]. When more such data are produced for different tissues/cells and developmental stages, they will transform the field of computational promoter analysis.

Advances in motif discovery

The traditional approach for finding cis-elements is to collect a set of (target gene) promoter sequences believed to be enriched for some common TFBS motifs. They may be collected either from the literature or from systematic experiments (such as SELEX, etc.). Many de novo TFBS motif finding algorithms are available; for a recent review of computational TFBS finding methods see, e.g., [15], and for a recent benchmark of some popular motif finders see [16]. In addition to the classical alignment-based motif finding algorithms, such as CONSENSUS [17], EM [18]/MEME [19] and the Gibbs sampler [20], which have been reviewed previously [21], most modern approaches have tried to extend either to the discovery of motif combinations (called cis-regulatory modules or CRMs), to the use of evolutionary conservation information (with either phylogenetic footprinting or shadowing approaches), or to a combination of both. One can also increase specificity by incorporating structural information: for example, if the protein binds as a homodimer, one could restrict the search to palindromic motifs.
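To make the weight-matrix representation underlying most of these motif finders concrete, the toy sketch below builds a position weight matrix from made-up binding-site counts, computes its information content (the quantity that discriminant finders such as DME, discussed below, threshold on), and scores a sequence window by log-odds. All counts and sequences are invented for illustration.

```python
import math

BASES = "ACGT"
counts = [  # one dict per motif position (hypothetical TFBS alignment)
    {"A": 8, "C": 1, "G": 1, "T": 0},
    {"A": 0, "C": 0, "G": 9, "T": 1},
    {"A": 1, "C": 8, "G": 0, "T": 1},
    {"A": 9, "C": 0, "G": 1, "T": 0},
]

def column_probs(col, pseudo=0.5):
    """Counts -> probabilities, with a pseudocount to avoid zeros."""
    total = sum(col.values()) + 4 * pseudo
    return {b: (col[b] + pseudo) / total for b in BASES}

def information_content(counts):
    """Total IC in bits (uniform background): sum over columns of
    2 + sum_b p_b * log2(p_b)."""
    ic = 0.0
    for col in counts:
        p = column_probs(col)
        ic += 2 + sum(p[b] * math.log2(p[b]) for b in BASES)
    return ic

def log_odds_score(window, counts, bg=0.25):
    """Log-odds score of one sequence window against the matrix."""
    return sum(math.log2(column_probs(col)[base] / bg)
               for col, base in zip(counts, window))

print(f"{information_content(counts):.2f} bits")
print(f"{log_odds_score('AGCA', counts):.2f}")  # consensus scores highest
```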
More powerful and flexible motif finders can take advantage of a separate sequence set, called a background set, that serves as a negative control. The goal is then to search only for the most discriminating motifs, i.e. those enriched in the foreground set relative to the background set. Examples of such motif finders, called discriminant motif finders, include ANN-Spec [22], DMOTIFS [23], DWE [24] and DME [25]. DME is particularly novel and powerful; it can enumerate all possible (discretized) weight matrices above a user-defined minimum information content. A newer version of DME (called DME-B [26]) can optimize the classification ability of the identified motifs based on whether or not a sequence contains at least one occurrence of the motif. This technology has been used to systematically catalog mammalian tissue-specific TFBS motifs [27,28]. The most powerful generalization of this idea is to turn motif finding into a feature selection problem in regression analysis, by asking which set of features X (some functions of the motifs or CRMs) can best explain the microarray data Y (e.g. expression scores). This is very similar to the general problem in genetics: Y represents the phenotype (mRNA expression) and X represents the genotype (promoter DNA elements). One would like to learn a model (function f) so that f(X) can best predict Y. When "best" is measured by the average squared error under the distribution Pr(X, Y), the solution is the conditional expectation (also known as the regression function, see, e.g., [29]): f(x) = E(Y | X = x). REDUCE was the first successful motif selection algorithm based on linear regression [30]. It has since been generalized to include cross-interaction terms [31], to use nucleotide weight matrices discovered by MDscan (Motif Regressor [32]), to apply logistic regression [33], and to a nonlinear model based on regression trees called MARSMotif [34,35]. The matrix versions of REDUCE (MatrixREDUCE [36]) and of MARSMotif (MARSMotif-M [37]) are becoming important motif discovery tools for mammalian promoter analyses. Almost all the tools developed for analyzing expression microarray data can also easily be applied to the analysis of localization data, such as ChIP-chip data. Although ChIP-chip is a global measurement of the in vivo binding of proteins to chromatin DNA, and hence is potentially capable of revealing direct target genes (most targets identified in expression arrays are not direct targets), due to the current resolution and to non-specific or non-functional cross-links, not all putative targets are functional or possess functional cis-elements. ChIP-chip data have also been used to further refine motifs found with expression data (e.g. using a boosting approach [38]).
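A minimal REDUCE-style sketch of this regression idea, with toy sequences and made-up expression scores (REDUCE itself fits motif contributions transcriptome-wide; this only shows the shape of the computation):

```python
import numpy as np

motif = "AGCA"
seqs = ["TTAGCAGG", "AGCAAGCA", "GGGGTTTT", "CAGCATTA"]
expr = np.array([1.1, 2.3, 0.1, 1.0])  # hypothetical expression scores Y

def count_motif(seq: str, motif: str) -> int:
    """Occurrences of the motif in one promoter sequence (feature X)."""
    return sum(seq[i:i + len(motif)] == motif
               for i in range(len(seq) - len(motif) + 1))

# Ordinary least squares: expression ~ intercept + beta * motif count
X = np.column_stack([np.ones(len(seqs)),
                     [count_motif(s, motif) for s in seqs]])
beta, *_ = np.linalg.lstsq(X, expr, rcond=None)
print(f"fitted motif effect: {beta[1]:.2f}")  # per-site contribution to Y
```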
Better promoter prediction

A number of statistical and machine learning approaches that can discriminate between known promoter sequences and non-promoter sequences have been applied to TSS prediction. In a recent large-scale comparison [39], eight prediction algorithms were evaluated. Among the most successful were Eponine [40] (which trains Relevance Vector Machines to recognize a TATA-box motif in a G+C-rich domain and uses Monte Carlo sampling), McPromoter [41] (based on neural networks, interpolated Markov models and the physical properties of promoter regions), FirstEF [42] (based on quadratic discriminant analysis of promoters, first exons and the first donor site) and DragonGSF [43,39] (based on artificial neural networks). However, DragonGSF is not publicly available and uses additional binding site information from the TRANSFAC database [44], exploiting specific information that is typically not available for unknown promoters. Two new de novo promoter prediction algorithms have emerged that further improve accuracy. One is ARTS [45], which is based on Support Vector Machines with multiple sophisticated sequence kernels. It claims to find about 35% true positives at a false positive rate of 1/1000, where the above-mentioned methods find only about half as many true positives (18%). ARTS uses only downstream genic sequences as the negative set (non-promoters), and may therefore produce more false positives in upstream non-genic regions. Furthermore, ARTS does not distinguish whether a promoter is CpG-island related or not, and it is not clear how ARTS may perform on non-CpG-island-related promoters. Another novel TSS prediction algorithm is CoreBoost [46], which is based on simple LogitBoosting with stumps. It has a false positive rate of 1/5000 at the same sensitivity level (Zhao, personal communication). CoreBoost uses both immediate upstream and downstream fragments as negative sets and trains a separate classifier for each before combining the two. Its training samples are 300 bp fragments (-250, +50), hence it is more localized than ARTS, whose training samples are 2 kb fragments (-1 kb, +1 kb). The ideal application of TSS prediction algorithms is to combine them with gene prediction algorithms [21] and/or with the ChIP-chip PIC mapping data [14].

Future direction: epigenetics and chromatin states

Although much progress has been made in promoter prediction and cis-regulatory motif discovery, false positives are still the main problem when scanning through the whole genome. Fundamentally, this is because information about chromatin structure is still missing from all our models. Protein-DNA binding specificity is partly determined by energetics and partly by "entropy", which depends on how much of the genome is accessible to the DNA-binding protein [47]. Without knowing which regions of chromatin are open or closed (and to what degree), researchers have to assume that the whole genome is accessible for binding, which is obviously wrong and will lead to more false positives (and false negatives, because of the extra noise). This is clearly shown by recent genome-wide ChIP-chip data as well as DNase I hypersensitivity mapping data. There is a need for higher-order prediction algorithms capable of predicting chromatin states based upon, perhaps, genome-wide epigenetic measurements, CpG-islands and repeat characteristics in addition to genomic sequences. It is fortunate that such kinds of data are rapidly being generated [48-54] and the corresponding analysis tools [55-57] are also coming along. The days of more realistic dynamic modeling of chromatin structure and its relation to expression and regulation are finally coming.
Features and Constitutive Model of Gypsum's Uniaxial Creep Damage considering Acidization

School of Resource, Environment and Safety Engineering, Hunan University of Science and Technology, Xiangtan, Hunan 411201, China
School of Energy and Mining Engineering, China University of Mining and Technology (Beijing), Beijing 100083, China
Work Safety Key Lab on Prevention and Control of Gas and Roof Disasters for Southern Coal Mines, Hunan University of Science and Technology, Xiangtan, Hunan 411201, China
Foreign Language School, Hunan University of Science and Technology, Xiangtan, Hunan 411201, China

Introduction

Currently, about 44 percent of mines are exploited by the room-and-pillar method. Lorraine iron ore accounts for 94% of France's total production, and 58 mines in Lorraine are mined by the room-and-pillar method. In the United States, the method is used by 65 percent of metal mines, and in Sweden by over half of nonferrous metal mines [1,2]. During mining, accidents are not uncommon; they threaten engineering quality and the safety of miners, and even bring substantial economic losses to the country [3-6]. The potential safety hazard of the method is mainly manifested as follows: the pillar is eroded and becomes sharp (see Figure 1), which leads to the overall instability of the room [7-10]. With massive mining, shallow mineral resources are gradually exhausted, pushing mining deeper, where the underground environment becomes increasingly complex [11-16]. After excavation, owing to the existence of groundwater, the pillar is eroded, leading to a gradual decrease of its stability with time and to eventual failure [17-19]. In the process of erosion to destruction, the influence of groundwater and mine pressure on the pillar cannot be ignored [3,4,20]. In particular, weakly acidic groundwater seriously threatens the pillars [21-23]. Generally, the causes of the formation of acidic groundwater are man-made and natural. Artificial acid groundwater is generally formed by the interaction of oxygen-containing water with surrounding rocks during mining [24,25], while naturally formed acidic groundwater is due to the long-term evolution of the water environment under natural conditions [26-29]. Such water causes damage to the mechanical properties of rocks, which is an inescapable problem [12,30-33].

Wawersik et al. [34] conducted tests on granite and sandstone, and the results showed that the time-varying effect is enhanced with moisture content; under uniaxial compression, the steady-state creep rate of dry specimens differs by about two orders of magnitude from that of saturated specimens. The tests by Dong et al. [35] showed that the intrusion of groundwater into rocks mainly produces two effects: the friction and cohesion between the rock mineral grains decrease, and the mineral composition and microstructure change, resulting in pores, caves, and cracks, which eventually leads to the softening of the rock and greatly reduces its strength. Taking tuff as the research object, Zhu et al. [36] carried out creep tests in dry and saturated states, respectively, and discussed the regularity of rock creep in the water-bearing state. The results showed that, in contrast to the creep deformation, the water content only slightly affects the instantaneous elastic deformation modulus, but it greatly affects the ultimate creep deformation.
Comparing dry and saturated samples, a 5-6-fold difference in creep deformation was found. Huang et al. [37] conducted uniaxial compression creep tests on mudstone under different water-bearing conditions and found that, with increasing water content, the elastic modulus and uniaxial compressive strength decrease greatly, while the creep deformation and steady-state creep rate increase significantly. Li et al. [38] carried out creep tests on granite in air-dried and saturated states. The results showed that the long-term strength of the saturated granites was lower than that of the dried ones, while their creep rate and deformation were generally larger. Liu et al. [39] used soft conglomerate in uniaxial and biaxial creep tests to analyze the deformation characteristics of dry and saturated specimens; the instantaneous deformation modulus of the saturated conglomerate was found to be much lower than that in the dry state. Okubo et al. [40] carried out uniaxial compressive creep tests on tuff and andesite in dry and saturated states and studied their creep characteristics. The creep strain of the rock in the saturated state was found to be larger than in the dry state, while the creep failure strength was smaller. Xie et al. [41] applied chemical corrosion to porous limestone and then carried out triaxial creep tests; the results showed that chemical reagent corrosion can increase both the creep deformation and the permeability. Li et al. [42] executed shear creep tests on weak structural planes of sandstone under different moisture contents. The results showed that the effect of moisture content on the shear creep deformation of weak structural planes is very significant: with increasing water content, the deformation increases gradually, the creep strength decreases, and the time to reach a stable creep state increases continuously. Based on the engineering background of the Wuhan Yuejiang Tunnel, Li et al. [43] carried out dry and saturated shear creep tests, respectively, and found that water can accelerate the creep strain rate and reduce the creep failure strength; therefore, in engineering practice, the effect of water on sandstone creep cannot be ignored. Wang et al. [44] carried out triaxial creep tests on silty mudstone in dry and saturated states, respectively, to analyze the influence of water on its creep deformation and long-term strength. The results showed that the instantaneous strain, creep strain, and total strain of silty mudstone in the dry state are all smaller than those in the saturated state, and that water can lead to a significant change in its creep mechanical properties. Brzesowsky et al. [45] investigated the coupled effects of the hydrochemical environment and stress state on the creep of sandstone under compression. Most scholars study the creep of rock in the hydrochemical environment by hydrating the specimens before testing; few studies perform the two simultaneously.

The constitutive model is a key and difficult point in rock creep research [46]. Many scholars have established creep constitutive models of rock materials in different ways and achieved fruitful results, including empirical models, component combination models, and nonlinear theoretical models [47-49]. Empirical models are the simplest: they establish stress-strain-time functional relationships from test data by mathematical methods, and can be obtained for different rock materials under different experimental conditions (a minimal fitting sketch is given below).
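As an illustration of the empirical approach, the sketch below fits a power-type creep law, strain(t) = A·t^n, to synthetic data with SciPy; the data points and the chosen functional form are illustrative, not from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_creep(t, A, n):
    """Power-type empirical creep law: strain = A * t**n."""
    return A * t**n

t = np.array([1.0, 5.0, 10.0, 24.0, 48.0])          # hours under load
strain = np.array([1.02, 1.06, 1.08, 1.11, 1.13])   # percent (synthetic)

params, _ = curve_fit(power_creep, t, strain, p0=(1.0, 0.05))
A, n = params
print(f"strain(t) = {A:.3f} * t^{n:.3f}")  # fitted empirical model
```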
At present, common empirical model equations are of the power, exponential, or logarithmic type, or combinations of the three; the underlying theory mainly includes the aging, flow, reinforcement, and elastic continuation theories [46,50]. Xu et al. [51] took granite as the research object, carried out creep tests, and summarized the results into an empirical formula of the negative exponential type. Jiang et al. [52] executed uniaxial creep tests on sandstone and, by fitting a power function to the creep test data, obtained a power-function-type empirical model. Singh et al. [53] and Mesri et al. [54] derived power-function constitutive relations between strain and time from consolidation creep tests on clay. Li et al. [55] performed uniaxial creep tests on marble and obtained an empirical creep formula by fitting the test curve. Lu et al. [56] carried out triaxial consolidated undrained creep tests. However, empirical models make it difficult to reflect the creep mechanism and characteristics inside the rock, so they are rarely used at present. Component combination models idealize the rock medium as basic elements in series or parallel to reflect the properties exhibited during creep, such as elasticity, viscosity, and plasticity [58]. The combination model is characterized by its conceptual simplicity and can reflect the creep characteristics of various rocks by changing the magnitudes, number, and combination mode of the mechanical parameters of the basic elements; it is therefore widely used (a textbook example is sketched at the end of this section).

Great achievements have been made in creep tests and constitutive models of rocks under the influence of the hydrochemical environment [76]. However, most related studies acidify the rocks before the mechanical properties are tested, which is not quite consistent with actual engineering [77], where mine pressure and the immersion of groundwater influence the pillars continuously. At present, the creep constitutive model of rock is mainly applied to the analysis of the rock mass itself, and rarely accounts for external environmental factors such as acid, alkali, temperature, and humidity [78,79]. Based on the abovementioned engineering practice, a hydrogeological survey of the gypsum mine in Lilin County, Hunan Province, was carried out. Weakly acidic groundwater was found in this mine area, where the pillars have been damaged by its erosion over a long period. Therefore, this paper takes the gypsum rock of the mine area as the research object, simulates the groundwater erosion environment by immersing the gypsum rock, and studies the uniaxial compression creep mechanical characteristics and constitutive model of the gypsum rock under acid erosion, thereby providing guidance for the retention and protection of pillars.
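For concreteness, the sketch below evaluates the classical Burgers body (a Maxwell and a Kelvin element in series), a textbook component combination model; it is not the specific model developed in this paper, and all parameter values are invented.

```python
import numpy as np

def burgers_creep(t, sigma, E1, eta1, E2, eta2):
    """Creep strain of a Burgers body under constant stress sigma:
    instantaneous elastic + steady viscous + delayed (Kelvin) terms."""
    return (sigma / E1                                      # instantaneous elasticity
            + sigma * t / eta1                              # steady viscous flow
            + (sigma / E2) * (1 - np.exp(-E2 * t / eta2)))  # delayed elasticity

t = np.linspace(0.0, 48.0, 100)   # hours
eps = burgers_creep(t, sigma=32.0, E1=3.0e3, eta1=5.0e5, E2=8.0e3, eta2=2.0e4)
print(eps[0], eps[-1])            # instantaneous strain vs strain after 48 h
```

Changing the number and arrangement of the spring and dashpot elements yields other members of the component combination family (Maxwell, Kelvin, Nishihara, etc.).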
Laboratory Tests

2.1. Preparation of Rock Specimens. All the specimens are taken from the gypsum mine in Lilin County, Changde, Hunan Province. To reduce the dispersion of the rock specimens, we sample them from the same rock block by drilling and coring (see Figures 2(a)-2(c)). Specifically, we make their ends and sides flat with a cutting machine and grind them with sandpaper to ensure integrity and smoothness. Finally, we obtain 50 × 100 mm standard cylindrical specimens with smooth surfaces. We prepare hydrochloric acid solutions with pH values of 5, 6, and 7 in advance (see Figure 3). Then, the processed specimens are put into the prepared solutions (see Figure 2(d)).

2.2. Experiment Content and Test Method. To observe the mechanical properties of gypsum before and after the acidization treatment, scanning electron microscope (SEM) observations (see Figure 4(a)) and uniaxial compression creep tests (see Figure 4(b)) are performed.

SEM Observation. To observe the changes in the gypsum rock microstructure before and after acid corrosion, we perform SEM tests on four kinds of specimens: the original gypsum specimen in the dry state, and specimens saturated for 49 days in the solutions with pH values of 7, 6, and 5, respectively. The sample surfaces are scanned at a magnification of 400-450 (see Figure 4(a)).

Uniaxial Compression Creep Test. There are many loading methods for creep tests, among which single-stage loading, multistage monotonic loading, and multistage cyclic loading are commonly used [80] (see Figures 5(a)-5(c)). The uniaxial compression creep tests are carried out on the RMT-150C rock mechanics testing machine at the Hunan University of Science and Technology (see Figure 4(b)). Considering the dispersion of the samples and the addition of acid solutions, we combine multistage monotonic loading (see Figure 5(b)) with multistage cyclic loading (see Figure 5(c)) in this test and call the result the multistage monotonic cyclic loading mode (see Figure 5(d)). The stress in the uniaxial compression creep tests is applied at three levels: 50%, 60%, and 70% of the uniaxial compressive strength. Since the previously measured average uniaxial compressive strength of the gypsum rock is 64 MPa, the three stress levels are set to 32 MPa, 38.4 MPa, and 44.8 MPa, respectively. Each stress level is maintained for 48 h. After the first monotonic loading stage, we unload the stress to 0.1 kN for 24 h and observe the unloading curve. Then, we add the hydrochloric acid solutions with pH values of 5, 6, and 7 into the molds of the first, second, and third groups, respectively. Next, we carry out the second-stage loading on the specimens and observe the change of the stress-strain curves. The stress levels and holding times are the same as in the first stage.

Laboratory Results and Discussion

3.1. Chemical Damage Analysis. The main component of gypsum is CaSO4; however, other oxides, such as SiO2 and CaO, also exist in gypsum rocks. As the reaction goes on, the contents of Ca, K, and Na in the gypsum rock decrease, while the proportions of Si and Mg increase, because the reactions of hydrochloric acid with CaSO4 and other substances form solutions or gas. Non-hydrophilic minerals, or minerals that cannot react with hydrochloric acid, become exposed on the rock surface. Since the acidity affects the reaction efficiency, the proportions of the elements differ among the four conditions (see Figure 6). In the acidic solutions, the oxide minerals of the gypsum react with H+. The rock specimen saturated in distilled water (pH = 7) shows no obvious change in size, but it changes in color (see Figure 7(b)). Compared with the specimen saturated in the acid solution with pH = 6 (see Figure 7(c)), the one in the solution with pH = 5 has more powdery particles falling off its surface and shows fine cracks (see Figure 7(d)). Therefore, an acid solution with a lower pH value causes stronger chemical damage to the gypsum.
Microscopically, the rock specimen shows a clear layered structure and lamellar crystalline morphology before immersion. It has good homogeneity, a dense internal structure, and a small interlayer distance. The microfractures and micropores are small and relatively scattered, and there is almost no large pore in the specimens (see Figure 8(a)). This means that the gypsum rocks have good macroscopic mechanical properties before erosion. After saturation in the neutral water, the structure of the specimen does not change significantly, but the color becomes lighter (see Figure 8(b)). After saturation in the acid solution, the original lamellar structure or lamellar crystal morphology changes to a sponge or floc shape, the structural porosity increases, and the interlayer boundary becomes fuzzy (see Figure 8(c)). The numbers of microcracks and micropores increase, and some independent small micropores connect to form large-scale "gullies" (see Figure 8(d)). Besides, the solution with the lower pH value causes more serious damage to the internal microstructure. The mass of the specimens also decreases after immersion in the solution [81]; this phenomenon likewise reflects that more serious damage occurs to the microstructure of gypsum with stronger acidity. Note that the mass of the specimens saturated in the distilled water also decreases slightly; thus, there exist water-soluble minerals in the gypsum. 3.2. Uniaxial Creep Mechanics and Deformation Characteristics of Gypsum. Through three groups of uniaxial compression creep tests under acid corrosion, we obtain creep data of gypsum for different pH values. Considering the discreteness of the samples, we choose the most representative specimen from each test group for analysis. Table 1 shows the creep damage of the gypsum rocks, and Figure 10 illustrates the creep curves of the specimens under the different acid conditions. When the three groups of specimens are loaded with the three stress levels in the first loading process, their strain increments are nearly identical. When the axial stress is unloaded to the preload value, residual deformations exist in the specimens and are almost the same (see Figure 10). This means that the dispersion of the three groups is small and the test results are reliable. During the second loading process, the specimen soaked in the solution with pH = 5 is damaged under the first stress level, whereas the specimen soaked in the solution with pH = 6 and the one soaked in distilled water both fail under the second stress level. The difference is that the former specimen is damaged earlier than the latter, which confirms that stronger acidity causes greater creep damage. According to the test data, we draw the creep curves of the specimens under the different stress levels (see Figure 11) and analyze the test results in detail. In the first loading process, the rocks show instantaneous strain when placed under axial stress. For the three groups of specimens in the dry state, when the stress is 32 MPa, their instantaneous axial strains are 1.10%, 1.00%, and 1.09%. As time goes on, the strains increase more and more slowly, and the final strains stabilize at 1.11%, 1.13%, and 1.10%, respectively. When the stress increases to 38.4 MPa, the instantaneous strain increments are 0.11%, 0.10%, and 0.13%, respectively. The strains again grow more slowly with time, and the final strain increments stabilize at 0.13%, 0.12%, and 0.15%, respectively.
When the stress increases to 44.8 MPa, the instantaneous strain increments are 0.10%, 0.09%, and 0.09%, respectively. The strains increase slowly with time, and the final strain increments stabilize at 0.13%, 0.11%, and 0.10%. Under axial stress loading, the strains thus increase slowly with time while the growth rate decreases; when the growth rate reaches zero, the specimen enters a stable creep state. When the stress increases from the first level to the second level, the strain increment of the specimens is greater than when the stress increases from the second level to the third level, which indicates that the specimens are fully compacted under the second stress level rather than the first. In the unloading process, the strains of the three groups of specimens all drop sharply to about 0.29% and then stabilize. The deformations are not fully recovered; residual deformations remain, indicating that the specimens have undergone plastic deformation (see Figure 11(a)). In the second loading process, we add the different solutions into the molds. The rock specimen soaked in the pH = 5 solution shows the most obvious change: after being loaded with the first stress level, it is damaged after 11.3 h. At the beginning of loading, the instantaneous strain appears and the strain grows from 0.29% to 1.19%. As can be seen from Figure 11(d), the first creep stage is very short. The specimen soon enters the steady-state creep stage, during which the strain rate is almost unchanged at about 1.03 × 10^-4/h. After about 11 h, the specimen enters the accelerated creep stage: the strain begins to accelerate, the strain rate increases, the creep curve becomes concave upward, and the specimen is quickly damaged. As for the specimen soaked in the solution with pH = 6 and the one soaked in the neutral distilled water, their initial creep stages are very short under the first stress level; they quickly enter the steady-state creep stage and do not show an accelerated creep stage. Under the second stress level, these two groups of specimens experience the initial, steady, and accelerated creep stages. Their strain rates in the steady creep stage are 1.46 × 10^-5/h and 1.16 × 10^-5/h, respectively. Besides, the specimen saturated in the acid solution is destroyed much earlier than the one soaked in the neutral distilled water (see Figures 11(b) and 11(c)). We compare the strain rates of the rock specimens in the steady-state creep stage before they enter the accelerated creep stage. As shown in Figure 12, saturation at a smaller pH value produces a greater strain rate; as the pH value decreases, the strain rate increases exponentially. This shows that hydrochloric acid accelerates the creep of gypsum, and the stronger the acidity, the greater its influence on the creep damage. Uniaxial Creep Failure Patterns of Gypsum under Acid Corrosion. In different external environments, rocks present different failure patterns [82]. As for the three groups of gypsum specimens under acid corrosion, their uniaxial creep failure patterns are roughly the same (see Figure 13): all of them fail in splitting modes with end damage of different degrees, and the specimen saturated in the acid solution with pH = 5 is the most severely destroyed. Since some minerals in the specimens can react with hydrochloric acid and change into solution and gas, the ends of the specimens become sharper after the reaction.
Moreover, under the axial stress, the rock specimens are finally destroyed. It can be seen from the damage degrees that the solution with the smaller pH value has a more obvious effect on the creep characteristics of gypsum. 3.3. The Creep Constitutive Model of Gypsum under Acid Corrosion. Under external loads, rocks show complex mechanical characteristics, such as elasticity, plasticity, and creep. In the study of creep characteristics, the most important parts are the construction and application of the creep model [83]. To describe the different creep characteristics of rocks, researchers usually resort to empirical formulas or differential equations. Based on the experimental data of the uniaxial compression creep of gypsum obtained in the previous sections, we introduce a nonlinear element and combine it with basic elements to establish a new constitutive model that can describe the accelerated creep stage of gypsum under the acid solution. The classical Burgers model consists of a Kelvin body (spring k1 and dashpot η1 in parallel) and a Maxwell body (spring k2 and dashpot η2 in series) connected in series [84, 85]. Its mechanical model and creep curve are shown in Figure 14. The constitutive equation of the Kelvin model is σ = k1 εK + η1 (dεK/dt), (2) and the constitutive equation of the Maxwell model is dεM/dt = (1/k2)(dσ/dt) + σ/η2. (3) According to the series relation, there is ε = εK + εM, with the same stress σ acting in both bodies. (4) From equations (2), (3), and (4) we obtain equation (5); by differentiating and simplifying equation (5), we get the constitutive equation of the Burgers model as σ + (η1/k1 + η2/k1 + η2/k2)(dσ/dt) + (η1η2/(k1k2))(d²σ/dt²) = η2 (dε/dt) + (η1η2/k1)(d²ε/dt²). (6) The creep equation of the Kelvin model is ε(t) = (σ0/k1)[1 − exp(−k1t/η1)], (7) and the creep equation of the Maxwell model is ε(t) = σ0/k2 + σ0t/η2. (8) According to the superposition principle, the creep equation of the Burgers model can be obtained by superimposing the creep equations of the Kelvin and Maxwell models as ε(t) = σ0/k2 + σ0t/η2 + (σ0/k1)[1 − exp(−k1t/η1)]. (9) According to equation (9), when t = 0, there is ε(0) = σ0/k2, (10) which means that the Burgers model has instantaneous elastic deformation; at t = 0, no component except the spring k2 has deformation. As time goes on, the strain increases, and the viscous element flows at a constant velocity. At the moment t1, the unloading process begins; its curve is shown in Figure 14(b). During the unloading process, an instantaneous spring-back occurs on the spring k2, and with increasing time the deformation of the Kelvin body continues to recover. At this time, the deformation is ε(t) = σ0t1/η2 + (σ0/k1)[1 − exp(−k1t1/η1)] exp(−k1(t − t1)/η1), t > t1. (11) When t1 is large enough, the deformation recovered by the elastic after-effect is σ0/k1, (12) and finally the remaining deformation of the model is σ0t1/η2. (13) The above analysis shows that this model has the characteristics of instantaneous deformation, deceleration creep, and constant-velocity creep. According to the above test results, under the condition of acid corrosion and low stress, the gypsum rocks show instantaneous deformation and undergo the deceleration and stable creep stages; under high stress, they also show instantaneous deformation and undergo the deceleration, stable, and accelerated creep stages. Although the Burgers model can well reflect the properties of instantaneous deformation, deceleration creep, and stable creep, it cannot represent the accelerated creep stage. Therefore, based on the Burgers model, this paper constructs a new model to describe the characteristics of gypsum rocks in the different creep stages under acid corrosion. Under the corrosion of acid, the mechanical properties of gypsum become weaker; at the same time, the external stress dislocates the internal structure of the rock [86]. The creep failure of the specimen finally occurs under the combined action of force and acid.
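To make the Burgers response above concrete, the following minimal Python sketch (added here, not the authors' code) evaluates the reconstructed creep equation (9) and the permanent strain of eq. (13); the parameter values are illustrative placeholders rather than values identified from the gypsum tests.

```python
import numpy as np

def burgers_strain(t, sigma0, k1, eta1, k2, eta2):
    """Creep strain under constant stress sigma0, eq. (9):
    Maxwell part (instantaneous + viscous flow) plus Kelvin part (after-effect)."""
    maxwell = sigma0 / k2 + sigma0 * t / eta2
    kelvin = (sigma0 / k1) * (1.0 - np.exp(-k1 * t / eta1))
    return maxwell + kelvin

# Illustrative parameters (placeholders, not fitted to the test data):
sigma0 = 32.0            # first-level stress, MPa
k1, eta1 = 60.0, 3.0e2   # Kelvin spring (MPa) and dashpot (MPa*h)
k2, eta2 = 30.0, 2.0e4   # Maxwell spring (MPa) and dashpot (MPa*h)

t = np.linspace(0.0, 48.0, 7)  # 48 h holding time, as in the tests
print(burgers_strain(t, sigma0, k1, eta1, k2, eta2))

# After unloading at t1, spring k2 rebounds instantly and the Kelvin body
# recovers exponentially, leaving the permanent strain of eq. (13):
t1 = 48.0
print("permanent strain:", sigma0 * t1 / eta2)
```

Fitting k1, η1, k2, η2 (and, above the threshold stress, the nonlinear-element parameters introduced next) to the measured curves would follow the same parameter-identification idea the paper uses for Figure 17.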
Since acidification exists throughout the whole creep process, the parameters that reflect the creep characteristics of the gypsum rocks must be related to the pH value; the main such parameters are the elastic modulus and the viscosity coefficient of the gypsum rock. In the accelerated creep stage, the presence of acid promotes the accelerated creep process and rock failure. Xu et al. [87] proposed a new nonlinear viscoplastic body (NVPB) model by paralleling a nonlinear viscous element with a plastic element; the NVPB model can well reflect the accelerated creep stage of rock. Its mechanical model and creep curve are shown in Figure 15. In this paper, we use this model to describe the accelerated creep stage of gypsum rocks under acid corrosion. The creep equation of the model is ε(t) = ((σ − σs)/η) t^n for σ > σs. (14) When n ≠ 1, the relationship between time and strain is nonlinear: when n > 1, the strain rate increases with time, and when n < 1, the strain rate decreases with time. When the model is used to describe the accelerated creep stage of gypsum rock, n must therefore be greater than 1. In this equation, η is the viscosity coefficient. Since the model describes the creep characteristics of the gypsum rock under acid corrosion, η must be related to the pH value and the stress σ; that is, η = η(pH, σ). As shown above, the Burgers model can well describe the creep mechanical properties of gypsum rocks. Therefore, by connecting the NVPB model with the Burgers model in series, we obtain a nonlinear creep model that can fully reflect the accelerated creep stage of gypsum rocks under acid corrosion. The elastic moduli and viscosity coefficients in the Burgers model are related to the pH value of the acid solution, so the specific structure of the nonlinear creep model is shown in Figure 16. Combining the creep equation (9) of the Burgers model with the creep equation (14) of the NVPB body gives the creep equation of the new model: ε(t) = σ0/k2(pH) + σ0t/η2(pH) + (σ0/k1(pH))[1 − exp(−k1(pH)t/η1(pH))] + ((σ0 − σs(pH))/η3(pH, σ)) t^n for σ > σs(pH). (15) According to equation (15), when σ ≤ σs(pH), the model degenerates to the Burgers model and can describe the properties of the gypsum rock in the deceleration and stable creep stages; when σ > σs(pH), the model is a nonlinear creep model that can also describe the accelerated creep stage. 3.3.2. Verification of the Creep Constitutive Model. By substituting the identified parameters into the above model, we obtain the constitutive model curves; Figure 17 shows their comparison with the test curves. According to Figure 17, the established constitutive model curves and the creep test curves coincide well under the various conditions. This means that the creep process of gypsum rock under acid corrosion can be well described by connecting the Burgers model with the NVPB model proposed by Xu et al. [87]. Besides, the squared correlation coefficient of the parameter identification approaches 1, which further verifies the correctness and rationality of the new model. Conclusions In light of the above work, the main conclusions of this paper are as follows: (1) After acidic saturation, the original lamellar structures and crystal forms became spongy or flocculent. The sample structure loosened, and the boundary between layers became fuzzy. Meanwhile, the numbers of microcracks and micropores increased, which weakened the macromechanical properties of the gypsum (2) The gypsum specimens in the pH = 5 hydrochloric acid were damaged at the first stress level, while those at pH = 6 and pH = 7 were destroyed at the second stress level.
The difference is that the failure time of the former was earlier than that of the latter, which indicates that stronger acidity causes greater corrosion damage to the creep behavior of the samples (3) The failure modes of the three groups were basically the same, with splitting failure and end damage of different degrees. The damage to the end of the gypsum sample in the pH = 5 hydrochloric acid is the most obvious (4) A new nonlinear creep constitutive model was established by connecting the Burgers model and the NVPB model in series; it agrees well with the creep test results and provides guidance for practical calculation. Data Availability The numerical data used to support the findings of this study have not been made available because, given the nature of this research, the participants of this study did not agree for their data to be shared publicly. Conflicts of Interest The authors confirm that there are no known conflicts of interest associated with this publication and that there has been no significant financial support for this work that could have influenced its outcome.
A Hybrid Deep Learning Intrusion Detection Model for Fog Computing Environment Fog computing extends the concept of cloud computing by providing the services of computing, storage, and networking connectivity at the edge, between the data centers of cloud computing environments and end devices. Having intelligence at the edge enables faster real-time decision-making and reduces the amount of data forwarded to the cloud. When enhanced by fog computing, the Internet of Things (IoT) achieves low latency and improved real-time behavior and quality of service (QoS) in IoT applications such as augmented reality, smart grids, smart vehicles, and healthcare. However, both cloud and fog computing environments are vulnerable to several kinds of attacks that can lead to unexpected loss. For example, a denial of service (DoS) attack can block authenticated users by rendering network resources unavailable and consuming network bandwidth unnecessarily. This paper proposes an intrusion classification model using a convolutional neural network (CNN) and Long Short-Term Memory networks (LSTM) to obtain the advantages of deep learning methods in order to accurately predict such attacks. The proposed integrated CNN-with-LSTM-based Fog Computing Intrusion Detection (ICNN-FCID) model is used for multi-class attack classification. Our proposed model is demonstrated using NSL-KDD, a benchmark dataset, and provides an attack detection accuracy of about 96.5%. Comparisons of the accuracy of our model with both traditional and other recent deep learning approaches show that our model is superior in performance. The ICNN-FCID model can be used in fog layer devices, where network traffic is monitored and attacks are detected. As a result, the cloud server and fog layer devices can be protected from malicious users and remain available to provide services to IoT devices. Introduction The Internet of Things (IoT) refers to the interconnected billions of physical devices that store and exchange data from around the world through the internet, where the data can be processed and used for many purposes. The large amounts of data generated by the IoT need to be stored, processed, and accessed [1,2]. The cloud computing paradigm can be used for big data storage and analytics. The sensing data of IoT devices can be stored in the cloud so that the smart devices can be monitored and actuated [3]. This enables the development of new applications using IoT and smart devices. Fog computing is an architecture that integrates cloud and IoT technology. The fog architecture acts like a cloud but is closer to the end user, and it brings cloud computing facilities to the edge of the network, through which connected devices can obtain cloud services [4]. Fog computing enables the operations of cloud computing by means of a control plane and a data plane. A fog node is a device that includes the capabilities of computing, storage, and network connectivity. Multiple fog nodes can be installed to support end devices. Switches, embedded servers, controllers, routers, and cameras can act as fog nodes. Most of the time-sensitive data generated by the end devices are sent to the fog node, where the data are analyzed. The response is sent to the device in a fraction of a second. The fog node then sends a summary of the data and of the work performed to the cloud for further analysis. Less time-sensitive data can be processed after seconds or minutes and sent to the aggregate node.
After analysis, the aggregate node sends the response via the nearest node to the device. Later, the aggregate node sends the report to the cloud for future review. IoT network data that are not time-sensitive can be sent to the cloud, where they are processed, analyzed, and stored; the end devices may wait hours, days, or even weeks for these data. The fog computing architecture also uses private servers to store confidential data; such a local server helps ensure data security and privacy. The fog node can receive irrelevant data from malicious users during communication. Attackers can produce a flood of data to execute a denial of service (DoS) attack, thereby diminishing the availability of fog nodes and the cloud. A local fog server is vulnerable to several kinds of attacks, including DoS, Remote to Local (R2L), Probe, and User to Root (U2R), which requires effective detection and prevention of the various attacks [5]. Since the fog node is constrained by limited resources, if it suffers a DoS attack, it will not be able to provide services to users, and the network performance will be greatly reduced. An intrusion detection system (IDS) is used [6] to monitor a network or systems for malicious activity; such activity should be reported either to an administrator or collected centrally. An IDS can be network-based, checking network traffic for attack signatures; host-based, monitoring host systems; or application-based, monitoring specific applications and programs. An IDS can be implemented through deep learning models, which are advanced machine learning models. Such a model consists of several interlinked consecutive layers, each layer receiving the previous layer's output as input. The key advantage of deep learning algorithms over other machine learning algorithms is their ability to perform feature engineering on their own: a deep learning algorithm scans the data for correlated features and combines them to enable faster learning without explicit guidance. Deep learning models are capable of creating new features by themselves, and once properly trained, they can perform thousands of routine, repeatable tasks within a short time frame. A convolutional neural network (CNN) is a deep learning algorithm that can be used in various domains such as image processing, natural language processing (NLP), and biomedical applications. CNNs have achieved excellent research results in image classification, sentiment classification, relation classification, textual summarization, and disease diagnosis and detection [7,8], and they have also been applied to various information security use cases, such as malware classification, intrusion detection, Android malware detection, spam and phishing detection, and binary analysis. Historically, network intrusion detection was performed by machine learning models. However, these algorithms can produce many false positives, creating repetitive work for security teams. Deep learning models can be used to develop more intelligent IDSs that analyze network traffic more reliably. To address the challenge of intrusion detection in fog computing environments, this paper proposes an integrated CNN-with-LSTM-based intrusion classification methodology (ICNN-FCID). This methodology can reduce the number of false alerts and help security teams distinguish between bad and good network activities.
Related Work This section lists various accomplishments in the area of intrusion detection, specifically real-world IDSs. Much research has been carried out in the area of network intrusion [9,10]. Yang et al. [11] designed the SVM-RBM algorithm using a support vector machine (SVM) and a restricted Boltzmann machine (RBM) to detect network anomalies. They used the unsupervised RBM algorithm to extract useful features from the datasets and trained the SVM classifier in a short time using the Spark gradient descent algorithm, exploring the number of hidden units to improve the performance of SVM-RBM. Jiang et al. [12] proposed LSTM recurrent neural networks (LSTM-RNNs) as an intelligent multi-channel attack detection model. They performed multi-channel training with different types of features to preserve the attack features of the input data, classified attack and normal data, and used a voting algorithm over the classifiers' detection results to determine whether the input data constitute an attack. They showed that their approach was superior to other attack detection methods, such as Bayesian or SVM classifiers. Peng et al. [13] proposed a decision tree-based IDS for fog computing environments. They digitized the strings in the KDD Cup dataset using a preprocessing algorithm and increased the quality of the input data through data normalization, which improved the detection efficiency. Gao et al. [14] proposed a deep belief network (DBN), which combines an unsupervised learning network, a four-layer RBM, with a back-propagation network, a supervised learning algorithm. Their result is demonstrated on the KDD Cup 1999 dataset. Farahnakian et al. [15] proposed an enhanced IDS model using the deep autoencoder (DAE) method, extracting features from the high-dimensional data with autoencoders. They used four autoencoders in their deep autoencoder-based IDS (DAE-IDS), in which the output of the previous layer is used as the input to the next layer. Each layer undergoes greedy unsupervised training to improve efficiency. After the four autoencoders were trained, a softmax layer classified the inputs into normal and attack. They also evaluated the efficiency of DAE-IDS on the KDD Cup 1999 dataset. Wang et al. [16] proposed a hierarchical spatial-temporal features-based intrusion detection system (HAST-IDS). They used deep CNNs to learn low-level spatial features of the input data and LSTM networks to learn high-level temporal features from the raw data; the deep neural networks automatically completed the entire feature learning process, using the DARPA1998 and ISCX2012 datasets. Kim et al. [17] proposed a DNN-based IDS model for detecting attacks using the KDD Cup 1999 dataset. Their DNN model used four hidden layers, 100 hidden units, and the ReLU activation function. Potluri et al. [18] used the NSL-KDD dataset to develop an accelerated DNN model for identifying anomalies in network data. The input layer contains the 41 features that are fed into the DNN, and two hidden layers are used to select 10 of the 41 features in the dataset; these first two hidden layers form the pre-training procedure of the DNN. Hidden layer 3 is the softmax layer, which reduces the number of features to five. Zhang et al.
[19] used two hybrid algorithms that combine SVM, RBM, and DBN to analyze the false positive rate, accuracy, false negative rate, and testing time on the KDD Cup-99 dataset. Illy et al. [20] proposed using ensemble learners to increase the accuracy of an IDS; in a Fog of Things environment, they used two classification levels. Othman et al. [21] introduced the Spark-Chi-SVM model for intrusion detection, which used ChiSqSelector for feature selection and an SVM-based intrusion detection model built on the Apache Spark big data platform, using the KDD99 dataset. In a comparison of the Chi-SVM classifier with the Chi-Logistic Regression classifier, the Spark-Chi-SVM model demonstrated higher performance. Many of the previous works were implemented with the KDD Cup dataset, which contains redundant records, and most research on fog computing has focused on architectural aspects, with few exceptions. The contributions of our research paper are as follows. Deep neural networks are highly applicable to this field. We introduce an integrated CNN-with-LSTM-based intrusion classification model for IDS, called ICNN-FCID, for fog computing environments, where accurate prediction can reduce the number of false alerts. This is primarily because the CNN is capable of extracting high-level representations of features that reflect the abstract nature of low-level network traffic communication feature sets, and because the LSTM is capable of learning long-term dependencies in the data. The NSL-KDD dataset is used in our proposed model for training and testing. Proposed Methodology This section describes the architectures of the fog computing model and the CNN-LSTM model, as well as the proposed architecture and algorithm for intrusion detection. Fog Computing Architecture In our system model, a hybrid IoT network is considered, as shown in Fig. 1. It includes the IoT devices D: a set of heterogeneous devices (d) equipped with sensing and communication capabilities. Sensing results are periodically reported to the cloud server via the fog devices. The set D can be further divided into k subsets, D = {D1, D2, D3, …, Dk}, where each subset has m IoT devices. The fog layer consists of a set of fog nodes and a local fog server/cloud. The fog nodes serve as relays between the IoT devices and the cloud server. F is the set of fog nodes, F = {fd1, fd2, fd3, …, fdx}. (4) Figure 1: Fog computing architecture. The fog layer may be vulnerable to several kinds of attacks. The fog nodes can monitor the anonymous traffic, and our proposed integrated DNN model is deployed on them to detect intrusive behavior. In this way, the fog layer and the cloud server can be protected from malicious users; in addition, this gives the fog nodes high availability for time-sensitive applications. CNN-LSTM A convolutional neural network is a kind of deep neural network and is referred to as a CNN or ConvNet [22][23][24]. The architecture of the ICNN-LSTM is shown in Fig. 2. The input layer contains the input. The primary building unit of a CNN is the convolutional layer, which uses a series of convolution kernels to identify features in the network traffic data. A set of n kernels and biases, W = {w1, w2, …, wn} and B = {b1, b2, …, bn}, respectively, is convolved with the input data at each CNN layer. A new feature map xk is produced by the convolution between the data and each kernel.
For each convolution layer l, the transformation is defined by x_k^l = f(w_k^l ∗ x^(l−1) + b_k^l), where ∗ denotes the convolution operation and f the activation function. The convolution operation is performed by sliding a filter (kernel) over the inputs; through the CNN learning process, optimized values of the weights and biases can be obtained for the different features of the input data without regard to their position in the input. The activation function, also known as the transfer function, is used to obtain the output of a node and is applied to every value in the layer. The rectified linear unit (ReLU) is one of the most common activation functions; it is a piecewise linear function that outputs the input if it is positive and zero otherwise, and it often achieves better performance. The pooling layer is another building block of a CNN. It progressively reduces the spatial size of the representation through dimensionality reduction; in this way, the computational power needed to process the data is greatly reduced and overfitting is controlled. Max pooling, the most commonly used approach, returns the maximum value from the part of the input covered by the kernel and discards noisy activations. The LSTM layer is a class of recurrent neural network (RNN) that is able to learn long-term dependencies in data. In the fully connected layer, non-linear combinations of the high-level features in the output representation of the convolutional layers are learned; the flattened output is fed into a feedforward neural network, and backpropagation is applied in every training iteration. The model can distinguish dominating features from certain low-level features, and these features are classified using the softmax classification technique over a series of epochs. The output layer compares the predicted output values with the known labels, computes the prediction error, and sends the error back through the loss function, by which the weights and biases are updated. Proposed Architecture and Algorithm for the ICNN-FCID Model The architecture of the proposed ICNN-FCID classification model is shown in Fig. 3. It consists of three main processes: data preprocessing, training, and testing. These processes are shown in Algorithm 1. During data preprocessing, raw data are transformed into a useful and efficient format. Raw (real-world) data cannot be fed directly into a CNN model without causing errors, so they must be preprocessed first. This step includes the operations of data normalization, feature selection, and one-hot encoding. Data normalization is a technique used to scale the value of each feature, each having a different range of values, to a common scale. In feature selection, an optimal number of features is selected. Symbolic features cannot be processed by the CNN model, so one-hot encoding is used to convert such data into numerical values. Through the feature selection process, the number of features in the NSL-KDD dataset is reduced; using that subset of features, our ICNN-FCID model is trained and tested. With the CNN/LSTM-based intrusion classification deep learning model, a fog node can easily detect an attack and raise the alarm after processing the network traffic. Algorithm 1. Input: training dataset TD. Output: well-trained ICNN-FCID model for the IDS. Step 1: Preprocess the data of the TD dataset. 1.1 Normalize the data for all features in the dataset. 1.2 Select optimal features to create the feature subset TD1.
1.3 Encode the features in TD1 using one-hot encoding. Step 2: Build the integrated convolutional neural network classifier (ICNN-FCID) for the IDS. Step 3: Optimize the classifier using the Adam optimizer. Step 4: Train the classifier, updating the weights. Step 5: Evaluate the performance of the ICNN-FCID classifier using model validation. Step 6: Calculate the classification accuracy. Experimentation Results and Evaluation This section evaluates the proposed ICNN-FCID classification model for intrusion detection. In this experiment, the CNN with LSTM is used for five-class classification (normal, DoS, Probe, R2L, and U2R attacks). Our model is implemented using Python and Keras (the deep learning library for Python) on a computer equipped with an Intel Core i7 CPU, 16 GB of RAM, and Windows 10. Dataset Description and Analysis The NSL-KDD dataset used in our proposed methodology [25] is an updated, cleaned-up version of the KDD Cup 1999 dataset; it was created because the KDD Cup 1999 dataset contained too many redundant records [26]. It was created from records of internet traffic seen by a simple intrusion detection network and consists of four sub-datasets: KDD Train+, KDD Train+_20Percent, KDD Test+, and KDD Test-21. In this dataset, four classes of attacks exist: Denial of Service, Probe, R2L, and U2R. Each record contains 43 features: the first 41 features describe the traffic input itself, the 42nd feature indicates whether it is a normal or attack record, and the 43rd feature is the score, i.e., the severity of the traffic input. Dataset Features Classification The dataset has 32 feature columns containing numeric data, 6 features per record containing binary data, and 3 nominal features. The total number of records in all categories of the NSL-KDD training dataset is 125,973; of that number, 67,343, 45,927, 11,656, 995, and 52 records belong to the Normal, DoS, Probe, R2L, and U2R categories, respectively, as shown in Tab. 3. Similarly, the test dataset consists of Normal, DoS, Probe, R2L, and U2R records, but contains additional attacks in each class that are not in the training dataset: the test dataset has 37 types of attacks, of which 16 are not present in the training dataset. Data Preprocessing The NSL-KDD training and testing datasets must undergo certain preprocessing steps. Data Normalization This technique changes the numeric attribute values to a standard scale without distorting the differences in value ranges. The values in column y are transformed using Eq. (7) below: z = (y − min(y)) / (max(y) − min(y)). (7) Feature Selection Our dataset contains 43 features; the last two columns are packet type and score. Because some of the features were considered to have no effect on the neural network's analytical results, removing them means fewer resources are needed to complete the tasks and the computational cost of the model is reduced. In our work, we took the first 41 features, excluding the last two columns. Of these, the features {7, 8, 9, 11, 14, 15, 16, 18, 19, 20, 21, 22, 25, 27, 31} were removed because they had only zero values, which reduced the feature set from 41 to 26 features. One-Hot Encoding In the NSL-KDD dataset, there are four attributes with non-numeric values: protocol_type, service, flag, and class. These were converted into numeric values. The protocol_type feature has three value types, TCP, UDP, and ICMP, which were encoded into the binary vectors [1,0,0], [0,1,0], and [0,0,1] by applying one-hot encoding.
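As an illustration only (not code from the paper), the min-max normalization of Eq. (7) and the one-hot encoding just described can be sketched in a few lines of Python; the tiny DataFrame stands in for the NSL-KDD records, and the column names are hypothetical.

```python
import pandas as pd

# Stand-in for NSL-KDD records; column names are illustrative.
df = pd.DataFrame({
    "duration": [0, 12, 300],
    "protocol_type": ["tcp", "udp", "icmp"],
})

# Min-max normalization, z = (y - min(y)) / (max(y) - min(y)) -- Eq. (7).
y = df["duration"]
df["duration"] = (y - y.min()) / (y.max() - y.min())

# One-hot encoding of the nominal feature: tcp/udp/icmp become binary columns,
# i.e. the [1,0,0], [0,1,0], [0,0,1] vectors mentioned above.
df = pd.get_dummies(df, columns=["protocol_type"])
print(df)
```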
The service feature has 70 attribute types and the flag feature has 11 attribute types, and these were transformed in the same manner. After transformation, the 41-dimensional features were mapped into 112-dimensional features. The predicted targets were mapped to the five classification categories: normal, DoS attack, Probe attack, R2L attack, and U2R attack. Tab. 1 shows all of the parameters and shapes of each layer in our model; the total number of parameters is 92,489, all of which are trainable (the number of non-trainable parameters is 0). Evaluation Metrics In this paper, a confusion matrix is used to describe the performance of our model. It includes significant details about the actual and predicted output classes. True Positive (TPw) is the number of records predicted as attacks that are actually anomalous records. True Negative (TNw) is the number of records predicted as normal that are actually normal records. False Positive (FPw) is the number of records predicted as attacks that are actually normal records. False Negative (FNw) is the number of records predicted as normal that are actually anomalous records. From the confusion matrix, we can define the performance metrics mathematically as follows. Accuracy: the percentage of records classified correctly out of the total number of records, Accuracy (ACC) = (TPw + TNw) / (TPw + TNw + FPw + FNw). (8) Recall: the true positive rate, i.e., the number of anomalous records correctly detected divided by the total number of anomalous records, Recall (R) = TPw / (TPw + FNw). (9) Precision: the fraction of attack-class predictions that actually belong to the anomaly class, Precision (P) = TPw / (TPw + FPw). (10) F-measure: the harmonic mean of precision and recall, which provides a measure of the derived effectiveness, F = 2PR / (P + R). (11) False Alarm Rate: the misprediction of normal data as abnormal data, False Alarm Rate (FAR) = FPw / (FPw + TNw). (12) Misclassification Rate: the fraction of records that are incorrectly classified, Misclassification Rate = (FPw + FNw) / (TPw + TNw + FPw + FNw). (13) Results In our proposed ICNN-FCID model, the network is composed of an input layer, convolution layer 1 with 64 filters, pooling layer 1 with pooling size 2 and stride size 1, convolution layer 2 with 64 filters, pooling layer 2 with pooling size 2 and stride size 1, LSTM layer 1 with output size 112, fully connected layer 1, and an output layer; a sketch of this layer stack is given below. Dropout of 0.3 is used to prevent overfitting. The Rectified Linear Unit (ReLU) activation function was used in all of the layers except the last layer, where the softmax activation function was used. For optimization, the Adaptive Moment Estimation (Adam) method was used. Experiments were conducted with the ReLU, sigmoid, and hyperbolic tangent (TanH) activation functions, and their performance was then compared. With ReLU, we achieved the highest accuracy of 96.5%, precision of 85.25%, recall of 91.16%, and F-score of 86.43%. The accuracy, precision, recall, and F-score of our model with the sigmoid activation function were 85.58%, 80.06%, 90.50%, and 87.78%, respectively, and with the TanH activation function, 88.3%, 84.53%, 89.30%, and 85.1%, respectively.
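For concreteness, here is a minimal Keras sketch of the layer stack just described; this is an illustration, not the authors' code, and the kernel sizes and the reshaping of the 112 encoded features into a (112, 1) sequence are assumptions, since the paper does not state them.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, LSTM, Dropout, Dense

# Two Conv1D(64) + max-pooling blocks, LSTM(112), dropout 0.3, and a
# softmax over the 5 classes (normal, DoS, Probe, R2L, U2R).
model = Sequential([
    Conv1D(64, kernel_size=3, activation="relu",
           input_shape=(112, 1)),            # kernel_size is assumed
    MaxPooling1D(pool_size=2, strides=1),
    Conv1D(64, kernel_size=3, activation="relu"),
    MaxPooling1D(pool_size=2, strides=1),
    LSTM(112),
    Dropout(0.3),
    Dense(5, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
# Training would then use the settings reported below, e.g.:
# model.fit(x_train, y_train, epochs=50, batch_size=64)
```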
All of the results are listed in Tab. 2, which shows that the ReLU activation function in the ICNN-FCID model provided higher accuracy than the sigmoid and TanH activation functions. The number of epochs was set to 50 and the batch size to 64. We then assessed the performance of our proposed model by measuring its accuracy. In the experiments, the ICNN-FCID provided improved classification results, and the accuracy of our model was approximately 96.5%. We labeled DoS, Probe, R2L, and U2R attacks as 1, 2, 3, and 4, respectively, while normal connections were labeled as 0. The confusion matrix of the ICNN-FCID model on the NSL-KDD testing dataset is shown in Fig. 4. In our model, as the number of epochs increases, the accuracy of the training and testing sets increases and their loss decreases, eventually flattening out. To find a better value for the number of epochs, we tested every 10 epochs from 10 to 50; the results are shown in Figs. 5a and 5b. Tab. 3 shows the performance of the ICNN-FCID model on the test dataset, including the accuracy, precision, recall, F-score, false alarm rate, and misclassification rate for each class. The accuracy and false alarm rate for the DoS attack are 98.93% and 0.7%, respectively. The accuracy of the Probe, R2L, and U2R attacks is 98.34%, 98.94%, and 98.53%, respectively, and their false alarm rates are 0.24%, 0.24%, and 1.29%, respectively. Fig. 6 compares the performance of the ICNN-FCID model with the three activation functions. Comparison with Existing Work The accuracy of our ICNN-FCID model was compared with conventional machine learning models and the latest deep learning algorithms. According to Yin et al. [27], the traditional models of J48, Naive Bayesian, and Random Forest for intrusion detection have been implemented on the NSL-KDD dataset; their accuracies in five-class classification were 81.05%, 76.56%, and 80.67%, respectively. Next, we compared the performance of our model with implementations of recent deep learning models. The deep learning classification model called stacked NDAE, proposed by Shone et al. [28], had an accuracy of 85.42%. Li et al. [29] proposed a multi-CNN fusion model. Fig. 7 shows the comparison of the accuracies of our ICNN-FCID model with the traditional J48, Naive Bayesian, and Random Forest models and with the Stacked NDAE and multi-CNN fusion models. The experiment demonstrated that our ICNN-FCID model performs classification with high accuracy on the NSL-KDD dataset. Real Time Classification Virtualization technology was used to simulate our experiments; we considered only DoS attacks. Fig. 8 shows the virtualization framework implemented for the attacker-fog-cloud structure. All of the traffic to the cloud passes through the intermediate fog layer. Attack traffic can easily be classified in the fog layer by the ICNN-FCID model, so malicious traffic can be dealt with before it reaches the cloud server, providing efficient utilization of cloud resources and time. Using open-source tools and scripts on various operating systems and cloud servers, malicious traffic was generated. The VMware ESXi hypervisor was installed on the server and configured.
Then three nodes, two Windows 7 systems and one Windows 10 system, were deployed in the virtual environment using the web interface of the ESXi server, and the firewall was shut down to make the DoS attack easier. The proposed ICNN-FCID was deployed in one Windows 7 virtual machine (VM); this system was used to monitor all of the traffic. We used the open-source Ethereal Network Analyzer to capture the packets, and we used the Low Orbit Ion Cannon (LOIC) network stress-testing and DoS attack tool in the other Windows 7 VM, the attacker system, to create a SYN flood against the target machine. After that, the network traffic was preprocessed, and we formed the dataset by adding 400 connection records. A total of 500 records were given as input to the ICNN-FCID model, which classified the packets with the expected accuracy. Fig. 9 shows the real-time classification performance of the ICNN-FCID model. Conclusion We performed an experiment to demonstrate the feasibility of using a CNN and LSTM for a Network Intrusion Detection System (NIDS), in order to exploit the power of deep learning to identify network intrusions. In this paper, a hybrid classification model combining a CNN with LSTM, called ICNN-FCID, was proposed, implemented, and trained for intrusion detection in fog computing environments. To improve the accuracy of our model, we used normalization and one-hot encoding for the features in the dataset. We trained our hybrid model with the KDD Train+ dataset and tested it using the KDD Test+ dataset. With a testing accuracy of 96.5%, our model outperforms traditional machine learning methods and other recent deep learning algorithms. The efficiency of our model in classifying DoS, Probe, R2L, and U2R attacks has been demonstrated for fog computing environments. We conducted our test using virtualization technology to detect DoS attacks, and it achieved the expected results, which means that the proposed ICNN-FCID classification model can function efficiently in real-time environments. In future work, we will provide an attack detection model using multiple CNNs with more real-time traffic. Funding Statement: The authors received no specific funding for this study.
The leading twist light-cone distribution amplitudes for the S-wave and P-wave quarkonia and their applications in single quarkonium exclusive productions In this paper, we calculate the twist-2 light-cone distribution amplitudes (LCDAs) of the S-wave and P-wave quarkonia (namely the 1S0 state ηQ, the 3S1 state J/ψ(Υ), the 1P1 state hQ, and the 3PJ states χQJ with J = 0, 1, 2 and Q = c, b) to the next-to-leading order of the strong coupling αs and the leading order of the velocity expansion v in non-relativistic QCD (NRQCD). We apply these LCDAs to some single quarkonium exclusive productions at large center-of-mass energy, such as γ* → ηQγ, χQJγ (J = 0, 1, 2), Z → ηQγ, χQJγ (J = 0, 1, 2), J/ψ(Υ)γ, hQγ, and h → J/ψγ, by adopting the collinear factorization. The asymptotic behaviors of those processes obtained in the NRQCD factorization are reproduced. Introduction One of the main fields for precision examination of perturbative Quantum Chromodynamics (QCD) is the study of hard exclusive processes with large momentum transfer. The collinear factorization has been a well-established calculational framework for more than three decades [1,2]. In this framework, the amplitudes of many hard exclusive processes can be expressed as convolutions of perturbatively calculable hard-kernels and universal light-cone distribution amplitudes (LCDAs), in which the short-distance and long-distance contributions are clearly factorized. For instance, the electromagnetic form factor of γ*γ → π0 at large momentum transfer can be expressed as F(Q²) = ∫₀¹ dx T_H(x; Q², µ) f_π φ_π(x; µ), (1.1) where the hard-kernel T_H(x; Q², µ) contains the short-distance dynamics, while the LCDA of the pion, f_π φ_π(x; µ), is a purely non-perturbative object parametrizing the universal hadronization effects around light-like distances. The LCDAs for light hadrons are not perturbatively calculable; one has to extract their information from experiments, or calculate or constrain them by various non-perturbative methods, such as QCD sum rules and lattice simulations. However, the dependence of these LCDAs on the renormalization scale µ is perturbatively calculable. For instance, the renormalization scale dependence of the twist-2 LCDA of the pion is governed by the celebrated Efremov-Radyushkin-Brodsky-Lepage (ERBL) equation [3,4], d[f_π φ_π(x; µ)]/d ln µ² = (α_s/2π) C_F ∫₀¹ dy V₀(x, y) f_π φ_π(y; µ), (1.2) where V₀(x, y) is the so-called Brodsky-Lepage kernel. For quarkonium-involved exclusive processes, if the squared momentum transfer is much greater than the squared mass of the quarkonium, the collinear factorization can be invoked as well [5,6]. Many phenomenological applications along this line have been made for exclusive hard production of charmonium [7][8][9][10][11][12][13], exclusive charmonium production in B meson decays [14][15][16][17], etc. All of these applications require the understanding of the LCDAs for quarkonia.
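For reference (an addition here, not part of the original text), the leading-order Brodsky-Lepage kernel entering eq. (1.2) has the standard form [3,4], quoted in the usual plus-distribution convention:

```latex
V_0(x,y) = \left[\, \frac{x}{y}\Big(1+\frac{1}{y-x}\Big)\,\theta(y-x)
          + \frac{\bar{x}}{\bar{y}}\Big(1+\frac{1}{x-y}\Big)\,\theta(x-y) \right]_+ ,
\qquad
\big[f(x,y)\big]_+ \equiv f(x,y) - \delta(x-y)\int_0^1 dz\, f(z,y),
```

with x̄ ≡ 1 − x and ȳ ≡ 1 − y.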
Different from the LCDAs of light mesons, which rely completely on the dynamics of the non-perturbative regime of QCD, one believes that the LCDAs of quarkonia can be further factorized into the product of a perturbatively calculable part and the non-perturbative behavior of the quarkonium wave functions at the origin, owing to the nature of a quarkonium as a non-relativistic bound state of a heavy quark and antiquark. The standard theoretical tool for heavy quark bound state systems is NRQCD factorization [18,19], in which all information on the hadronization of the quarkonium is encoded in the NRQCD matrix elements. Thus, there must be connections between the LCDAs of quarkonia and the NRQCD matrix elements. For example, in [20][21][22], the authors constrain their models for the LCDAs of quarkonia by relating the moments of the LCDAs to local NRQCD matrix elements; in [23,24], the authors calculated the leading twist LCDAs of the S-wave quarkonia within the NRQCD framework and expressed the LCDAs as the product of a perturbatively calculable distribution part and the lowest-order NRQCD matrix element. Especially, the attempts in [23,24] open a way to connect the predictions for hard quarkonium exclusive productions within the collinear factorization directly to those made within the NRQCD factorization (for example, the many theoretical calculations based on NRQCD factorization [25][26][27][28][29][30][31][32][33][34][35][36][37][38], triggered by the recent experimental measurements of charmonium exclusive productions at B-factories [39][40][41]). In particular, in [42,43], the authors have shown that the collinear factorization can indeed reproduce the exact asymptotic behavior of the NRQCD predictions at leading logarithms (LL) and at next-to-leading order (NLO) of the strong coupling αs, respectively, for a certain class of quarkonium exclusive productions, if one employs the leading twist LCDAs calculated in [24]; moreover, the ERBL equations can be used to resum the large logarithms appearing in the NRQCD factorization calculations for exclusive quarkonium productions, while such resummation cannot be done within the NRQCD factorization itself. In this paper, we calculate the leading twist LCDAs of the S-wave and P-wave quarkonia to the NLO of αs and the leading order of the non-relativistic expansion parameter v, by adopting the methods developed in [23,24]. For the three LCDAs of the S-wave quarkonia, we get slightly different results from those obtained in [23], and we confirm the result for the LCDA of the 1S0 state given in [24]. The seven leading twist LCDAs of the P-wave quarkonia at NLO are totally new. All of these leading twist LCDAs at NLO obey the ERBL equations and can be applied to various quarkonium-involved hard exclusive processes. This paper is organized as follows: in section 2, we give the definitions of the leading twist LCDAs for the S-wave and P-wave quarkonia, in terms of the matrix elements of a certain class of non-local QCD operators, together with their tree-level forms at the leading order of v; in section 3, we present the main results of this paper, the LCDAs at the NLO of αs and the leading order of v; in section 4, as applications and non-trivial examinations of our results, we calculate γ* → ηQγ, χQJγ, Z → ηQγ, χQJγ (J = 0, 1, 2), J/ψ(Υ)γ, hQγ, and h → J/ψγ within the collinear factorization, using the LCDAs we calculate, and show how we can exactly reproduce the asymptotic behavior of the NLO NRQCD predictions for those processes; finally, we summarize our work in section 5.
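As an orienting aside (added here, hedged rather than quoted from the paper): at the leading order of both αs and v, the heavy quark and antiquark share the quarkonium momentum equally, so the tree-level leading twist LCDAs are concentrated at x = 1/2; for the 1S0 state, for instance,

```latex
\hat{\phi}^{(0)}_{P}(x;\mu) \;=\; \delta\!\Big(x-\tfrac{1}{2}\Big) \;+\; \mathcal{O}(\alpha_s, v^2),
```

and the NLO corrections computed below smear this delta function into a genuine distribution over x.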
Notations We adopt the following notations for the decompositions of momenta: the momentum of the quarkonium H is Pµ ≡ mH vµ with v² = 1. We also use the same notation v for the non-relativistic expansion parameter, the typical size of the relative velocity of the quark and antiquark inside a quarkonium; one should not confuse the two in context. We also introduce two light-like vectors nµ± such that n²± = 0 and n+·n− = 2, and any 4-vector aµ can be decomposed as aµ = (n+·a) nµ−/2 + (n−·a) nµ+/2 + aµ⊥ with n±·a⊥ ≡ 0. For convenience, we set vµ = ((n+·v) nµ− + (n−·v) nµ+)/2, so that (n+·v)(n−·v) = 1. Definitions of the LCDAs The leading twist, i.e. twist-2, LCDAs for the S-wave and P-wave quarkonia are defined as the matrix elements of proper gauge-invariant non-local quark bilinear operators (eq. (2.1)), where Q is the heavy quark field in QCD, the Wilson line is a path-ordered exponential with the path along the n+ direction, gs is the SU(3) gauge coupling, and Aµ(x) ≡ Aaµ(x)Ta (Ta are the generators of the SU(3) group in the fundamental representation). The ten non-vanishing twist-2 LCDAs of the S-wave and P-wave quarkonia are so defined, where f, ε*, and φ̂(x) are the decay constants, polarization vectors/tensors, and twist-2 LCDAs of the corresponding quarkonia, respectively; x denotes the light-cone fraction, and µ is the renormalization scale. Throughout this paper, we also adopt the notation x̄ ≡ 1 − x for any light-cone fraction x ∈ [0, 1]. Using the discrete C, P, and T symmetries, one can check that, when ω → 0, the integrals of some of the LCDAs vanish, while the corresponding integrals of the rest of the LCDAs do not. Thus, we set the normalization conditions for the LCDAs accordingly, in eqs. (2.14) and (2.15). Here we follow the definitions of the LCDAs for P-wave mesons in the series of papers by K.C. Yang et al. [45][46][47][48], by setting z = ωn+/2 and pµ = (n+·P) nµ−/2; thus p·z ≡ (n+·P) ω/2. Some of the decay constants defined above can then be related to matrix elements of local operators. In practical calculations, it is convenient to use the Fourier-transformed form of the non-local operator defined in eq. (2.1), which is invariant under the re-parametrization n+ → αn+ and n− → α⁻¹n−. Here we suppress the dependence of all quantities on the renormalization scale µ. NRQCD Factorization for the LCDAs Since quarkonia are non-relativistic bound states of a heavy quark and antiquark, all of the LCDAs of quarkonia can be factorized into products of perturbatively calculable distribution parts and non-perturbative NRQCD matrix elements, as done in [23,24]. This means that, schematically, at the operator level, we have the matching equation (2.33), where n denotes the order of the v-expansion, CnΓ(x, µ) is the short-distance coefficient, a distribution over the light-cone fraction x, and ONRQCDΓ,n is the relevant NRQCD operator, which scales as O(vⁿ) in the NRQCD power counting. Thus, the LCDAs of quarkonia can be expressed correspondingly. At the lowest order of v, the matrix elements of the relevant NRQCD effective operators, listed in eq. (2.34), are involved in our calculation. Here we use the four-component notation of [44] for the NRQCD Lagrangian, where m is the pole mass of the heavy quark and ψv and χv are the effective fields of the heavy quark and anti-heavy quark, respectively. We have used the spin symmetry of the heavy quark system to relate the various matrix elements of the S-wave operators and P-wave operators.
These relations hold at the leading order of the αs and v expansions. Here Rnl(r) denotes the radial Schrödinger wave function of the quarkonium with radial quantum number n and orbital angular momentum l, and the prime denotes a derivative with respect to r. Tree-Level Matching The short-distance coefficient CnΓ(x, µ) can be extracted most conveniently by matching the matrix elements between the vacuum and the state of a colorless pair of free heavy quark and antiquark with non-relativistic relative motion. In this subsection, we illustrate how to do the matching at tree level; the generalization to the NLO calculation is straightforward. We start with a heavy quark and antiquark pair with momenta p1 = mv + q and p2 = mv + q̄, where the residual momenta q and q̄ in the rest frame of the heavy quark pair scale like O(mv); the total momentum of the heavy quark pair is P = p1 + p2. The on-shell spinors of the quark and antiquark can be expanded in v, where we define q̃ ≡ (q − q̄)/2, and a, b are the color indices of the quark and antiquark. For illustration, when Γ = γ5, the tree-level matrix element follows directly. With the normalization conditions for the LCDAs set by (2.14) and (2.15), we then obtain the tree-level LCDA and decay constant, where the superscript (0) denotes a quantity at the leading order of αs; note that we have used the fact that n+·v/n+·P = 1/mH. Similarly, one can get the tree-level results for the remaining LCDAs. The Calculations of the LCDAs at NLO Matching Procedure by the Method of Threshold Expansion One could extract the short-distance coefficients CnΓ(x, µ) at NLO of αs through the matching equation by computing both sides at one loop. However, in this work, we adopt the method of threshold expansion [49] to simplify the matching procedure, so that we do not need to calculate the one-loop corrections to the matrix elements of the effective operators ONRQCDΓ,n; this is equivalent to what is done in [24]. In Feynman gauge, at one-loop level, the bare matrix element of Q[Γ](x) is written as in eq. (3.1), where the +iε prescription for the propagators is understood, αs = g²s/(4π) is the running strong coupling, and CF = (N²c − 1)/(2Nc) with Nc = 3 is the quadratic Casimir of the fundamental representation of the SU(3) group, with d = 4 − 2ε and γE = 0.5772… being the Euler constant. In the following calculations, we use dimensional regularization (DR) to regulate both the ultraviolet and infrared divergences. Apparently, we have to fix the scheme used to treat γ5 in DR. In the literature, two schemes for γ5 in DR are widely used: one is the naive dimensional regularization (NDR) scheme [50], in which {γ5, γµ} = 0, {γµ, γν} = 2gµν and gµµ = d; the other is the 't Hooft-Veltman (HV) scheme [51,52], in which γ5 ≡ iγ0γ1γ2γ3, and {γµ, γ5} = 0 for µ = 0, 1, 2, 3 but [γµ, γ5] = 0 for µ = 4, …, d − 1. In this paper, we compute the NLO corrections to the LCDAs in both the NDR and HV schemes. The method commonly used in the NRQCD community to deal with the spinor bilinear ū(p1)···v(p2) is to transform it into a trace of Dirac matrices, Tr[v(p2)ū(p1)···], by replacing v(p2)ū(p1) with the proper spin-singlet or spin-triplet projectors. In many cases, a trace involving γ5 is unavoidable. In contrast to the HV scheme, in which such traces involving γ5 are defined uniquely and consistently, traces involving γ5 are generally ill-defined in the NDR scheme; thus, additional care must be taken in evaluating traces involving an odd number of γ5s.
The calculations of the LCDAs at NLO

Matching procedure by the method of threshold expansion

One could extract the short-distance coefficients C_Γ^n(x, μ) at NLO in α_s directly through the matching equation. However, in this work we adopt the method of threshold expansion [49] to simplify the matching procedure, so that we do not need to calculate the one-loop corrections to the matrix elements of the effective operators O_{Γ,n}^{NRQCD}. This is equivalent to what was done in [24]. In Feynman gauge, at one-loop level, the bare matrix element of Q[Γ](x) can be written in terms of one-loop integrals (eq. (3.1)), where the +iǫ prescription for the propagators is understood, α_s = g_s²/(4π) is the running strong coupling, C_F = (N_c² − 1)/(2N_c) with N_c = 3 is the eigenvalue of the quadratic Casimir in the fundamental representation of SU(3), and d = 4 − 2ε, with γ_E = 0.5772... the Euler constant. In the following calculations we use dimensional regularization (DR) to regulate both the ultraviolet and the infrared divergences.

We must therefore fix a scheme for treating γ₅ in DR. Two schemes are widely used in the literature: the naive dimensional regularization (NDR) scheme [50], in which {γ₅, γ^μ} = 0, {γ^μ, γ^ν} = 2g^{μν} and g^μ_μ = d; and the 't Hooft-Veltman (HV) scheme [51, 52], in which γ₅ ≡ iγ⁰γ¹γ²γ³, with {γ^μ, γ₅} = 0 for μ = 0, 1, 2, 3 but [γ^μ, γ₅] = 0 for μ = 4, ..., d − 1. In this paper we compute the NLO corrections to the LCDAs in both the NDR and HV schemes.

The method commonly used in the NRQCD community to deal with a spinor bilinear ū(p₁)···v(p₂) is to transform it into a trace of Dirac matrices, Tr[v(p₂)ū(p₁)···], by replacing v(p₂)ū(p₁) with the appropriate spin-singlet or spin-triplet projector. In many cases a trace involving γ₅ is then unavoidable. In contrast to the HV scheme, in which such traces are defined uniquely and consistently, traces involving γ₅ in the NDR scheme are in general ill-defined, and additional care must be taken when evaluating traces with an odd number of γ₅'s. For instance, in [53] the authors proposed a strategy for treating such traces in the NDR scheme, with which one can easily reproduce the celebrated Adler-Bell-Jackiw anomaly and carry out other loop calculations involving γ₅ consistently with the results of the HV scheme.

In this paper, however, we do not use the trace technique to evaluate the spinor bilinear ū(p₁)···v(p₂). In general we have to deal with a bilinear of the form

ū(p₁) ··· n̸₊Γ ··· v(p₂), (3.2)

where n̸₊Γ originates from the vertex of Q[Γ](x), and the ellipses denote products of Dirac matrices coming from the QCD vertices and quark propagators. As we have seen in section 2, Γ = 1, γ₅, γ_⊥^α, γ_⊥^α γ₅, and we keep n_±^μ, v^μ, γ^α and both external momenta in 4 dimensions. Then, in either the NDR or the HV scheme, n̸_± either commutes or anticommutes with Γ. The loop momentum k can be decomposed analogously, with k_⊥^μ allowed to run over the extra dimensions μ = 4, ..., d − 1. Therefore (3.1) can be simplified accordingly.

We expand the loop integrals in the small parameter v ∼ |q|/m using the threshold expansion technique developed in [49]. The most important momentum regions are the hard region (loop momentum k^μ ∼ m), the soft region (k^μ ∼ mv), the potential region (k⁰ ∼ mv², k ∼ mv) and the ultra-soft region (k^μ ∼ mv²). The contributions from the low-energy regions, i.e. the (ultra-)soft and potential regions, are reproduced by the one-loop corrections to the matrix elements of the effective operators in the matching equation (2.33). Thus, to obtain the NLO part of the short-distance coefficient C_{Γ,n}(x, μ), we only need the contribution from the hard region.

After the tedious expansion of the integrands in the hard region, we obtain various spinor bilinears with complicated spin structures. We first simplify them as far as possible using only the identities {γ^μ, γ^ν} = 2g^{μν}, {n̸_±, Γ} = 0 or [n̸_±, Γ] = 0, and the on-shell conditions for the external spinors, identities that hold in both the NDR and HV schemes. In the end, the only structures that cannot be simplified further, and for which the γ₅ scheme does matter, are

γ^ρ n̸₊Γ γ_ρ and γ^ρ γ^σ n̸₊Γ γ_σ γ_ρ. (3.5)

We define γ^ρ n̸₊Γ γ_ρ ≡ c_{n̸₊Γ} n̸₊Γ, with the constant c_{n̸₊Γ} taking different values in the NDR and HV schemes. The hard part of the bare matrix element up to O(v) then follows, where the on-shell renormalization constant for the heavy quark and the renormalization kernels for the operator Q[Γ](x) in the MS̄ scheme enter, the latter involving the Brodsky-Lepage kernel (3.14). Schematically, the final matching equation up to O(v) is given in (3.15).

Before closing the description of our matching procedure, one last point should be mentioned: in a general covariant gauge one obtains additional contributions to (3.1). However, since we are calculating on-shell matrix elements of gauge-invariant operators, such additional contributions must vanish in the end. We have checked that, with our strategy for simplifying the spin structures, these additional terms in a general covariant gauge do vanish in both the NDR and HV schemes, as they should. This guarantees the gauge invariance of our results.
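The explicit form of the Brodsky-Lepage kernel referred to as eq. (3.14) did not survive extraction. For orientation only, its familiar leading-order form in one common convention (overall normalization factors differ between papers) is:

```latex
% Leading-order ERBL (Brodsky-Lepage) evolution kernel; the plus
% prescription subtracts the local term so that the kernel conserves
% the normalization of the LCDA. Conventions for overall factors vary.
\begin{align}
V_0(x,y) &= C_F\!\left[\frac{x}{y}\Big(1+\frac{1}{y-x}\Big)\theta(y-x)
  + \frac{\bar{x}}{\bar{y}}\Big(1+\frac{1}{x-y}\Big)\theta(x-y)\right]_{+},\\
\big[V(x,y)\big]_{+} &\equiv V(x,y)
  - \delta(x-y)\int_0^1 \mathrm{d}z\, V(z,y).
\end{align}
```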
Final results for the LCDAs of quarkonia

Given the concrete Γ in (3.15), we can simplify the spin structures further and decompose them into the matrix elements of the effective operators in (2.34), as we did in the previous section. Using the loop integrals given in appendix A, we obtain the short-distance coefficients C_{Γ,n}(x, μ). Imposing the normalization conditions given in (2.14) and (2.15), we arrive at the final results for the LCDAs at NLO in α_s and leading order in v. The three LCDAs for the S-wave quarkonia and their decay constants are given in the expressions through (3.17); here Δ = 0 in the NDR scheme and Δ = 1 in the HV scheme. Similarly, the seven LCDAs for the P-wave quarkonia and their decay constants are given in the expressions through (3.27). In these expressions, the +++-, ++- and +-functions are plus-type distributions whose subtraction terms render the convolutions with smooth test functions finite.

One can check that our results for φ̂_M(x; μ) preserve the normalizations (2.14) and (2.15), and that f_M φ̂_M(x; μ) satisfy the ERBL evolution equations. For the decay constants that can be defined by local QCD currents, such as f_{3A}, we find that our results at NLO in α_s in the NDR scheme agree with those in the literature [56]. Decay constants such as f⊥_{1A}, f_S, f⊥_{3A}, f_T and f⊥_T are in fact the first Gegenbauer moments of the corresponding LCDAs, and they satisfy the renormalization group equation that such moments should obey [3, 4].

We also compare our results for the LCDAs of S-wave quarkonia with those in [23, 24]. In [23] the authors give all three leading-twist LCDAs for S-wave quarkonia, but we find that their results do not lead to the correct decay constants at NLO in α_s after integration over the light-cone fraction, in either the NDR or the HV scheme. In [24] only f_P φ̂_P(x) is calculated, and our results in the NDR scheme agree with theirs.

Some related quantities

In practical applications of the leading-twist LCDAs, since the lowest-order hard kernels T_H(x) for many hard exclusive processes behave as 1/x or 1/x̄, the inverse moments of the LCDAs are crucial for the final amplitudes.
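The defining equation for the inverse moments was lost in extraction. A standard definition consistent with the surrounding discussion (the paper's exact normalization may differ) is:

```latex
% Inverse moments of an LCDA: the quantities that multiply 1/x- and
% 1/xbar-type hard kernels in exclusive amplitudes.
\begin{equation}
\big\langle x^{-1}\big\rangle_M(\mu) \equiv \int_0^1 \mathrm{d}x\,
  \frac{\hat\phi_M(x;\mu)}{x},
\qquad
\big\langle \bar{x}^{-1}\big\rangle_M(\mu) \equiv \int_0^1 \mathrm{d}x\,
  \frac{\hat\phi_M(x;\mu)}{\bar{x}} .
\end{equation}
```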
Applications

In this section we apply our results for the LCDAs of quarkonia to calculate the hard exclusive processes γ* → η_Q γ, χ_QJ γ; Z → η_Q γ, χ_QJ γ, J/ψ(Υ)γ, h_Q γ; and h → J/ψγ within collinear factorization. We also compare our results with the asymptotic behavior of the corresponding predictions in NRQCD factorization; these comparisons can be regarded as a non-trivial test of our results.

In [57] the hard kernels were obtained at NLO in α_s. We recalculated the hard kernels T_H^{P,V} using the evanescent-operator technique proposed in [58], and we obtain the same results as in [57] when the NDR scheme is adopted for γ₅. For the problem considered here, the relevant evanescent operator Q_E^{μν} has a tree-level matrix element that vanishes in 4 dimensions, but it can in general contribute a term proportional to d − 4 in a d-dimensional loop calculation. If the one-loop coefficient of the tree-level matrix element of Q_E^{μν} contains a 1/ε pole, an additional finite renormalization is required to make sure that the matrix element of Q_E^{μν} vanishes at one loop [58]. In the NDR scheme the tree-level matrix element of Q_E^{μν} does not vanish in d dimensions, so this finite renormalization must be performed; in the HV scheme it vanishes even in d dimensions, so no additional finite renormalization is needed. This makes it very convenient to obtain the hard kernels in the HV scheme, even before deriving them in the NDR scheme. Note that T_H^V in the HV scheme is in fact identical to T_H^V in (4.4), but T_H^P in the HV scheme differs from T_H^P in (4.3).

Applying the LCDAs of quarkonia obtained in the previous section in a straightforward way, we obtain the NLO amplitudes, with L ≡ ln[(−Q² − iǫ)/m²]. One can check that, although both the hard kernels and the LCDAs depend on the γ₅ scheme used in the loop calculations, the amplitudes for γ* → η_Q γ and γ* → χ_Q1 γ are scheme-independent, as they must be. By squaring the amplitudes, one easily reproduces the asymptotic behavior of the ratios between the NLO and tree-level cross-sections of e⁺e⁻ → η_c γ, χ_cJ γ (J = 0, 1, 2) given in [32]. Those authors adopted the trace technique proposed in [53]; since only one γ₅ is involved in the trace, their results are essentially consistent with those obtained in the HV scheme.

4.2 Z → η_Q γ, χ_QJ γ, J/ψ(Υ)γ, h_Q γ in collinear factorization

The Z boson interacts with a quark-anti-quark pair through the tree-level weak interaction, where g is the weak coupling of the SU(2)_L × U(1)_Y electroweak gauge theory, θ_W is the Weinberg angle, g_V = 1 − 8 sin²θ_W/3 and g_A = 1 for up-type quarks, and g_V = −1 + 4 sin²θ_W/3 and g_A = −1 for down-type quarks. Through the vector coupling, Z can decay to η_Q γ and χ_QJ γ just as γ* does; the corresponding decay amplitudes in the light-cone framework follow from those for γ* → Hγ by replacing the prefactor e²e_Q² with g g_V e e_Q/(4 cos θ_W), ε_{γ*} with the polarization vector ε_Z of the Z boson, and Q² with m_Z². Through the axial-vector coupling, Z can decay to J/ψ(Υ)γ and h_Q γ. By squaring the amplitudes, one should easily reproduce the asymptotic behavior of the ratios between the NLO and tree-level cross-sections of e⁺e⁻ → J/ψγ, h_c γ at the Z⁰ pole. In [59, 60], Chen et al. give these asymptotic ratios between the NLO and LO cross-sections. Their results agree with ours for the ¹P₁ case, but differ from ours for the ³S₁ case by a constant term (−4) at O(α_s). We cannot identify the source of this discrepancy.

h → J/ψγ in collinear factorization

The Higgs boson h of the Standard Model interacts with a quark-anti-quark pair through the Yukawa interaction. The amplitude involves the polarization vector ε_ψ of the J/ψ and a hard kernel T_H that can be calculated perturbatively; the NLO hard kernel is evaluated at the Standard Model Higgs mass m_h ≃ 125 GeV. Straightforwardly, we obtain the NLO amplitude iM(h(Q) → J/ψ(p, ε_ψ)γ(p′, ε_γ)) of eq. (4.24), where m_c is the pole mass of the charm quark. Thirty years ago, Shifman et al. [61] calculated h → J/ψγ to NLO in α_s in the color-singlet model, which is equivalent to the NRQCD calculation. Their NLO prediction for h → J/ψγ, which we quote from eq. (21) of [61], coincides with our eq. (4.24).
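The Yukawa Lagrangian quoted at the start of the h → J/ψγ discussion was reduced to a fragment by extraction; its Standard-Model form is reproduced below as a reconstruction, with v_EW ≈ 246 GeV the electroweak vacuum expectation value:

```latex
% Standard-Model Yukawa interaction of the Higgs with a heavy quark Q.
\begin{equation}
\mathcal{L}_{\mathrm{Yukawa}} \;=\; -\,\frac{m_Q}{v_{\mathrm{EW}}}\,\bar{Q}\,Q\,h .
\end{equation}
```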
Summary

In this paper we have calculated the ten leading-twist LCDAs of the S-wave and P-wave quarkonia to NLO in α_s and leading order in v, in both the NDR and HV schemes. We have demonstrated that applying these LCDAs to several single-quarkonium exclusive processes reproduces the correct asymptotic behavior of the corresponding NRQCD results. This confirms again the conclusion of [43] that there is a tight connection between the collinear factorization and NRQCD factorization methods for a certain class of exclusive quarkonium production processes. Moreover, as in [42], the collinear factorization method, together with the ERBL equation, can be used to resum the large logarithms that appear in NRQCD calculations. However, as discussed in [42, 62], the so-called "endpoint logarithms" in helicity-flipped exclusive processes lead to a breakdown of collinear factorization. Such endpoint logarithms appear to be process-dependent, and how to resum them remains an open question.
STS Motion Control Using a Humanoid Robot This study presents the development of a sit-to-stand (STS) motion control method. The main challenge in STS is addressing the lift-off-from-chair problem. To solve it, the main components of the humanoid STS motion system are (1) phase and trajectory planning and (2) motion control. These components should be designed so that the zero-moment point (ZMP), centre of pressure (CoP) and centre of mass (CoM) always remain within the support polygon. STS motion control itself has two components: (1) an action selector and (2) a tracking controller. The STS motion control should operate in real time and continuously adapt to any change during the motion; in this way, the accuracy of the controller in rectifying the motion error increases. The overall proposed method performs the STS motion in two main phases: (1) CoM transfer, which implements the Alexander STS technique, and (2) a stabilization strategy, which uses IF-THEN rules and a proportional velocity controller. This study focuses on the development of the second phase, namely (1) the IF-THEN rules, a real-time action selector that assists the proportional controller in making the best decision, and (2) a proportional-gain identification method for the proportional velocity controller, which changes the applied gain according to defined regions that represent the motion condition. The proposed method is validated experimentally using a NAO robot as the test platform. The coefficients of the gain identification for the proportional controller were tuned with the NAO robot initially seated on a wooden chair, observing the inclination of the body from a frame perpendicular to the ground, angle y. The coefficient giving the lowest RMSE of the angle-y trajectory is taken as a constant. Results show that the proposed control method reduced the root-mean-square error (RMSE) of the motion from 6.6858°, when all coefficients were set equal, to 4.0089° after the coefficients in all defined regions had been identified. INTRODUCTION The study of sit-to-stand (STS) motion has high impact in robotics, particularly in rehabilitation (Chuy et al., 2006), exoskeletons (Strausser and Kazerooni, 2011) and humanoid robotics. Research on STS promotes the advancement of common humanoid motions and hence makes robots more humanlike. With STS capability, a robot can use a sitting position as its default home position and can be deployed for long-duration applications such as security and domestic robots. STS capability can also be transferred to similar systems such as exoskeleton robots, orthosis robots and FES systems. In humanoid robotics, STS was given little emphasis until 2010 (Mistry et al., 2010). As of 2013, the groups identified as having published STS studies on humanoids are Mistry et al. (2010), Kaicheng et al. (2009), Pchelkin et al. (2010), Sakai et al. (2010), Xue and Ballard (2006), Jones (2011), Faloutsos et al. (2003), Kuwayama et al. (2003), Iida et al. (2004a) and Sugisaka (2007).
The main challenge in STS is addressing the lift-off-from-chair problem. The lift-off problem occurs when the area of the support polygon shrinks within a short period: the polygon initially spans from where the hip touches the chair to where the feet touch the ground, but reduces to the feet alone at lift-off (Mistry et al., 2010; Riley et al., 1995). The phenomenon is proven clinically in Millington et al. (1992), where the results showed that many parameters, including the torque at each joint and the position of the CoM, need to be controlled at this point within a short period (9% of the STS cycle). Failure to overcome this problem causes the humanoid robot to fall on its back, a failure mode called sit-back failure in Riley et al. (1995). The lift-off problem is also caused by the ankle actuator being unable to rotate the whole body to balance the STS motion (Pchelkin et al., 2010).

To solve the problem, the main components of the humanoid STS system are:

• Phase and trajectory planning
• Motion control (Mistry et al., 2010)

These components should be designed so that the zero-moment point (ZMP), centre of pressure (CoP) and centre of mass (CoM) stay within the support polygon (a minimal check of this condition is sketched at the end of this subsection). The combination of proper phases, the right controller and good trajectory planning solves the lift-off problem.

For the first component, improper phase and trajectory planning will put the robot joints in awkward positions. For example, at the sitting position, if the robot bends too far forward, its ankle joint will be unable to provide enough force to balance the STS motion (Pchelkin et al., 2010). Several phase decompositions have been introduced to plan a proper STS trajectory. A stability strategy and momentum transfer are used by Riley et al. (1995); the knee strategy and the trunk-hip strategy are other names used for such decompositions (Coghlin and McFadyen, 1994). Rather than identifying the needs of the motion and separating them into phases, Fu-Cheng et al. (2007) chose to implement the Alexander STS technique in the robot motion to plan the CoM position during the STS movement. Human demonstration is another method, used in Mistry et al. (2010), to obtain the CoM and joint trajectories for a stable, human-like STS motion.

The second component, motion control, concerns how well a humanoid robot follows the planned trajectory. The challenge is to control the whole body and to manage how and when the system should react (Prinz et al., 2007). A good control method also helps solve the phase-planning problem, as mentioned in Konstantin Kondak (2003). Two aspects need to be considered in STS motion control:

• Action selection
• Tracking the planned trajectory

Action selection concerns choosing the appropriate action for different robot conditions. Tracking the planned trajectory concerns the accuracy of the robot motion in joint or Cartesian space.

Action selector: The function of action selection is to choose the proper effort for a given condition, such as a particular phase, robot position or time interval. It is desirable to have an action selection method that can adapt to changes in the STS motion in real time. Selection of appropriate actions has been performed in other studies using several methods.
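Before turning to specific selectors, the stability requirement above can be made concrete. The following is a minimal sketch, not the paper's code: it tests whether the CoM ground projection stays inside a rectangular support polygon, with the ±0.03 m edge taken from the stability-region value reported later for the NAO platform.

```python
# Minimal stability check: is the CoM x-projection (metres, measured
# from the ankle joint) inside the support polygon? Bounds follow the
# +/-0.03 m stability-region edge reported for NAO; illustrative only.

def com_in_support_polygon(com_x, x_back=-0.03, x_front=0.03):
    """True if the projected CoM lies between the polygon edges."""
    return x_back <= com_x <= x_front

print(com_in_support_polygon(0.02))   # True: still stable
print(com_in_support_polygon(0.05))   # False: falling forward
```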
One proposed approach uses IF-THEN rules (Rasool et al., 2010) as the action selector. The rules are set based on knee-joint flexion: when the joint reaches a certain angle, the rules activate a controller configured for that moment. Another method, introduced in Prinz et al. (2007), is a high-level controller based on the phases planned by the authors. A set of actions is designed for every phase, and the high-level controller activates an action when the system enters the corresponding phase. The study is broadly similar to Matsui (2010), where the optimal controller changes as the phase changes. Both methods in Prinz et al. (2007) and Rasool et al. (2010) are not adaptable to motion changes in real time, because the rules are set on a constant variable throughout the motion; these approaches are therefore unsuitable for STS motions that need different phases or paths.

In another work, EMOSAIC (Extended Modular Selection and Identification for Control) was used as a controller and also as a soft selector to activate certain modules (Andani et al., 2007). EMOSAIC is more adaptable in real time, because the system updates the next trajectory from the inverse and forward kinematics of each joint. However, the method must undergo a learning process before it can be implemented, because it is a feedforward controller.

The selection capability of the methods discussed above focuses on phase, robot position or time interval, as described before. The selector should also operate in real time, as in Andani et al. (2007), with the addition of feedback information from the motion. For this reason, this study presents a new approach that selects the appropriate action with IF-THEN rules based on the CoP position. This approach was not considered by others before, since they used simulated environments where real CoP data are lacking. Because this project involves hardware experimentation, the CoP data can be acquired naturally from force-sensitive resistors embedded in the robot's feet.

Tracking the planned trajectory: In tracking the planned trajectory, a controller that monitors the motion in real time is also needed. The controller should minimize the error, i.e. the difference between the planned trajectory and the actual trajectory performed by the robot. When performing STS motion at multiple chair heights, a control system that can rectify the motion error while taking environmental variation into account is crucially needed.

In Mughal and Iqbal (2006a, b), an optimal H₂ controller was used as a tracking scheme to perform STS motion with a biped model. A combination of H₂ and H∞ optimal controllers was also developed in Mughal and Iqbal (2008) for the same purpose. An optimal controller design is based on the optimal solution for a particular system: it is most suitable in that environment, but may not be the optimum solution for other systems or environments. A PID controller has also been implemented in Andani et al.
(2007) as a feedback controller, while the whole system was monitored by EMOSAIC, as mentioned under action selection. A PD controller was also used in Jones (2011), but the author combined it with root-orientation correction and a virtual force feedback loop to perform STS motion with a biped model. From the review, a PID or PD controller cannot be used alone to stabilize the STS motion. In both proposed methods, the authors combined the PID or PD controller with another controller functioning as an action selector or as additional feedback to the system. The reason is that STS motion is nonlinear, while PID and PD are linear controllers.

The combination of action selection and tracking control was also adopted in Prinz et al. (2007) and Rasool et al. (2010), where both implemented a fuzzy controller to track the planned trajectory. The fuzzy system in Prinz et al. (2007) supplies the required joint torques to the simulated robot, and Rasool et al. (2010) used a fuzzy compensator to modify the state space of the motion. The problem in designing a fuzzy system is the need for knowledge of the motion itself before a heuristic approach can be used to set the fuzzy parameters; the process must be repeated if the fuzzy controller is implemented in a different environmental setting or on a different type of robot.

Another artificial-intelligence approach to STS motion is a learning process, as in Faloutsos et al. (2003), Iida et al. (2004b), Kuwayama et al. (2003) and Kanoh and Itoh (2007). The learning process has to continue until the robot is able to stand, and a number of trials are needed before the system operates well. In principle such methods should adapt to changes of environment or system, but the controller has to repeat the learning process before it generates the best motion. Sakai et al. (2010) introduced multi-valued decision diagrams (MDDs), where the same problem may occur when facing a different environment or system.

From the review, AI controllers are more adaptable than PID or optimal controllers, but they require time and many samples of the STS motion for learning. To overcome these limitations, a method that is adaptable yet does not require many STS motion samples is proposed here. The proposed method is a nonlinear controller that changes the motion of the robot based on real-time feedback from the actual STS motion. Related work also uses feedback to change the robot motion, such as Konstantin Kondak (2003); however, their feedback is theoretically calculated, since their work is done only in simulation.

This study presents a controller implemented on real hardware, so feedback from the motion is acquired in real time. The project proposes using the CoP reading to manipulate the gain of a proportional controller, so that the controller adapts to the real-time condition of the STS motion. The method reduces the heuristics by calculating the real velocity of the motion, and the constant gain is replaced by a value driven by the real CoP position; a minimal sketch of this idea is given below.
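The core idea just described, a proportional velocity controller whose gain is driven by the measured CoP, can be sketched in a few lines. This is a minimal illustration rather than the paper's implementation; the coefficient value and function names are placeholders.

```python
# Sketch of a CoP-modulated proportional velocity controller:
# the gain grows with |CoP|, so corrective effort scales with how far
# the pressure centre has drifted from the ankle. Values illustrative.

def corrective_velocity(vel_error, cop_x, coefficient=1000.0):
    """New joint angular-velocity command from a velocity error
    (rad/s) and the CoP x-position (m); gain = coefficient * |CoP|."""
    gain = coefficient * abs(cop_x)
    return gain * vel_error

# A given velocity error is amplified more when the CoP is far from
# the ankle (robot closer to instability).
print(corrective_velocity(0.05, 0.005))  # mild correction
print(corrective_velocity(0.05, 0.04))   # stronger correction
```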
Summary of contributions: Two contributions are presented in this study. First, the implementation of IF-THEN rules that function as an action selector. The rules are set based on the CoP position and the angle-y reading at each moment of the motion; the action changes every instant to ensure that the effort applied is the most suitable at that time. The concept is explained in the system overview. Second, a proportional-gain identification method is proposed to ensure that the controller is suitable for tracking the whole STS motion. The gain is changed based on the CoP position, while the velocity of the whole body serves as a reference to the controller. The detailed explanation of the second contribution is also given in the system overview.

METHODOLOGY

System overview: Figure 1 shows the system overview of the proposed sit-to-stand motion. The system is designed with two main phases. In the CoM-transfer phase, the trajectory of the robot motion is planned based on the Alexander STS technique, which focuses on decreasing the force needed to perform the STS motion. In this research, forward bending and ankle-joint flexion bring the CoM of the head-arms-torso (HAT) system into the support polygon; for the NAO robot used in this project, it is located 0.03 m from the robot's ankle joint.

This study focuses on improving the performance of phase 2, which starts once the HAT CoM has been fully transferred in phase 1. In phase 2, the system controls the robot motion to a fully standing position using speed control. To determine a suitable speed parameter value, IF-THEN rules are set; the rules give a desired speed gain, which is varied by the centre-of-pressure (CoP) position along the x-axis.

NAO robot configuration: The NAO robot was used for experimentation. Three types of sensor are embedded in NAO: a gyroscope, an accelerometer and force-sensitive resistors (FSRs). The gyroscope and accelerometer generate the angle-y reading, which refers to the angle between the robot and a line perpendicular to the ground, as in Fig. 2. Four FSR units in the robot's feet give a CoP reading in metres; a sketch of this computation follows at the end of this subsection. The motor speed is between 210 and 230°/s under load, and all the motors of one leg (3 units) are aligned with each other.

System configuration: The proposed method has constant variables that must be set before it can be implemented. The CoM position is assumed to be the same between the CoM-transfer phase and the stabilization-strategy phase, i.e. the CoM is taken to equal the HAT CoM, located at the centre of the HAT with both hands parallel to the body. The region boundaries are determined by an experimental procedure whose objective is to find the minimum stability edge before use in the proposed method, as discussed in the section below.
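The CoP value used throughout phase 2 comes from the FSRs in a foot. A standard force-weighted-average computation is sketched below; the sensor coordinates are placeholders, not NAO's actual FSR geometry.

```python
# CoP x-position as the force-weighted average of the FSR positions.
# fsr_x holds each sensor's x-offset from the ankle joint (metres);
# the values below are placeholders, not NAO's real geometry.

def cop_x(forces, fsr_x=(0.07, 0.07, -0.03, -0.03)):
    """Centre of pressure along x from four FSR force readings (N)."""
    total = sum(forces)
    if total <= 0.0:          # foot not loaded: no meaningful CoP
        return 0.0
    return sum(f * x for f, x in zip(forces, fsr_x)) / total

print(cop_x((2.0, 2.0, 1.0, 1.0)))   # CoP shifted toward the toes
```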
CoM-transfer phase: In the CoM-transfer phase, the HAT CoM is transferred into the support polygon to facilitate the stabilization strategy. The path planning follows the Alexander technique (AT). To keep the robot stable, the CoM (here, the HAT CoM) must be supported by whichever parts of the robot body are in contact with the ground or the chair surface. Before the hip lifts off the chair surface, the path should transfer the HAT CoM close to the feet, because once the robot lifts off the chair the feet are the only parts in contact with the ground. This phase has two processes: (1) distance determination and (2) joint-angle and velocity decision.

In the first process, the horizontal distance between the robot's HAT CoM and the ankle joint is identified. This process functions as an automatic distance identification that is crucially needed by the second process to determine the required path. The horizontal distance x_d is determined using Eq. (1); in Eq. (1), the hip-joint term can be ignored when the HAT CoM position is adjusted to be parallel with the hip-joint position using Eq. (2). The "diff" terms refer to the differences between the hip, knee and ankle joint positions read from the sensors and the corresponding joint positions at the defined normal position. l_HAT is the length from the hip joint to the HAT CoM in cm, and l_thigh and l_shank are the lengths of the thigh and shank. Figure 2 shows the position of each joint and the defined normal position, together with typical parameter values for the standard NAO sitting position used in this research. With Eq. (1), the distance of the HAT CoM from the ankle joint can be determined for any robot once the normal sitting position has been defined.

In the second process, the hip and ankle joint angles are determined. The value x_d is used to identify the angle change needed at each joint, so that the HAT CoM ends up inside the stability region (SR). Following the Alexander technique, the method uses the hip and ankle joints to shift the upper-body weight into the stability region. In the first move, the method brings the body forward: Eq. (3) gives the required hip-joint angle change. The result of Eq. (3) is checked to make sure that the robot does not exceed the hip-joint limits; this limitation is what creates the need for an ankle-joint change. At this point, the remaining distance between the HAT CoM and the stability-region edge is calculated using Eq. (4). This remaining distance x_r determines whether an ankle-joint change is needed: if x_r = 0, the system proceeds to the second phase, whereas if x_r is positive, a new ankle-joint angle is calculated using Eq. (5).

Once both the hip and ankle joints have their target values, the system moves the robot to the desired position, starting with the hip and followed by the ankle. The hip and ankle joint trajectories are generated using cubic polynomial functions, so the joint speed decreases at the start and the end of the motion; this directly affects the dynamics of the whole-body motion. The hip and ankle joints rotate to their new angles while the knee joint stays the same: the hip joint moves first, followed by the ankle joint once the hip has reached its destination. Throughout this motion, the system monitors the projected angle-y reading to make sure the robot does not fall forward: the hip or ankle joint stops moving when the angle-y reading exceeds a limit variable, preventing the motion from generating too much forward force. (A sketch of the cubic trajectory generation is given below.)
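The cubic polynomial trajectory mentioned above, with zero velocity at both endpoints, can be sketched as follows. This is a generic minimum-boundary-condition cubic, not NAO's trajectory API; the joint values are illustrative.

```python
# Cubic point-to-point trajectory with zero start/end velocity:
# theta(t) = theta0 + a2*t^2 + a3*t^3 over a move of duration T.
def cubic_trajectory(theta0, thetaf, T):
    """Return theta(t) interpolating theta0 -> thetaf in T seconds."""
    a2 = 3.0 * (thetaf - theta0) / T**2
    a3 = -2.0 * (thetaf - theta0) / T**3
    def theta(t):
        return theta0 + a2 * t**2 + a3 * t**3
    return theta

traj = cubic_trajectory(theta0=90.0, thetaf=40.0, T=1.5)  # hip move, deg
print(traj(0.0), traj(0.75), traj(1.5))  # 90.0 65.0 40.0
```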
With the proposed algorithm, the generated path follows the AT while operating automatically when the environment, i.e. the chair height, changes. The algorithm also generates a path that respects the body limitations and chooses the most appropriate joint action. Minimal feedback from the angle-y reading is used to ensure that the path does not generate high momentum at the end of the first phase, because high momentum would make it more difficult for the second phase to control the motion.

Stabilization strategy: In this phase, the controller inputs are the CoP (in metres), the angle y, and the hip/knee/ankle joint angles. The controller outputs are the new hip-joint target angle and the joint speed. The controller first applies the IF-THEN rules to choose the correct direction, velocity and gain. The gain and the rules are based on the CoP position within three types of region, as depicted in Fig. 3 (regions defined on the robot foot based on the CoP position). The region boundaries are derived from the optimum stability-region edge value. Stability-region edge values are usually determined heuristically; in this research, the value is obtained experimentally by testing which HAT CoM positions along the x-axis do not make the robot fall. For the NAO robot used in this study, the stability-region edge is found to be 0.03 m from the ankle joint. The boundary edge is set slightly smaller than the stability-region edge x_s, at 0.02 m, because the acceptable area must not extend beyond 0.03 m behind the ankle joint. The region area also cannot be set too small, to avoid over-sensitivity of the system.

The proposed IF-THEN rules: In general, the rules are set based on the CoP and angle-y readings. The dependent variables, i.e. the velocity, the direction and the type of gain, change according to these readings. In principle, the dependent variables are set so that the STS motion always favors producing a smaller angle-y error; the amount of effort needed to minimize the angle-y error is also considered when setting them. For example, the direction of the hip joint changes only if the system senses that the angle-y trajectory is smaller than planned and the CoP is located in region B, as in Fig. 3. In all other conditions, only the velocity and the gain are changed, because a rapid change of direction would worsen the stability of the STS motion.

The robot is defined as stable when the CoP is in region M, and as becoming unstable when the CoP is in region B or F, as shown in Fig. 3. The robot's hip-joint angular velocity and direction therefore depend on whether the CoP is in region M, or in region B or F at the back or the front of the foot. The IF-THEN rules used as the action-selection controller are set as follows; a sketch in code is given below.
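The four rules (spelled out in the Fig. 1 caption at the end of the paper) translate directly into code. A minimal sketch follows, using the ±0.02 m region boundaries from the text; the returned values are illustrative labels, and the region-B threshold is read as the back of the foot (CoP < −0.02 m).

```python
# IF-THEN action selector from angle-y error and CoP position (m).
# Region boundaries (+/-0.02 m) follow the text; outputs illustrative.

def select_action(angle_y, angle_y_plan, cop):
    if angle_y > angle_y_plan and cop > 0.02:
        return dict(velocity="increase", direction="backward", region="F")
    if angle_y < angle_y_plan and cop < -0.02:
        return dict(velocity="from_controller", direction="forward",
                    region="B")
    if angle_y > angle_y_plan and -0.02 < cop < 0.02:
        return dict(velocity="increase", direction="backward", region="M")
    return dict(velocity="decrease", direction="backward", region="M")

print(select_action(46.0, 44.0, 0.035))  # forward fall -> region F rule
```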
In the first case, the angle-y and CoP readings indicate that the robot is approaching a forward fall; the system's action is to increase the HAT velocity while moving backward, with the proportional gain based on the unstable region F. The second case is the converse of the first: the action changes the HAT direction to the front, while the velocity is the new velocity from the controller. The other two cases represent the CoP being in the stable region M while the angle-y reading moves away from the planned trajectory; in both cases only the angular velocity of the HAT is changed, increased if the angle y is greater than planned and decreased if it is smaller. The proposed IF-THEN rules do not undergo any fuzzification process, which reduces the heuristic element; furthermore, the rules operate in real time, with the angle-y and CoP readings updated at every moment. After the dependent variables have been chosen, the system proceeds to the proportional velocity controller, as illustrated in Fig. 1 after the IF-THEN rules box.

The proposed proportional-gain identification method: The proposed method implements a proportional velocity controller with an adjustable gain. The explanation begins with the gain identification method and then describes the process of identifying the angular velocity used as the controller output.

Proportional gain: The gain is determined by the CoP position. Since the CoP changes at every moment of the motion, the gain fed to the proportional velocity controller changes with it; in this way, the gain provided to the controller is the most suitable value to rectify the error. To achieve this, a partitioning into regions as in Fig. 4 prepares the best coefficient for the CoP reading before it is used as the gain. The coefficient is set per defined region so that it only affects the gain when the effort is really needed; the coefficient amplifies the CoP reading before it is used as a gain. The coefficients G1 and G2 are tuned by changing their values until the angle-y trajectory produces the lowest RMSE; details of the tuning are discussed in the results. The CoP reading is taken from one foot, because the system is two-dimensional (X-Y) and the CoP positions are assumed to be the same for each foot. First, the coefficient is multiplied by the CoP reading to produce the gain g, as in Eq. (6); the result is passed to the controller, which produces a new hip-joint angular velocity, as in Eq. (7). In all the cases set by the rules described before, the gain functions in the same way; whether the original hip-joint angular velocity is increased or decreased is not controlled in this section, but in the velocity-variable subsection.

After the joints have reached the target position at time t = t_f, the controller must keep the robot balanced for t > t_f. The method of identifying the gain at this stage is the same as before (t < t_f), but the applied coefficient is different: from Fig. 5, this coefficient is G3. G3 differs from G1 and G2 because at this stage the robot should be in a static position, and the only motion left is due to the error accumulated before the robot reached the target position. It is not suitable to apply the same coefficients as before, because the control requirement differs: G1 and G2 control the motion in the dynamic stage, and G3 in the static stage. To ensure that the robot does not tumble due to this residual error, the controller rectifies it with the same method, i.e. by controlling the velocity of the motion.
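Equations (6)-(7) are garbled in this copy, but the described flow, a region-dependent coefficient times the CoP magnitude giving the gain, which then scales the velocity term, can be sketched as follows. The coefficient values are illustrative (the tuned values reported later are G1, G2 = 1000 and G3 = 2500).

```python
# Region-dependent gain identification, Eqs. (6)-(7) in spirit:
# gain g = G_region * |CoP|; new hip velocity = g * velocity term.
COEFF = {"M": 2000.0, "B": 1000.0, "F": 1000.0, "static": 2500.0}

def hip_velocity(cop, region, velocity_term, standing_done=False):
    key = "static" if standing_done else region
    gain = COEFF[key] * abs(cop)          # Eq. (6), sketched
    return gain * velocity_term           # Eq. (7), sketched

print(hip_velocity(0.01, "M", 0.05))          # dynamic phase
print(hip_velocity(0.01, "M", 0.05, True))    # static balancing, G3
```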
With the CoP reading and the region coefficients as prerequisites for determining the gain, the proportional velocity controller is able to change its effort at each moment of the motion. This turns the linear proportional controller into a controller that can handle a nonlinear motion, and the division into regions with different coefficients increases the sensitivity of the proposed method.

Velocity variable: Following the rules, the proposed method changes the hip-joint angular velocity in order to rectify the error produced by the motion. STS is a circular motion in which tangential and angular velocities exist at each moment. When momentum, friction and gravity interfere with the motion, the tangential velocity departs from what was planned; the developed controller should reduce this departure to keep the motion close to plan and stable. The tangential velocity of the whole body is therefore proposed as the tracking variable. To this end, the HAT CoM position is taken as the end point of an imaginary link starting from the ankle joint, referred to as the whole-body link, as shown in Fig. 6. From here on, the CoM of the whole robot is placed at the HAT CoM, since the masses of the shank and thighs move against each other along the x-axis and the HAT contributes the most to the dynamics of the motion: Hutchinson et al. (1994) report a 10-15% dynamic contribution from the hip joint (HAT), while the knee and ankle joints contribute less than 1%. Another link, from the hip joint to the HAT CoM, forms a second system referred to as the HAT link.

The height H of the centre of mass above the ground is obtained from the angle measurement. Using trigonometry, Eq. (8) gives the horizontal distance between the ankle joint and the CoM. The length of the whole-body link is determined by Eq. (9), and the length of the HAT link always equals the distance l_HAT between the CoM and the hip joint. θ_plan and θ_act denote the planned and actual angle y. The general torque equation for the whole-body link is τ = mgH sin θ, which can also be written as τ = mH²θ̈; combining the two gives Eq. (10), θ̈ = (g/H) sin θ. The final formula, evaluated for the planned and actual values, is used to calculate the angle-y error as in Eq. (11). From the acceleration error in Eq. (11), the velocity error is determined by integrating θ̈ over one time step. From the angular-velocity error of the whole body, the tangential velocity v_t at the HAT CoM is determined using Eq. (12), and the required hip-joint angular velocity ω_new is determined using Eq. (13). With this new angular velocity, the HAT motion produces a tangential velocity that counters the tangential velocity generated by the whole body, so that the total v_t is zero. The new direction of the hip joint is determined from the angle-y reading: from Eq. (11), θ̈ is positive or negative depending on the sign of the angle-y error.
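The chain from torque balance to the corrective hip velocity (Eqs. (8)-(13), garbled in this copy) can be sketched as an inverted-pendulum computation. Everything below is an illustrative reconstruction under the stated torque balance, not the paper's code; the numerical inputs are placeholders.

```python
# Whole-body link as an inverted pendulum about the ankle:
# torque balance m*g*H*sin(theta) = m*H^2 * theta_ddot gives
# theta_ddot = (g/H) * sin(theta). The velocity error is the
# integrated difference between actual and planned accelerations.
import math

G = 9.81  # m/s^2

def velocity_error(theta_act, theta_plan, H, dt):
    """Angular-velocity error of the whole-body link over one step."""
    acc_err = (G / H) * (math.sin(theta_act) - math.sin(theta_plan))
    return acc_err * dt                      # integrate over dt

def counter_hip_velocity(omega_err, H_body, l_hat):
    """Hip angular velocity whose tangential speed at the HAT CoM
    cancels the whole-body tangential speed v_t = omega_err * H."""
    v_t = omega_err * H_body                 # Eq. (12), sketched
    return -v_t / l_hat                      # Eq. (13), sketched

w_err = velocity_error(math.radians(46), math.radians(44), 0.25, 0.02)
print(counter_hip_velocity(w_err, 0.25, 0.10))
```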
This in turn influences the values of v_t and ω_new in Eqs. (12) and (13). With ω_new, the final commanded angular velocity must respect the rules discussed before. In the first case, the hip-joint angular velocity is increased: the original hip-joint velocity ω at that moment is added to ω_new, where ω_new is the velocity error multiplied by the gain based on region F. In the second case, the hip-joint angular velocity is ω_new alone, because a direction change is involved. In the other two cases, the original hip-joint angular velocity is increased by ω_new when speeding up is needed, and decreased by ω_new when the system needs to slow the motion down.

RESULTS AND DISCUSSION

This section discusses in detail the results of the experiments conducted. The objective of the experiments is to validate the proposed stabilization-strategy method. The experiments were done using a NAO robot version 3.3; the controller scheme was written as a Python script, and no external sensors were used. In every test, both robot heels must touch the chair's front legs, and each test was repeated 5 times. The angle y and the CoP position were observed to study the performance, which was measured by the root-mean-square error (RMSE) between the actual and planned angle-y trajectories.

Results: The NAO robot was set at the sitting position of Fig. 2 on a wooden chair. The chair height is 11 cm, with the knee joint at 90°; this height equals the total length of the shank plus the foot thickness. Three coefficients, G1, G2 and G3, were tuned until the lowest RMSE of the angle-y trajectory was found. The M region boundaries are at −0.02 and 0.02 m, the B region spans −0.02 to −0.05 m, and the F region spans 0.02 to 0.1 m; the CoP position 0.0 m is at the ankle joint (Fig. 3). At first, all coefficients were set at 1 to find a suitable standing time; the result is shown in Fig. 7. Let the coefficient for the M region be G1 and that for the B and F regions be G2, as in Fig. 4; the gain coefficient that controls the robot after the complete standing position is G3. G1 and G2 were set at 1000 and G3 was varied until the RMSE became smaller: in the range 1000 to 3000, the smallest RMSE is 1.387° at G3 = 2500. The RMSE was calculated over the performance time from 3.25 to 6 s, because at 3.25 s the robot knee and ankle have reached the desired position (complete standing). Next, G1 was varied with G2 = 1000 and G3 = 2500, and the same method was used to find G2.

Figures 7 and 8 show the RMSE against the coefficients G1, G3 and G2, while Figs. 9-11 show the angle-y and CoP readings from the robot; Fig. 12 presents the curve of the gain value within the regions. The robot is able to perform the STS motion with a standing time of 1.5 s, from 1.6 s to 3.2 s. The controller switches to the constant gain coefficient G3 after 3.2 s of operation, once all joints have reached their target angles. Figure 13 shows the NAO robot motion after all coefficients have been identified.

Discussion: When all coefficients equal 1, the lowest RMSE occurs for a standing period of 2 s, as shown in Fig. 7; however, the robot then needs 3.5 s to stand. To decrease the performance time, a suitable gain is needed so that the proportional controller provides a velocity equivalent to the present error.
From Fig. 9a, the average RMSE of the angle-y trajectory is 6.6858° when all coefficients are set at 1000. The angle-y graph shows that the actual trajectory moves ahead of the planned trajectory at the start, owing to the momentum generated in the first phase; the average actual trajectory at this moment is 44.60°. After 3 s of operation, the actual trajectory moves backward again (−6.3°) relative to the planned trajectory (2.86°) once all joints have stopped. This happens because the new velocity provided by the controller to decrease the error at the beginning of the motion still affects the motion at this moment; furthermore, the gravitational force acting on the robot carries the system backward, since the last body motion was in that direction. The robot motion does not simply end at 3.5 s but begins to move forward again, a motion influenced by both the controller and the gravitational force. The CoP position in Fig. 9b likewise shows the pressure changing continuously until 3 to 5 s before starting to stabilize; the CoP is considered stable because it always remains within the foot area.

The error occurring after all joints are already at the target position was decreased by tuning the coefficient G3. In Fig. 10, the actual angle-y trajectory after all joints have stopped shows less error than the graph in Fig. 9: the new velocity generated by the controller is able to counter the error caused by the gravitational force and the previous velocity acting on the body. The CoP position in Fig. 10 starts to stabilize after 3.28 s of operation. However, the CoP still shifts at the beginning and, similarly, the actual angle-y trajectory does not follow the plan initially; this is because the coefficient G3 is only active after all joints are at the target position.

To ensure that the system provides the most suitable gain, G1 and G2 are needed. The presence of both coefficients increases the stability of the motion, with the average RMSE of the angle-y trajectory decreasing to 4.00968°, as shown in Fig. 11. The angle y already moves backward after 2.5 s of operation, because the regions give the system high sensitivity in controlling the motion. From the experiment, the M region needs a higher coefficient than the B and F regions, as represented in Fig. 12; this is because the CoP reading is smaller in the M region than in the other two regions, and the gain coefficient boosts the CoP value before it is used as the gain value in the controller. If the same coefficient were used in the B and F regions, the gain would become larger than necessary.

The angle-y performance observed while the system is in the stabilization-strategy phase shows the actual value exceeding the plan. An overshoot of the actual angle-y trajectory occurs at the beginning (mostly at 1.6 s) in all the graphs of Figs. 9-11, because of the high momentum created by the robot body in the CoM-transfer phase: the change of direction of all joints generates forward momentum. The CoP reading also shows that the pressure is initially located at the centre of the support polygon (−0.03 to 0.03 m) and then quickly shifts to the front. Comparing the graphs in Figs. 9 and 11, the difference between the planned and actual angle-y trajectories causes the CoP to be located further from the origin (0.00 m).
At the end of the motion, the CoP reading does not settle consistently at one point between tests. Nevertheless, the robot is able to stand completely, as the actual angle-y value moves closer to the planned value as time increases. Although the CoP lies outside the defined stable region, it is still within the foot area, so the robot can stand stably, because only small movements are made by the robot at this stage. The last movement, a hip-joint change, ensures that angle y approaches the planned angle-y trajectory as fast as possible. In Fig. 11 the graph is already stable after 4 s of operation, owing to the small error remaining before the standing posture is completed, whereas the graph in Fig. 9 is only stable after 5 s of operation.

CONCLUSION

The IF-THEN rules based on the CoP position and the angle-y trajectory help increase the capability to take proper actions. The proportional controller, with its gain fed from the defined regions based on the CoP reading, increases the flexibility of the controller in handling a nonlinear motion. The Alexander technique, proposed as a guideline in gait planning and able to transfer the HAT CoM into the defined support polygon, increases the stability of the whole motion. The proposed control method is able to control the robot performing the STS motion within 3.2 s, with an RMSE of 4.0021°; without a control system in the motion, the robot collapses. It is recommended for future work that the proposed control method be tested on other humanoid robots to assess its robustness and capability. Comparisons with other developed control methods, using the same experimental tools, can be made to verify the effectiveness of the method. Furthermore, the STS dynamic model can be diversified to identify the best model for this system. In the future, the method and algorithm will also be tested with various chair heights to validate the CoM-transfer phase for an autonomous STS motion system.

Fig. 1: Overall system overview for stable sit-to-stand motion, including the IF-THEN rules:
IF angle y > plan AND CoP > 0.02 m THEN: (1) hip-joint velocity is increased, (2) the HAT moves in the backward direction, (3) the gain is based on region F.
IF angle y < plan AND CoP < −0.02 m THEN: (1) hip-joint velocity is the body velocity error, (2) the HAT moves in the forward direction, (3) the gain is based on region B.
IF angle y > plan AND −0.02 < CoP < 0.02 THEN: (1) hip-joint velocity is increased, (2) the HAT moves in the backward direction, (3) the gain is based on region M.
IF angle y < plan AND −0.02 < CoP < 0.02 THEN: (1) hip-joint velocity is decreased, (2) the HAT moves in the backward direction, (3) the gain is based on region M.
Fig. 4: Coefficient labels in the defined regions.
Fig. 5: Positions of the coefficients G1, G2 and G3 along the motion trajectory.
Fig. 7: RMSE of the angle-y trajectory for different STS motion periods (RMSE = 10 indicates that the motion collapsed).
From primordial seed magnetic fields to the galactic dynamo The origin and maintenance of coherent magnetic fields in the Universe is reviewed, with an emphasis on the possible challenges that arise in their theoretical understanding. We begin with the interesting possibility that magnetic fields originated, at some level, from the early universe: during inflation, the electroweak phase transition, or the quark-hadron phase transition. These mechanisms can give rise to fields which could be strong, but often with much smaller coherence scales than galactic scales; their subsequent turbulent decay decreases their strength but increases their coherence. We then turn to astrophysical batteries, which can generate seed magnetic fields whose coherence scale can be large but whose strength is generally very small. These seed fields need to be further amplified and maintained by a dynamo to explain the observed magnetic fields in galaxies. The basic ideas behind both small-scale and large-scale turbulent dynamos are outlined. The small-scale dynamo may help explain the first magnetization of young galaxies, while the large-scale dynamo is important for the generation of fields with scales larger than the stirring scale, as observed in nearby disk galaxies. The current theoretical challenges that turbulent dynamos encounter, and their possible resolution, are discussed. Introduction The universe is magnetized, right from the Earth, the Sun and other stars to disk galaxies, galaxy clusters and perhaps also the intergalactic medium (IGM) in voids. In nearby disk galaxies, magnetic fields are observed to have both a coherent component of order a few microgauss, ordered on scales of a few to ten kiloparsecs (kpc), and a random component with scales of parsecs to tens of parsecs [1-3]. In these galaxies, both the stars and the gas of the interstellar medium (ISM) lie in a thin disk supported against gravity by rotation. It is not clear what the strength and structure of magnetic fields are in the other major type of galaxies, the ellipticals. This is perhaps related to the fact that normal ellipticals have much lower active star formation and lack the requisite cosmic-ray electrons for producing significant synchrotron emission. There is tentative evidence that even young galaxies, several billion years younger than the Milky Way, host ordered microgauss-strength magnetic fields [4-6]. Magnetic fields of similar strength and coherence are detected even in the hot plasma filling the most massive collapsed objects in the universe, rich clusters of galaxies [7]. There is also indirect evidence for a lower limit, of order 10⁻¹⁶ G, on the magnetic field contained in the intergalactic medium of the large-scale void regions between galaxies [8, 9] (see however [10]). This strength refers to a coherence scale of a Mpc; the field needs to be stronger if the coherence scale is smaller. The origin and maintenance of cosmic magnetism is an outstanding question of modern astrophysics. We focus here on galactic magnetic fields and trace their origin and maintenance from the early universe to the present day. Magnetic fields of the observed strength need to be constantly maintained against turbulent decay, the turbulence being either self-generated through the Lorentz force or driven by other forces. This maintenance is achieved by electromagnetic induction due to motions in a pre-existing magnetic field: such motions can induce an electric field with a curl, which, by Faraday's law, can maintain the magnetic field.
The resulting evolution of the magnetic field B is governed by the induction equation,

∂B/∂t = ∇ × (V × B) + η∇²B,   (1)

where V is the fluid velocity and η the resistivity of the plasma. The first term in the induction equation describes electromagnetic induction (the generation of an electric field in a conductor moving across a magnetic field), whereas the second term is responsible for its diffusion and resistive decay. If η → 0, the magnetic flux through any surface moving with the fluid remains constant. The relative importance of induction versus resistance is measured by the dimensionless magnetic Reynolds number R_m = vl/η, where v and l are typical values of the fluid velocity and length scale, respectively. For interstellar turbulence, adopting v ∼ 10 km s⁻¹ [11], l ∼ 100 pc [12] and the Spitzer value of the resistivity η ∼ 10⁷ cm² s⁻¹, as applicable to an ionized plasma at a temperature T ∼ 10⁴ K, we have R_m ∼ 3 × 10¹⁹ ≫ 1. From Eq. (1) we see that at least a seed magnetic field must be present before induction can amplify it. It turns out that most ideas of seed-field generation lead to magnetic fields which are much weaker than observed; they then need to be amplified and maintained, a process called the dynamo. We review ideas for both these aspects.

Early Universe origin

Seed magnetic fields could be a relic from the early Universe, arising during the inflationary epoch or in a later phase transition, when the electroweak symmetry is broken or when quarks combine into hadrons (for reviews see [13, 14]). Indeed, if the evidence for weak, femtogauss magnetic fields in the void regions is firmed up, an early-universe mechanism would provide a natural explanation; such a possibility can also help probe the physics of the early universe. In this section we use the natural system of units in which the Planck constant, the speed of light and the Boltzmann constant are equal to 1.

In the expanding universe all length scales increase in proportion to the expansion factor a(t). Thus, if the magnetic flux is frozen, the field strength at time t decreases, or redshifts, as B(t) ∝ 1/a²(t). Neglecting the effects of dissipation, its energy density then decreases as ρ_B(t) = B²(t)/(8π) ∝ 1/a⁴(t). The energy density of the cosmic microwave background radiation (CMB), ρ_γ(t), a relic of the hot big-bang beginning of the universe, also decreases with expansion in the same manner. This implies the approximate constancy of the ratio r_B = ρ_B(t)/ρ_γ(t) (approximate, because particle annihilation at certain epochs can increase ρ_γ(t)). This motivates characterizing the strength of the primordial field by either r_B or B₀, the field strength at the present epoch, as a function of the scale L over which the field is averaged. A value B₀ ∼ 3.2 μG corresponds to the field having the same energy density as the CMB today, i.e. r_B = 1. Observations of the CMB anisotropy and of structure formation lead to upper limits on B₀ at the nanogauss level, assuming a nearly scale-invariant magnetic spectrum [13-17].
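Since ρ_B ∝ B², the relation between r_B and B₀ is just a square root. A two-line check, using the 3.2 μG equipartition value quoted above:

```python
# Present-day comoving field strength for a relic ratio r_B, using
# B0(r_B = 1) ~ 3.2 microgauss (CMB equipartition) from the text.
import math

def B0_microgauss(r_B):
    return 3.2 * math.sqrt(r_B)

print(B0_microgauss(0.01))   # ~0.32 uG, matching the ~0.3 uG quoted
                             # later for fields carrying r_B = 0.01
```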
Generation during Inflation

The seeds of the structures we see in the universe are thought to have originated during the inflationary epoch, from the amplification of quantum vacuum fluctuations of the scalar field driving the rapid accelerated expansion of the universe. Inflation has several features that are also useful for generating coherent seed magnetic fields [18]. First, the rapid expansion during inflation stretches small-scale wave modes to very large correlation scales, corresponding to galaxies and larger. Second, such expansion dilutes any pre-existing charge densities to negligible levels. The conductivity of the universe is then negligible, there is no constraint from the conservation of magnetic flux, and one can generate magnetic fields starting from a zero field. The idea is to excite quantum fluctuations of the vacuum state of the electromagnetic (perhaps more correctly, hypermagnetic) field while a given mode is within what is known as the Hubble radius; these fluctuations transit to random classical fluctuations as the mode is stretched well beyond the Hubble scale. Subsequently, when the universe reheats and generates charged particles, the electric field is shorted out and damped to zero, while the magnetic part of what was once an electromagnetic wave is frozen into the resulting plasma.

This idea, however, faces one major difficulty. The conventional electromagnetic (EM) action S_EM is invariant under a conformal transformation of the metric (g_μν), given by g*_μν = Ω² g_μν. Moreover, the geometry of the Friedmann-Robertson-Walker (FRW) expanding universe itself transforms to its flat-space version under a suitable conformal transformation. Maxwell's equations, and consequently the electromagnetic wave equation, then transform to their flat space-time form. In such a case, EM wave fluctuations cannot be amplified in an FRW universe: the electromagnetic field simply decays with expansion as 1/a²(t), which is very drastic during inflation. Significant inflationary magnetogenesis therefore requires a mechanism for breaking the conformal invariance of the electromagnetic action, so that the decrease of the field becomes milder, say B ∼ 1/a^ǫ with ǫ ≪ 1. A variety of models where such behaviour obtains have been suggested, one of them being to couple a scalar field φ (perhaps the inflaton responsible for driving inflation) to the EM action as S = f²(φ) S_EM during inflation [18, 19]. It turns out that in this model one gets a scale-invariant spectrum of magnetic fields for f ∝ a² or f ∝ a⁻³, with a present-day amplitude that depends on H/M_pl, where H is the Hubble expansion rate in energy units during inflation and M_pl is the Planck energy (the explicit expression is given in [14]). Thus, for specific evolutionary behaviour of the coupling function f, strong enough fields can be generated.

Constraints and Caveats

A number of constraints and caveats arise in models of inflationary magnetogenesis. First, a time-dependent coupling f in front of the EM action implies that electric and magnetic fields evolve differently. For example, in the model with f ∝ a⁻³, the electric field increases rapidly with time even though the magnetic field remains almost constant; its energy density can then begin to exceed the inflaton energy density, causing a back-reaction problem [20, 21]. This does not happen in the model with f ∝ a². In the latter model, however, the coupling function f = f_i (a/a_i)² grows enormously during inflation from its initial value f_i at a = a_i. When the interaction of the EM field with charged particles is taken into account, the value of f at the end of reheating, f₀, renormalizes the electric charge from e to e_N = e/f₀². Suppose we require the charge to take its present-day value at the end of inflation, i.e. f₀ = 1. Then f_i ≪ 1 initially, and thus e_N = e/f_i² ≫ e at early times. Demozzi et al. [22] argued that the theory is not trustworthy in this case, as the EM field is in a strongly coupled regime. Alternatively, suppose one starts with a weakly coupled theory where f_i ∼ 1.
Then f_0 ≫ f_i by the end of inflation, and so the renormalized charge e_N ≪ e. When inflation ends, the interaction of the electromagnetic field with the charges will then be extremely weak. A third potential problem, raised by [23], is that the creation of charged particles by the generated electric fields, due to the Schwinger effect, can increase the conductivity so much that magnetic field generation freezes. We have built models which attempt to address these issues by having a rising f during inflation followed by a decreasing f until reheating; these models, however, predict a blue magnetic field spectrum, dρ_B/d ln k ∝ k^4 (k is the comoving wavenumber), and require a low energy scale of inflation and reheating [24,25]. The spectrum is cut off at the Hubble wavenumber of reheating. The field is also helical when one adds a parity breaking piece to the EM action [25], in which case it orders itself considerably as it decays (see below). We find that a scenario with reheating at a temperature of 100 GeV leads to present day field strengths of order B_0 = 4 × 10^-11 G with a coherence scale of 70 kpc.

Generation during phase transitions

As the universe expands and cools from very high temperatures, it goes through the electroweak (EW) phase transition (at T = T_c ∼ 100 GeV) and the quark-hadron (QCD) phase transition (at T_c ∼ 150 MeV). Significant magnetic field generation can take place in these phase transitions, especially if they are of first order. In this case, the transition to the new phase occurs in bubbles nucleating in the old phase. These bubbles expand and collide with each other until the universe transits completely to the new phase. In these bubble collisions battery effects can operate to generate a seed magnetic field, which is further amplified by a dynamo due to the turbulence generated during the collisions [26]. The consequences of such a picture have been studied for both the EW phase transition [27] and the QCD phase transition [28,29]. More subtle effects have also been considered, invoking gradients in the Higgs field during the EW phase transition [30], linking baryogenesis with magnetogenesis [31,32], or using the chiral anomaly of weak interactions [33,34]. A brief review of some of these effects is given in [14]. The properties of the magnetic fields generated in all these models are uncertain, but ρ_B can be a few percent of ρ_γ. The coherence scale of the field can range from as small as a few tens of the thermal de Broglie wavelength 1/T up to a significant fraction f_c of the Hubble scale. For the EW phase transition, which occurs at a temperature of about 100 GeV, the proper Hubble scale is of order a cm, and the comoving coherence scale will then be of order 10^15 f_c cm [14]. For the QCD phase transition, which occurs at a temperature of T ∼ 150 MeV, the Hubble radius is ∼ 6.4 × 10^5 cm, and the comoving coherence scale is of order (f_c/3) pc. Moreover, the present-day strength of a magnetic field which has, say, a fraction r_B = 0.01 is B_0 ∼ 0.3 µG.
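As a rough check on these orders of magnitude, the comoving coherence scales just quoted follow from redshifting the proper Hubble radius at the transition to the present epoch. The short sketch below is ours, not the paper's; it uses the standard entropy-conservation scaling a T g^{1/3} ≈ const (derived more carefully in the next section), and the values of T_0 and of the relativistic degrees of freedom g are assumptions.

```python
# Redshift a proper length at a phase transition to a comoving
# (present-day) length using a0/ag = (T_g/T_0) * (g_g/g_0)**(1/3).
T0_eV = 2.35e-4   # CMB temperature today, in eV (assumed)
g0 = 3.91         # entropy degrees of freedom today (assumed)
pc = 3.086e18     # parsec in cm

def comoving_cm(proper_cm, T_eV, g):
    return proper_cm * (T_eV / T0_eV) * (g / g0) ** (1.0 / 3.0)

# EW transition: T ~ 100 GeV, proper Hubble radius ~ 1 cm (text values)
l_ew = comoving_cm(1.0, 100e9, 106.75)
# QCD transition: T ~ 150 MeV, proper Hubble radius ~ 6.4e5 cm
l_qcd = comoving_cm(6.4e5, 150e6, 61.75)
print(f"EW : {l_ew:.1e} cm")                        # ~1e15 cm
print(f"QCD: {l_qcd:.1e} cm = {l_qcd/pc:.2f} pc")   # ~1/3 pc
```

For f_c = 1 this reproduces the ∼10^15 cm (EW) and ∼(1/3) pc (QCD) comoving coherence scales quoted above.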
Magnetic field evolution in the early universe

The small-scale magnetic fields generated in these phase transitions, or in the inflationary models with a blue spectrum [24,25], are strong enough to drive decaying magnetohydrodynamic (MHD) turbulence [35,36]. The magnetic field energy density then decreases faster than the 1/a^2 dilution due to expansion. However, the field coherence scale simultaneously increases during the decay.

Note that in the radiation dominated universe, the MHD equations reduce to their flat space-time version provided one uses conformally transformed quantities, for example B* = B a^2, conformal time τ = ∫ dt/a(t) and the comoving spatial coordinate x = r/a(t) (r is the proper spatial coordinate). Moreover, the plasma in the early universe is an excellent electrical conductor, but its viscosity increases whenever the mean free path of a particle species (like the neutrino or photon) grows to be comparable with the coherence scale of the motions. In epochs when viscosity dominates, the peculiar velocity induced by the Lorentz force is damped and hence does not in turn distort the field, freezing its evolution. In all other epochs, the Lorentz force induced velocity leads to decaying MHD turbulence. In the decay of fluid turbulence in flat space-time, a general feature is the preservation of large scales (larger than the coherence scale) during the decay; the evolution of the energy and coherence scale then depends on the energy spectrum on such large scales [37]. The case of nonhelical magnetic field decay appears to be more complicated. Numerical simulations find that the comoving magnetic energy density E_M = B*^2/8π decays more slowly than for pure hydro turbulence, as E_M ∝ τ^-1, and undergoes an inverse transfer of energy, with the coherence scale L_c(τ) increasing as τ^{1/2} [38,39]. If the field is fully helical, magnetic helicity conservation constrains the decay and slows it down further, to E_M ∝ τ^{-2/3}, while L_c increases faster, as τ^{2/3} [36,40]. If the field is partially helical, its decay at first proceeds as in the nonhelical case while conserving helicity. This eventually makes the field fully helical, after which it decays more slowly, as in the fully helical case. For the radiation dominated universe with a(t) ∝ t^{1/2}, we also have τ ∝ t^{1/2} ∝ a(t). Thus a power law decay in conformal time is still a power law decay in physical time (though slower). When matter starts dominating, the transformation to the flat space-time MHD equations is different [36] and the relevant time coordinate becomes τ̃ = ∫ dt/a^{3/2}. Since the expansion is a(t) ∝ t^{2/3} in the matter dominated era, τ̃ ∝ ln(t), and any power law decay of the comoving magnetic field in τ̃ becomes only a logarithmic decay in real time. Therefore the turbulent decay of the field almost freezes after matter domination.

Predicted field strengths and coherence scales

These ideas have been put together by several authors [13,14,24,25,36,38,41] to estimate B_0 and L_0, the field strength and coherence scale respectively at the present epoch, of magnetic fields which undergo such nonlinear evolution. First, a general constraint relating B_0 and L_0 can be found from the criterion that the field decays to a strength where the Alfvén crossing time across L_0 equals the present age of the universe [36]. This gives

L_0 ≃ v_A(B_0) t_0 , i.e. L_0 ∝ B_0 ,   (3)

where v_A(B_0) is the Alfvén speed corresponding to the present-day field in the present-day plasma and t_0 the age of the universe. The field strength B_0 itself can be estimated assuming the scaling laws for turbulent decay, starting from generation (when a = a_g, T = T_g) to the end of the radiation era at matter-radiation equality (a = a_eq, T ∼ 1 eV). As discussed above, we assume the comoving field strength changes negligibly thereafter. For nonhelical fields, assuming the inverse transfer found in [38], this gives B_0 = (a_eq/a_g)^{-1/2} B_g, where B_g is the comoving magnetic field B* at generation. The scale factor ratio can be related to the temperature ratio using entropy conservation during the radiation era.
Entropy conservation implies that a T g^{1/3} remains constant with expansion, where g denotes the number of relativistic degrees of freedom. We assume g ∼ 100 at the epoch of generation and g ∼ 4 at matter-radiation equality. This gives

B_0 = B_g [(T_g/T_eq)(g_g/g_eq)^{1/3}]^{-1/2} ∼ 2 × 10^-6 T_100^{-1/2} B_g ,

where T_100 = T_g/(100 GeV), and the coherence scale is obtained by using this B_0 in Eq. (3). For the case where the field is only partially helical, with initial helical fraction h_g, the field decays as in the nonhelical case up to an epoch τ_h (with expansion factor a_h) at which the decay has made it fully helical, and more slowly thereafter. The initial helical fraction is defined as the ratio of the initial helicity H_g to the maximal helicity H_max for a given energy, which for a peaked magnetic spectrum is H_max ≃ B_g^2 L_c(τ_g) [36]. We note that the initial helicity H_g is nearly conserved while the energy decays. Therefore the fractional helicity subsequently scales as h ≃ H_g/(E_M(τ) L_c(τ)) = h_g (τ/τ_g)^{1/2} and becomes unity when (τ_g/τ_h) = (a_g/a_h) = h_g^2, or when (a_g/a_h)^{1/6} = h_g^{1/3}. Hence B_0 = B_g (a_eq/a_g)^{-1/3} h_g^{1/3} and, putting in numbers,

B_0 ∼ 1.5 × 10^-4 T_100^{-1/3} h_g^{1/3} B_g .

The above estimates agree reasonably with those of Banerjee and Jedamzik [36] from their detailed simulations of magnetic field decay.
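To make these scalings concrete, the sketch below (ours; the values of B_g, T_g and the degrees of freedom are illustrative assumptions) evaluates the nonhelical and helical survival estimates.

```python
# Present-day comoving field after turbulent decay, from the text:
#   nonhelical inverse transfer: B_0 = B_g * (a_eq/a_g)**(-1/2)
#   partially helical:           B_0 = B_g * (a_eq/a_g)**(-1/3) * h_g**(1/3)
# with a_eq/a_g = (T_g/T_eq) * (g_g/g_eq)**(1/3) from entropy conservation.

def a_ratio(T_g_eV, T_eq_eV=1.0, g_g=100.0, g_eq=4.0):
    return (T_g_eV / T_eq_eV) * (g_g / g_eq) ** (1.0 / 3.0)

B_g = 0.3e-6           # comoving field at generation in G (r_B ~ 0.01)
r = a_ratio(100e9)     # EW-scale generation, T_g = 100 GeV

print(f"a_eq/a_g       ~ {r:.1e}")
print(f"nonhelical B_0 ~ {B_g * r**-0.5:.1e} G")
for h_g in (1.0, 0.01):
    print(f"helical (h_g={h_g}) B_0 ~ {B_g * r**(-1/3) * h_g**(1/3):.1e} G")
```

A fully helical field thus survives at a strength roughly two orders of magnitude above the nonhelical estimate, which is why helicity is so important for relic fields to remain observable.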
Thus primordial magnetic fields surviving from the early universe could account for the lower limits on the magnetic field in voids coming from the γ-ray observations of blazars emitting at TeV energies. Their field strengths and coherence scales could even be such as to influence other physical processes in the universe.

It is important to mention the following caveat regarding the γ-ray constraints. For this we recall how the constraint is obtained. Firstly, it is argued that high energy TeV photons from a blazar interact with eV photons in intergalactic space to produce a beam of relativistic electron-positron (e±) pairs, after travelling distances of order tens of Mpc, typically into the void regions. This e± beam inverse Compton scatters the ambient CMB photons to GeV energies, such that one should see a GeV γ-ray halo around every TeV blazar; no such halos are detected. This null result can be explained if the beam gets sufficiently spread out due to the deflection of electrons and positrons in opposite directions by an intergalactic magnetic field, which leads to the lower limit on such fields. However, there is an ongoing debate as to whether the e± beam traversing the intergalactic medium loses its energy to plasma instabilities at a rate faster than the inverse Compton rate [10,13,42-46]. In such a case, one would not see a GeV halo around the blazar even if there were no magnetic fields in the voids, and consequently no lower limit on the intergalactic field would be obtained. Irrespective of the final outcome of this debate, the fact that one can potentially probe such a weak intergalactic magnetic field from γ-ray astronomy is very exciting. Of course, such a field need not be primordial, but could arise from the pollution of magnetic fields from galactic outflows [47,48]; the volume filling factor of such outflows is however uncertain.

An important challenge for magnetogenesis scenarios involving phase transitions is the requirement that they ideally be of first order. The EW and QCD phase transitions are first order only in extensions of the standard model of particle physics [49][50][51]. In the standard model, the EW and QCD phase transitions are 'crossover' transitions, with thermodynamic variables changing continuously but significantly in a narrow range of temperatures around the critical temperature T_c [52,53]. Magnetogenesis models which involve first order phase transitions in the early universe, and/or which generate strong magnetic fields with a blue power spectrum, as in the inflationary magnetogenesis models of [24,25], can lead to a significant stochastic gravitational wave background. This could be probed by space-based gravitational wave detectors like LISA in the future [54][55][56][57].

Astrophysical batteries and seed magnetic fields

The Universe is charge neutral, but positive and negative charged particles have different masses, a feature which is at the root of many astrophysical battery mechanisms.

Biermann batteries

For example, suppose a pressure gradient is applied to a fully ionized hydrogen plasma. Pressure depends on number density and temperature, and if these are the same for electrons and protons, the force on these fluid components will also be identical. However the electrons, being much lighter than the protons, will be accelerated much more. This relative acceleration leads to an electric field, E = -∇p_e/(e n_e), which couples the positive and negative charges back together so that they move jointly; it is obtained by equating the electron pressure gradient -∇p_e with the electric force -e n_e E. Here n_e, p_e = n_e kT and T are respectively the number density, pressure and temperature of the electron fluid, and we have assumed that the protons, being much more massive, do not move. If this thermally generated electric field has a curl then, from Faraday's law, magnetic fields can grow from zero. Adding this electric field to Ohm's law and taking the curl gives a modified induction equation,

∂B/∂t = ∇ × (V × B) + η ∇^2 B + (c/(e n_e^2)) ∇p_e × ∇n_e .   (6)

We see that Eq. (6) now contains a source term such that magnetic fields can be generated from initially zero fields. This source is nonzero if the density and temperature gradients, ∇n_e and ∇T, are not parallel to each other, and the resulting battery effect is known as the Biermann battery. It was first proposed as a mechanism for the generation of stellar magnetic fields [58,59], but has subsequently found wide application in the cosmological context as well [60,61]. For example, during the reionization of the universe by starbursting galaxies and quasars, the temperature gradient is normal to the ionization front. However, density gradients are determined by the arbitrarily laid down density fluctuations, which will later collapse to form galaxies and clusters, and which need not be correlated with the sources of the ionizing photons. The source term in Eq. (6) is then nonzero, and magnetic fields coherent on the scale of the density fluctuations, that is on galactic and larger scales, can grow. These will be amplified further during the collapse to form galaxies, and one expects a seed magnetic field in galaxies of B ≈ 10^-21 G [60]. Direct numerical simulations of cosmic reionization [62] have confirmed such a scenario, and find a magnetic field ordered on Mpc scales, with a mass weighted average B ∼ 10^-19 G at a redshift of about 5. The Biermann battery can also operate in the oblique cosmological shocks which arise during the formation of galaxies and large scale structures [61]. For partially ionized hydrogen, with uniform ionization fraction χ and all species having the same temperature, p_e = χp/(1+χ) and n_e = χρ/m_p, where p is the total fluid pressure.
Defining Ω_B = eB/(m_p c), Eq. (6) reduces to the same form as the induction equation, but now for Ω_B, with a source term (∇p × ∇ρ)/(ρ^2 (1+χ)). This source term, without the extra factor -(1+χ)^{-1}, corresponds to the baroclinic term in the equation for the vorticity Ω = ∇ × V, when the Lorentz force is neglected. Thus, provided viscosity and resistivity are neglected, Ω_B (1+χ) and -Ω satisfy the same equation, and if they were both zero initially, they will remain equal at all later times, i.e.

Ω_B = eB/(m_p c) = -Ω/(1+χ) .   (7)

Taking for Ω the vorticity associated with spiral galaxies, Eq. (7) then gives an estimate of the seed field such a battery can provide. Direct numerical simulations were used by Kulsrud et al. [61] to calculate the vorticity built up in structure formation shocks, which using Eq. (7) translates into a seed magnetic field of B ∼ 10^-21 G in regions about to collapse into galaxies at redshift z ∼ 3.
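A quick numerical illustration of Eq. (7) is given below (ours; the vorticity values are assumptions, chosen to bracket present-day galactic rotation and the weaker vorticity built up in structure formation shocks).

```python
# Seed field from Eq. (7): |B| = m_p * c * |Omega| / (e * (1 + chi)),
# in Gaussian units.
m_p = 1.67e-24   # proton mass, g
c   = 3.0e10     # speed of light, cm/s
e   = 4.8e-10    # proton charge, esu
chi = 1.0        # assumed ionization fraction

def biermann_seed_G(omega):
    return m_p * c * omega / (e * (1.0 + chi))

print(f"Omega ~ 1e-15 /s -> B ~ {biermann_seed_G(1e-15):.1e} G")  # galactic rotation
print(f"Omega ~ 1e-17 /s -> B ~ {biermann_seed_G(1e-17):.1e} G")  # shock vorticity
```

The second value is within a factor of a few of the B ∼ 10^-21 G quoted above for collapsing regions at z ∼ 3.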
Battery due to interaction with radiation

The difference between the masses of positive and negative charges can also lead to battery effects when an ionized plasma interacts with radiation. Indeed, electrons are more strongly coupled to radiation than protons, because the Thomson cross section for scattering off photons is inversely proportional to the square of the mass of the charged particle. Due to this photon-electron/proton scattering asymmetry, during recombination both vorticity and magnetic fields are generated at second order in perturbations. The strength of these seed fields is again tiny, from B ∼ 10^-30 G on Mpc scales up to B ∼ 10^-21 G at parsec scales [63][64][65][66]. Moreover, during the reionization of the universe, the radiative force from a source is larger on the electrons than on the protons, accelerating the electrons more and again generating an electric field which couples them back together. Due to the inhomogeneity of the intergalactic medium, this electric field has a curl, leading to magnetic field generation. This field is estimated to be between 10^-23 and 10^-19 G on coherence scales between hundreds of kpc and a pc respectively [67,68].

Plasma effects

During cosmological structure formation, the infall kinetic energy of the intergalactic medium (IGM) is expected to be converted into thermal energy through many shocks. The densities in the IGM are small, with n ∼ 2 × 10^-7 (1+z)^3 cm^-3, and therefore Coulomb collisions may not be strong enough to form these shocks; one may need other means of providing particle collisions. Plasma instabilities like the Weibel instability [69,70], which occur when there are counter-streaming plasma motions, generate small scale magnetic fields, which then effectively scatter particles. The idea that these fields provide seed magnetic fields has been explored by several authors [71,72]. Such a plasma instability has typical growth times of order c/(v ω_i), where v is the upstream velocity and ω_i = (4π n_i e^2/m_i)^{1/2} the plasma frequency; here 'i' can represent electrons or ions, with coherence scales corresponding to the species skin depth c/ω_i. These timescales, of order 10^3 v_2^-1 n_{-5}^{-1/2} s even for ions, are so small compared to astrophysical timescales that the instability would rapidly saturate. Here v_2 = v/(10^2 km s^-1), of order 1 - 3, is a typical inflow velocity for galaxies, which will have velocity dispersions of the same order, and n_{-5} = n/(10^-5 cm^-3) is the IGM density at redshifts z ∼ 4 - 5, given its (1+z)^3 scaling. Particle-in-cell simulations show that saturation occurs when the field grows to a small fraction ε_B of the kinetic energy density of the inflowing plasma. Then the gyro radius of the ions becomes smaller than the skin depth, whereby particles get strongly deflected and no longer counter-stream. The resulting magnetic fields at saturation can be strong, with B ∼ 3 × 10^-9 G (ε_B/10^-3)^{1/2} v_2 n_{-5}^{1/2}, but correlated on the very small ion skin depth ∼ 10^-8 n_{-5}^{-1/2} pc [73,74]. The long-time survival of this shock-generated field is unclear. Moreover, averaged over galactic scales such fields can only provide a tiny seed field for the dynamo (see Section 3.5).

Seed fields from stars and active galactic nuclei (AGN)

A seed magnetic field for the galaxy can also be provided by the ejection of stronger magnetic fields from stars and active galaxies, which have much shorter dynamical time scales and form before the bulk of the galactic interstellar medium gets magnetized [75][76][77][78]. These processes can give fairly large seed magnetic fields, of order a nano Gauss or larger. Of course, in this case the dynamo has to operate efficiently in stars and AGN, and faces the challenges that we describe later in this review. There is also the issue of how the magnetized plasma ejected from these objects mixes with the originally unmagnetized interstellar medium in a protogalaxy, and how this affects its strength and coherence scale.

Large-scale seed magnetic field from small scale fields

In several contexts that we have discussed, the generated seed magnetic field, even if strong, has a much smaller coherence scale than that of galaxies. In order to estimate the seed this provides for the galactic dynamo, one has to determine the long wavelength (small wavenumber k) tail of the corresponding 1-dimensional magnetic power spectrum M(k). For hydrodynamic turbulence, both a 1-dimensional velocity power spectrum E(k) ∝ k^2 (called the Saffman spectrum) and E(k) ∝ k^4 are possible [37]. It has been argued that M(k) ∝ k^4 in the magnetic case, using ∇·B = 0 and the analyticity of the power spectrum [79]. To elucidate the conditions required for this, we proceed as follows. The magnetic correlation function in Fourier space, M̂_ij(k), is the Fourier transform of the real space correlation function. Contracting the indices and assuming statistical isotropy and homogeneity, the 3-D magnetic spectrum M_3d(k) is given by

M_3d(k) = (1/2π^2) ∫_0^∞ dr r^2 w(r) sin(kr)/(kr) = (1/2π^2) ∫_0^∞ dr [d(r^3 M_L)/dr] [1 - (kr)^2/6 + ...] .   (8)

Here we have used the fact that for a statistically isotropic and homogeneous magnetic field w(r) = ⟨b(x)·b(x+r)⟩ = (1/r^2) d(r^3 M_L)/dr, where M_L(r) is the longitudinal correlation function [80,81]. The last step in Eq. (8) has made a small-kr expansion of sin(kr). The first term in the expansion in Eq. (8) is r^3 M_L evaluated at infinity, and goes to zero if M_L falls off faster than 1/r^3. Then the next term dominates at small k, provided the resulting integral is nonzero, which it would be in general for M_L(r) falling off sufficiently rapidly. Then M_3d(k) ∝ k^2, and so the 1-d spectrum M(k) ∝ k^2 M_3d(k) ∝ k^4. On the other hand, if the magnetic field correlator M_L(r) falls off as 1/r^3 due to the persistence of long-range correlations, then the first term in the integral does not vanish and instead goes to a constant. Then M_3d(k) → constant as kr → 0, and hence M(k) ∝ k^2. The first case would hold, for example, when the field is in randomly oriented magnetic field rings (or flux tubes), while the latter case obtains if one generates instead randomly oriented current rings. So both cases, M(k) ∝ k^2 (random current rings) and M(k) ∝ k^4 (random B flux rings), would seem possible depending on the origin of the field.
For a spectrum M(k) ∝ k^n, the power per logarithmic interval in k-space scales as kM(k) ∝ k^{n+1}, and hence the magnetic field smoothed over a volume of size l = 1/k scales as B_l ∝ l^{-(n+1)/2}. Suppose the field is coherent on a small scale l, with strength B_l on this scale, and the spectrum goes as M(k) ∝ k^n for kl ≪ 1. An estimate of the field on a large scale L ≫ l is then B_L ∼ B_l (l/L)^{(n+1)/2}. For example, in the case of the Weibel-instability-generated field of Section 3.3, with B_l ∼ 3 × 10^-9 G at l ∼ 3 × 10^10 cm, taking L = 1 kpc we get B_L ∼ 10^-25 G even in the n = 2 case. On the other hand, if supernovae seed the interstellar medium with fields of B_l ∼ 10^-6 G on scales of 100 pc, then on a larger galactic scale of say L = 3 kpc the seed field would be B_L ∼ 6 × 10^-9 G for n = 2 and B_L ∼ 2 × 10^-10 G for n = 4, which are fairly strong seed magnetic fields for a dynamo to act on.
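Both examples follow from the smoothing relation B_L ∼ B_l (l/L)^{(n+1)/2}; a minimal sketch (ours) reproducing the quoted numbers:

```python
# Large-scale tail of a small-scale seed field: B_L = B_l * (l/L)**((n+1)/2)
pc = 3.086e18  # cm

def B_large(B_l, l_cm, L_cm, n):
    return B_l * (l_cm / L_cm) ** ((n + 1) / 2.0)

# Weibel-generated field: B_l ~ 3e-9 G on l ~ 3e10 cm, smoothed to L = 1 kpc
print(f"Weibel, n=2: {B_large(3e-9, 3e10, 1e3 * pc, 2):.0e} G")
# Supernova-seeded field: B_l ~ 1e-6 G on 100 pc, smoothed to L = 3 kpc
print(f"SNe,    n=2: {B_large(1e-6, 100 * pc, 3e3 * pc, 2):.0e} G")
print(f"SNe,    n=4: {B_large(1e-6, 100 * pc, 3e3 * pc, 4):.0e} G")
```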
Turbulent dynamos and their challenges

Turbulence, or random motions, prevalent in all systems from stars to galaxy clusters, is thought to be crucial for the amplification of seed magnetic fields to the observed levels, a process called the "turbulent dynamo". Turbulence is driven mostly by supernovae in the galactic interstellar medium [82], although during the formation of a galaxy by collapse from the IGM, accretion shocks and flows along cold streams could also be important [83][84][85][86][87]. In disk galaxies, shear due to differential rotation also plays an important role in the dynamo amplification process. Turbulent dynamos are conveniently divided into two classes, the fluctuation (or small-scale) and mean-field (or large-scale) dynamos, depending respectively on whether the generated field is ordered on scales smaller or larger than the scale of the turbulent motions. Here we briefly outline their role in galactic magnetism, focusing on the challenges that they present. Much of our current understanding of these dynamos comes from their analysis using statistical methods or direct numerical simulations (DNS). We shall focus more on some conceptual issues here.

Fluctuation or small-scale dynamos

The fluctuation dynamo is generic to any sufficiently highly conducting plasma hosting random motions, perhaps due to turbulence. First, in such a plasma, the magnetic flux through any area moving with the fluid is conserved. Moreover, in any turbulent flow, fluid parcels random walk away from each other and so magnetic field lines get extended. Consider a flux tube with plasma of density ρ, magnetic field B and cross section A, linking fluid elements separated by a length l. Flux conservation implies BA = constant, and mass conservation in the flux tube gives ρAl = constant, which together imply B/ρ ∝ l. Thus if l increases due to random stretching and ρ is roughly constant, then B increases. This comes at the cost of A ∝ 1/(ρl) ∝ 1/B decreasing, the field being concentrated on smaller and smaller scales, until resistivity becomes important at a scale l_B. An estimate of this resistive scale, l_B ∼ l R_m^{-1/2}, is obtained by balancing the decay rate due to resistive diffusion, η/l_B^2, with the growth rate due to random stretching, v/l. Here v and l are respectively the velocity and coherence scale of the turbulent eddies. As R_m is typically very large in astrophysical systems, the resistive scale l_B ≪ l. What happens when resistive dissipation balances random stretching can only be addressed by a quantitative calculation.

The first such calculation was due to Kazantsev [88], who considered an idealized random flow which is δ-function correlated in time. For such a flow one can write an exact evolution equation for the two-point magnetic correlator, which has exponentially growing solutions, i.e. is a dynamo, when R_m exceeds a modest critical value R_c ∼ 100. The growth rate is a fraction of the eddy turnover rate v/l, and at this kinematic stage the field is shown to be concentrated on the scale l_B. From the idealized Kazantsev model, it also turns out that R_c is larger and the growth slower for compressible flows compared to the incompressible case [89][90][91]. Moreover, for Kolmogorov turbulence, where the flow is multi-scale, ranging from the outer scale to the small scales where viscosity dominates, the fastest amplification is by the smallest supercritical eddy motions. In the galactic ISM, the kinematic viscosity ν is typically much larger than the resistivity η, and growth would then be expected to occur first due to dynamo action by the smaller, viscous scale eddies [92,93]. For interstellar turbulence with an outer scale l ∼ 100 pc and velocities v ∼ 10 km s^-1, we expect R_m ≫ R_c and a growth time scale l/v ∼ 10^7 yr even for the largest eddies. This time scale is so much smaller than the ages of even young high redshift galaxies, say a few times 10^9 yr, that the fluctuation dynamo is expected to rapidly grow even weak seed magnetic fields to micro Gauss levels. Moreover, as smaller eddies grow the field faster, significant amplification occurs even earlier. However, as l_B ≪ l, the field in the growing phase is extremely intermittent and concentrated on the small resistive scales. The big challenge is then whether these fields can become coherent enough to explain, for example, observations of the Faraday rotation inferred in young galaxies. This growth of random magnetic fields due to the fluctuation dynamo has been verified by direct numerical simulations of driven turbulence, albeit in the idealized setting of isothermal plasma, for both subsonic and supersonic flows [94][95][96][97][98][99][100][101]. Such simulations, however, have modest values of R_m/R_c ∼ 10 - 20. The basic expectations of the idealized Kazantsev model during the kinematic phase are qualitatively verified: the field grows exponentially and is concentrated initially on the resistive scales. It is also found that the small-scale dynamo is less efficient for compressible compared to solenoidal forcing, as the former generates less vorticity [100,102,103]. Importantly, the DNS can now also follow the field evolution into the nonlinear regime, when Lorentz forces act to saturate the dynamo. By the time the dynamo saturates, the coherence length of the field increases to a fraction of order 1/3 - 1/4 of the scale of the driving, at least when the magnetic Prandtl number Pr_m = ν/η is of order unity [94,[96][97][98],104]. These DNS have resolutions from 512^3 up to 2048^3. More modest resolution (256^3) DNS with large Pr_m but small fluid Reynolds number Re found the magnetic energy spectrum to be still peaked at the resistive scale l_B even at saturation [95]. It is difficult to directly simulate the case expected in the interstellar medium, with both a large R_m/R_c and a large Re, as one then has to resolve both of the widely separated resistive and viscous dissipation scales. Clearly the saturated state of the fluctuation dynamo deserves further study, especially in this highly turbulent, Pr_m ≫ 1 regime.
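For reference, the interstellar numbers used above can be collected in a short back-of-envelope sketch (ours; the parameter values are those quoted in the text):

```python
# Fluctuation-dynamo numbers for the galactic ISM: the magnetic Reynolds
# number R_m = v*l/eta, the eddy turnover (growth) time l/v, and the
# resistive scale l_B ~ l * R_m**(-1/2).
pc, yr = 3.086e18, 3.156e7   # cm, s

v   = 10e5      # outer-scale velocity: 10 km/s, in cm/s
l   = 100 * pc  # outer scale of the turbulence
eta = 1e7       # Spitzer resistivity at T ~ 1e4 K, cm^2/s

R_m = v * l / eta
print(f"R_m  ~ {R_m:.1e}")                     # ~3e19, vastly supercritical
print(f"l/v  ~ {l / v / yr:.1e} yr")           # ~1e7 yr growth timescale
print(f"l_B  ~ {l * R_m**-0.5 / pc:.1e} pc")   # tiny resistive scale
```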
We have also directly determined Faraday rotation measures (RMs) in simulations of the fluctuation dynamo with various values of R_m and the fluid Reynolds number Re, and up to an rms Mach number M = 2.4 [97,101,105]. At dynamo saturation, for a range of parameters, we find an rms RM contribution which is about half the value expected if the field were coherent on the turbulent forcing scale. Interestingly, in the subsonic and transonic cases, the general sea of volume-filling fields dominates in determining the strength of the RM. The rarer, strong field structures contribute only about 10 - 20% of the RM signal, indicating that the coherence of the generated fields is perhaps associated with the more typical, volume-filling magnetic field regions. However, when the turbulence is supersonic, significant contributions to the RM also come from strong field regions as well as from moderately overdense regions. How exactly the field orders itself during saturation is at present an open problem.

One may wonder if magnetic reconnection is important for dynamo action. We note that the reconnection speed scales with the Alfvén speed, and hence with the field strength, even when reconnection is efficient; for the weak fields of the kinematic stage the associated reconnection time would thus be too long compared to the dynamo growth time to be relevant. Reconnection could, however, play a role once the field becomes dynamically important. Some interesting aspects of a reconnecting flux rope dynamo have been explored in [106]. In nearly collisionless plasmas like those of galaxy clusters, plasma effects could set the transport properties even for weak fields, and small-scale dynamo action in such a context is just beginning to be explored [107,108].

Simulations of galaxy formation from cosmological initial conditions have also shown evidence for amplification by the fluctuation dynamo, over and above the amplification by flux freezing during the compressive collapse to form the galaxy [109][110][111][112][113]. One of the main limitations of such cosmological simulations is resolution: it is very difficult to capture both the galactic scale and the dissipative scales, and hence to predict correctly the rate of growth of magnetic energy and the coherence scale of the saturated field. Intriguingly, some of the direct simulations of SNe-driven turbulence which allow for a multiphase medium do not yet show a strong fluctuation dynamo [114][115][116], although they do show large-scale dynamo action (except for [117]).

All in all, one expects the energy of random, intermittent magnetic fields to grow rapidly in the turbulent ISM of galaxies. This turbulence could be driven by supernovae in disk galaxies. Galactic disks would then host significant fields, and a line of sight passing through a disk could have a significant RM [97,101]. This can partly explain the statistical detection of excess RM in MgII absorption systems [4,5], which are thought to be associated with young galaxy disks at redshifts z ∼ 1. However, the abundance of these systems gives evidence that the MgII absorption arises not only along lines of sight through the disk, but also in extended gaseous halos [118]. Thus one would need the halo also to be magnetized and to produce a significant RM. This could occur through outflows from the disk, which also carry cold magnetized "clouds". More work is required to firm up such a speculation.
Mean-field or large-scale dynamos and galactic magnetism

Remarkably, when the turbulence is helical, magnetic fields on scales larger than the coherence scale of the turbulence can be amplified. In any rotating, stratified system like the ISM of a disk galaxy, random motions driven by supernovae become helical due to the Coriolis force, with one sign of helicity in the northern hemisphere and the opposite sign in the southern hemisphere. Such helical turbulent motions of the plasma draw out toroidal fields in the galaxy into twisted loops, generating poloidal components (the α-effect). Differential rotation of the disk shears the radial component of the poloidal field to generate back a toroidal component (the ω-effect). These two can combine to exponentially amplify the large-scale field, provided the generation terms can overcome an extra resistivity due to the turbulence. This is quantified by a dimensionless dynamo number being supercritical. Turbulent resistivity also allows the mean-field flux to be changed.

Quantitatively, in mean-field dynamo theory, the total magnetic field is split as B = B̄ + b, the sum of a mean (or large-scale) field B̄ and a fluctuating (or small-scale) field b. A similar split of the velocity field gives V = V̄ + v. The mean is defined by some form of averaging on scales larger than the turbulence coherence scale, ideally but not necessarily satisfying the Reynolds rules for such averaging. These rules are [119]: the averaging is linear, averaging twice gives the same result (so that the mean of B̄ is B̄ itself and the mean of b vanishes), the mean of a product of a mean field with a fluctuation vanishes, and averaging commutes with both time and space derivatives. The induction equation Eq. (1) then averages to give

∂B̄/∂t = ∇ × (V̄ × B̄ + E) + η ∇^2 B̄ .   (9)

Here a new term quadratic in the fluctuating fields arises, the mean electromotive force (EMF), E = ⟨v × b⟩. To express this in terms of the mean fields themselves presents a closure problem, even when Lorentz forces are not yet important. The simplest such closure, valid when the correlation time τ is small compared to l/v, gives E = α_K B̄ - η_t ∇ × B̄, where the turbulent motions are also assumed to be isotropic. Here α_K = -(1/3) τ ⟨v·ω⟩, with ω = ∇ × v, depends on the kinetic helicity of the turbulence and is the α-effect mentioned above, while η_t = (1/3) τ ⟨v^2⟩ is a turbulent diffusivity and depends on the kinetic energy of the turbulence. In disk galaxies we also have a mean velocity V̄ = rΩ(r) φ̂, corresponding to differential rotation with angular frequency Ω along the toroidal direction φ̂. The mean-field dynamo equation (9), with this form of E and V̄, has exponentially growing solutions provided a dimensionless dynamo number has magnitude D = |α_0 S h^3/η_t^2| > D_crit ∼ 6 [75,120,121]. Here h is the disk scale height, S = r dΩ/dr the galactic shear and α_0 a typical value of α, and we have defined D to be positive. This condition can be satisfied in disk galaxies, and the mean field then typically grows on the rotation time scale, ∼ 10^8 - 10^9 yr. A detailed account of mean-field theory predictions for the galactic dynamo and their comparison to observations is given by other authors in this volume. We focus here on the challenges for this general paradigm, in our view.
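To illustrate when the dynamo number is supercritical, the sketch below (ours; every parameter value is an assumption typical of the literature) evaluates D for a fiducial disk galaxy, using the standard estimates η_t ≃ lv/3 and α_0 ≃ l^2 Ω/h.

```python
# Dynamo number D = |alpha_0 * S * h**3 / eta_t**2| for a fiducial disk.
pc = 3.086e18  # cm

l, v  = 100 * pc, 10e5   # turbulence scale and speed (cm, cm/s)
h     = 400 * pc         # disk scale height (assumed)
Omega = 1e-15            # rotation frequency, 1/s (assumed)
S     = -Omega           # flat rotation curve: S = r dOmega/dr = -Omega

eta_t   = l * v / 3.0        # turbulent diffusivity, ~1e26 cm^2/s
alpha_0 = l**2 * Omega / h   # standard estimate of the alpha effect

D = abs(alpha_0 * S * h**3 / eta_t**2)
print(f"eta_t   ~ {eta_t:.1e} cm^2/s")
print(f"alpha_0 ~ {alpha_0 / 1e5:.2f} km/s")
print(f"D       ~ {D:.1f} (supercritical if > D_crit ~ 6)")
```

With these fiducial numbers D comes out above D_crit ∼ 6, consistent with mean-field growth on the rotation timescale.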
Magnetic helicity conservation

The first potential difficulty, which has already received considerable attention, arises from the conservation of magnetic helicity in the highly conducting galactic plasma. Magnetic helicity is usually defined as H = ∫_V A·B dV over a 'closed' volume V, with A the vector potential satisfying ∇ × A = B. It is invariant under a gauge transformation A' = A + ∇Λ only if the normal component of the field on the boundary of the volume V vanishes. Magnetic helicity measures the linkages between field lines [122,123]; it is an ideal invariant and is better conserved than the total energy in many contexts, even when resistivity is included. The mean-field dynamo works by generating poloidal from toroidal fields and vice versa, and thus automatically generates links between these components, i.e. a large-scale magnetic helicity. To conserve the total magnetic helicity, oppositely signed helicity must then be transferred to the small-scale field, which, as we shall see, is done by the turbulent emf E. In fact, when helical motions writhe the toroidal field to generate a poloidal field, an oppositely signed twist must develop on smaller scales to conserve magnetic helicity. For the same magnitude of magnetic helicity on small and large scales, the Lorentz force (J × B)/c is generally stronger on small scales (since the current density J involves two more spatial derivatives than the vector potential which determines the magnetic helicity). Thus the Lorentz forces associated with this twist helicity can unwind the field while the turbulent motions writhe it. According to closure models like the eddy-damped quasi-normal Markovian (EDQNM) approximation [124] or the τ approximation [81,125,126], these Lorentz forces lead to an additional effective magnetic α-effect, α_M = (1/3) τ ⟨j·b⟩/(4πρ), with the total α = α_K + α_M. The generated magnetic α_M opposes the kinetic α_K produced by the helical turbulence and quenches the α-effect and the dynamo, making it subcritical, much before the large-scale field grows strong enough to itself affect the turbulence.

To avoid such quenching, small-scale helicity must be shed from the galactic interstellar medium. In principle resistivity can dissipate small-scale magnetic helicity, but this takes a time longer than the age of the universe! For large-scale dynamos to work, small-scale helicity must be lost more rapidly, through magnetic helicity fluxes [81,123,127]. Magnetic helicity being a topological quantity, one may wonder how to define its density and its flux! A gauge-invariant definition of the helicity density was given by Subramanian and Brandenburg [128] using the Gauss linking formula for the magnetic field [122,129]. They proposed that the magnetic helicity density h of a random magnetic field b is the density of correlated links of the magnetic field [128]. This definition by construction involves only the random field b, works if this field has a small correlation scale compared to the system scale, and is closest to the helicity density defined using the vector potential in the Coulomb gauge. An evolution equation can then be derived for this helicity density, which now also involves a helicity flux density F [128],

∂h/∂t = -2 E·B̄ - 2η ⟨(∇ × b)·b⟩ - ∇·F .   (10)

This equation involves the transfer of magnetic helicity from large to small scales by the turbulent emf along the mean field (the -2 E·B̄ term), the dissipation by resistivity (the -2η ⟨(∇ × b)·b⟩ term) and the spatial transport by the helicity flux (the ∇·F term). In the absence of such a flux, in the steady state we see that E·B̄ = -η ⟨(∇ × b)·b⟩, and so the emf along the field, which is important for the dynamo, is resistively suppressed for R_m ≫ 1. Even in the time dependent case, as B̄ builds up, h also grows and produces an α_M which cancels α_K and suppresses the net α-effect. In the presence of helicity fluxes, however, h can be transported out of the system, allowing mean-field dynamos to work efficiently [127,130,131].
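The effect of such helicity losses can be seen in a zero-dimensional toy model (ours, schematic and with illustrative parameters, in the spirit of standard dynamical quenching closures): a one-mode α^2 dynamo evolved together with Eq. (10), with the flux modelled as a simple loss term for α_M.

```python
# Toy alpha^2 dynamo with dynamical quenching. Units: alpha in eta_t*k_f,
# time in 1/(eta_t*k_f**2), mean field b in units of the equipartition
# field; kappa = k_m/k_f is the mean-field to forcing wavenumber ratio.

def evolve(Rm=300.0, kappa=0.25, flux=0.0, t_end=2000.0, dt=0.01):
    b, aM, aK, t = 1e-4, 0.0, 1.0, 0.0   # weak seed field, kinetic alpha aK
    while t < t_end:
        a = aK + aM                       # total alpha, quenched as aM < 0 grows
        # one-mode mean-field growth: alpha effect vs total diffusion
        db = (kappa * a - kappa**2 * (1.0 + 1.0 / Rm)) * b
        # Eq. (10): -2 E.B transfer, resistive term, and a flux-like loss
        daM = -2.0 * ((a - kappa) * b**2 + aM * (1.0 / Rm + flux))
        b, aM, t = b + db * dt, aM + daM * dt, t + dt
    return b

print(f"no helicity flux : B/B_eq = {evolve(flux=0.0):.2f}")
print(f"with a small flux: B/B_eq = {evolve(flux=0.01):.2f}")
```

In this cartoon the no-flux case crawls towards its resistively limited saturation on the long timescale ∼ R_m, while even a small helicity loss lets the mean field saturate faster and at a higher strength.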
Turning to specific forms of the flux, one such flux is simply the advection of the gas and its magnetic field out of the disk, F = h V̄ [128,130]. Several other types of helicity fluxes have been calculated, like the Vishniac-Cho flux, which depends on the shear and the mean field [81,132], and a flux involving an inhomogeneous α [133]. A diffusive flux F = -κ∇h was postulated by [134] and subsequently measured in DNS [135]. A new type of helicity flux, which depends purely on an inhomogeneous random magnetic field and rotation or shear, has been worked out by Vishniac [136]; it could potentially be important for driving a large-scale dynamo purely from random fields in the galaxy, but has not yet been studied in detail. Both the diffusive flux and the later Vishniac flux have been derived from the irreducible triple correlator contribution to F by [137] using a simple τ-closure theory, but these authors also find several other terms which cannot be reduced to either of these forms. A detailed study of magnetic helicity fluxes remains one of the important challenges for the future.

As an interesting application of these ideas, Chamandy et al. [138] solved the mean-field dynamo equation incorporating both an advective flux and a diffusive flux in Eq. (10). Advection can be larger along the optical spiral, where star formation and galactic outflows are expected to be enhanced. The helicity fluxes allow the mean-field dynamo to survive, but the stronger outflow along the spiral arms leads to a relative suppression of mean field generation there, and hence to an interlaced pattern of magnetic and gaseous arms, as seen in the galaxy NGC 6946 [139]. Interestingly, a widespread magnetic spiral only results if the optical spiral is allowed to wind up; thus here we are constraining spiral structure theory using magnetic field observations [138,140]! In another direction, the cosmic evolution of large-scale magnetic fields during the hierarchical clustering in the universe that forms galaxies has also been extensively explored [141,142].

Mean-field dynamo in the presence of the fluctuation dynamo

We have discussed possibilities for both fluctuation and mean-field dynamos in the turbulent interstellar medium. However, random magnetic fields due to the fluctuation dynamo grow much faster, on a time scale of ∼ 10^7 yr, at least a factor of 10 faster than the mean field. Lorentz forces can then become important and saturate the field growth much before the mean field has grown significantly. Will these strong fluctuations then render mean-field theory invalid? And can the large-scale field then grow at all? Earlier work [93] suggested that perhaps the intermittency of the small-scale dynamo generated field at saturation still allows the Lorentz force to be subdominant in the bulk, and thus allows large-scale field growth. Bhat et al. [143] examined this issue using direct simulations of magnetic field amplification by fully helical turbulence in a periodic box, following up earlier work on the kinematic stage by [144]. Turbulence was forced at about 1/4 the scale of the box, so that in principle scales both smaller and larger than the forcing scale can grow. Initially all scales grow together, as a shape-invariant eigenfunction dominated by power on small scales. This behaviour is akin to what happens in fluctuation dynamos. But crucially, on the saturation of the small scales due to the Lorentz force, larger and larger scales continue to grow and come to dominate, due to mean-field dynamo action. Finally, system scale fields (here the scale of the box) develop, provided small-scale magnetic helicity can be efficiently removed, which in this simulation occurs through resistive dissipation.
Recent work by Bhat et al. [145] in fact now finds evidence for two stages of exponential growth: the sequential operation of the small-scale dynamo and, as it saturates, a quasi-kinematic large-scale dynamo, which is indeed exciting! This issue of how the small- and large-scale dynamos come to terms with each other deserves much more attention, including a better analytic understanding.

Final thoughts

We have traced briefly the generation of magnetic fields right from the early universe to their subsequent amplification by turbulent dynamos in the later universe. Several challenges remain to be addressed in each of the processes that were discussed. Apart from the issues already raised, early universe mechanisms need to be put in the context of particular particle physics models. As for dynamos, their saturation behaviour and how coherent the resulting fields become still raise intriguing questions. The observational future appears bright. A key objective of the Square Kilometre Array (SKA) is to elucidate the origin of cosmic magnetism. The determination of a large number of RMs and their modelling will likely yield rich dividends [146,147]. Of particular interest will be to probe magnetic fields in the high redshift universe, and the field in intergalactic filaments, which could reflect more pristine conditions. Surprisingly, γ-ray observations of TeV blazars have suggested lower limits at femto Gauss levels on the magnetic field in the IGM associated with large scale voids. Such weak magnetic fields are difficult to detect by other techniques, and so it would be worthwhile to continue such studies. Gravitational wave astronomy, especially the detection of a stochastic background, could also help to probe phase transitions and any associated magnetogenesis in the early universe. Clearly, the study of cosmic magnetism will continue to be fascinating.
Multivessel coronary artery disease, free fatty acids, oxidized LDL and its antibody in myocardial infarction

Background: Free fatty acids (FFA), oxidized low-density lipoprotein (LDL) and its antibodies, lipid profile markers, which are formed under oxidative stress, play an important role in atherosclerotic disease. The aim was to assess the levels of these markers in myocardial infarction (MI) patients depending on the extent of coronary artery disease (CAD).

Methods: ST-elevation MI patients with hemodynamically significant stenoses of ≥75% in one, two, three, or more coronary arteries were examined. The patients were divided into three groups according to the severity of coronary lesions: a ≥75% stenotic lesion in one coronary artery (group 1, n = 135), two coronary arteries (group 2, n = 115), or three or more coronary arteries (group 3, n = 150). The control group comprised healthy subjects (n = 33).

Results: FFA levels on day 1 from MI onset were higher in groups 1, 2, and 3 compared with controls. On day 1 from MI onset, oxidized LDL levels were significantly higher in groups 2 and 3 than in controls (both p = 0.001). Oxidized LDL levels were significantly higher in patients with multivessel CAD compared with those with single-vessel CAD on days 1 and 12. Antibody levels increased with the number of affected arteries.

Conclusion: High levels of FFA, oxidized LDL and its antibodies, lipid profile markers, and parameters of the pro-/antioxidant systems persist during the subacute phase of MI.

Introduction

Myocardial infarction (MI) in patients with coronary artery disease (CAD) of different severity remains the leading cause of cardiovascular death. Early MI diagnosis, assessment of CAD severity, and secondary event risk prediction are the most important factors for preventing mortality. A previous study showed that the incidence of significant cardiovascular events in multivessel CAD patients was 23.6% vs. 19.5% in patients with two-vessel disease and 14.5% in those with single-vessel disease [1]. The 5-year risk of death in MI patients with multivessel CAD is increased two-fold compared with healthy subjects [2].

Dyslipidemia, which has a significant impact on MI, is a well-established factor contributing to the risk of atherosclerosis. However, dyslipidemia does not explain all cases of acute coronary events. According to Ansell et al., 50% of all coronary events occur without a history of hypercholesterolemia [3]. In patients with normal high-density lipoprotein (HDL-C) levels, the number of coronary events is 30% less than in those with decreased low-density lipoprotein (LDL-C) levels [4]. Moreover, a significant number of coronary events occur in those with normal LDL-C levels [5]. All of these factors indicate that new markers of an adverse course of CAD, especially in the case of multivessel disease, are required.

Measuring blood levels of free fatty acids (FFAs) can have diagnostic value. FFAs carry out several important functions, including ATP production, and act as cell signalling mediators (activating various protein kinase C isoforms and initiating apoptosis), as ligands of transcription factors, and as basic components of biological membranes [6]. Some authors consider increased FFA levels to be the earliest predictor of ischemia and a more sensitive marker of the severity of ischemia than electrocardiographic studies [7].
The results of prospective and clinical trials show a strong correlation between increased plasma FFA levels, CAD, and the risk of sudden death [8]. Furthermore, FFAs are regarded as potential biochemical markers of postinfarct myocardial remodeling [9]. Laboratory monitoring of blood FFA levels in acute coronary events can therefore play an important role in choosing a treatment strategy and in risk stratification in this patient category.

Measuring oxidized low-density lipoprotein (oxidized LDL), which plays an important role in atherosclerotic plaque formation and destabilization, as well as in the activation of systemic inflammation and the development of acute coronary syndrome (ACS), can also have diagnostic value. The level of oxidized LDL is an independent predictor of MI. In a study of 3033 patients, the risk of MI in patients with increased oxidized LDL levels was increased two-fold [10]. In response to the production of oxidized LDL, which has immunogenic potential, antibodies and immune complexes are produced, which in turn can lead to further endothelial damage. Antibodies to oxidized LDL are thought to play a key role in regulating oxidized LDL levels. Several studies have shown protective properties of these antibodies, which may neutralize the pathogenic and immunogenic activity of oxidized LDL under physiological conditions in vivo and thereby reduce the probability of atherosclerosis development. In other studies, their pathogenic activity is emphasized. Elevated levels of autoantibodies to oxidized LDL may be regarded as a predictor of atherosclerosis and ACS [10,11]. Therefore, the purpose of this study was to assess the in-hospital levels of FFA, oxidized LDL and its antibodies in ST-elevation MI patients depending on the extent of CAD.

Study subjects and design

The presence and severity of CAD were assessed by means of coronary angiography within the first hours after hospital admission. Stenoses of ≥75% were considered hemodynamically significant. According to the severity of coronary lesions, the patients were divided into three groups. Group 1 consisted of 135 MI patients with a ≥75% stenotic lesion in one coronary artery. Group 2 included 115 MI patients with ≥75% stenoses in two coronary arteries. Group 3 consisted of 150 individuals with ≥75% stenoses in three or more coronary arteries.

The patients of the three groups were similar in sex and age (Table 1). However, patients with two- or three-vessel CAD (groups 2 and 3) had cardiovascular risk factors such as arterial hypertension, hypercholesterolemia and smoking more frequently than patients with one-vessel CAD (group 1, Table 1). Groups 2 and 3 patients also had a history of angina pectoris, previous MI, or previous strokes/transient ischemic attacks more frequently than group 1 patients. The distribution of patients with stage I or II chronic heart failure was similar across the groups. Stage III chronic heart failure was significantly more frequent in patients with single-vessel CAD than in patients with two- or three-vessel CAD (Table 1). Cardiogenic shock was diagnosed in the groups with multivessel CAD. In all of the groups, anterior Q-wave MI was predominant; however, in patients with single-vessel and three-vessel CAD this type of infarction was observed more often than in those with two-vessel CAD. Posterior MI was diagnosed significantly more often in patients with two, three, or more affected arteries.
With regard to MI complications, patients with single-vessel and three-vessel CAD had in-hospital arrhythmias more frequently, and those with three or more affected arteries had early postinfarct angina and recurrent MI more frequently during their hospital stay. In-hospital treatment was administered according to the 2007 National Society of Cardiology Guidelines on acute ST-elevation MI diagnosis and treatment (Table 2). Fifty-four (49.1%) patients underwent PCI for an infarct-related artery; if PCI was contraindicated or technically unfeasible, systemic thrombolysis with streptokinase (1.5 × 10^6 IU) was given, and seven (6.4%) patients had no reperfusion therapy. All of the patients were administered coronaroactive and antithrombotic therapy, including acetylsalicylic acid, clopidogrel, beta-blockers, and angiotensin-converting enzyme inhibitors, during the hospital stay, if not contraindicated. Antianginal drugs were administered according to the standard of care. After discharge, patients continued therapy with the main classes of anti-ischemic agents, and statins were taken by 88% of patients. Thirty-three age- and sex-matched subjects with no cardiovascular disease were included in the control group.

Lipid profile concentrations were measured at days 1 and 12 from MI onset (the latter at the end of the hospital stay) on a Konelab 30i biochemistry analyzer (Thermo Fisher Scientific Oy, Finland). Serum oxidized LDL levels, oxidized LDL antibodies, peroxide, protein thiol, C-peptide and insulin levels were measured by ELISA with Biomedica (Waterloo, NSW, Australia) and Diagnostic Systems Laboratories (Webster, TX, USA) kits. The study was carried out in compliance with the Helsinki Declaration, and its protocol was approved by the Ethical Committee of the Research Institute for Complex Issues of Cardiovascular Diseases under the Siberian Branch of the Russian Academy of Medical Sciences. All of the patients who participated in the study gave written informed consent.

Statistical analysis

The statistical analysis was performed using Statistica 6.1 (InstallShield Software Corporation, USA) and SPSS 10.0 for Windows (SPSS Inc., USA). The results are presented as the median and the 25% and 75% quartiles, Me (Q1; Q3). Nonparametric tests were used to analyze the data: the Mann-Whitney U test or the Kolmogorov-Smirnov method (for more than 50 cases in each group) for quantitative comparison of two independent groups, and the Kruskal-Wallis rank analysis of variance, followed by the Mann-Whitney test with Bonferroni correction, for comparison of three independent groups. The Spearman rank correlation coefficient was used to investigate relationships between variables. Stepwise logistic regression with odds ratios (ORs) and 95% confidence intervals (CIs) was used to determine the prognostic significance of the parameters for long-term prognosis. Cox regression was used to evaluate the risk of unfavorable events and to determine the impact of independent predictor variables. A value of p < 0.05 was considered indicative of statistical significance.
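As an illustration of the nonparametric workflow described above, here is a schematic sketch (ours, in Python with SciPy and on synthetic data; all group means and values are invented for demonstration) of the Kruskal-Wallis test, pairwise Mann-Whitney U tests with Bonferroni correction, and a Spearman correlation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical day-1 FFA levels for groups 1-3 (n = 135, 115, 150)
ffa = {g: rng.lognormal(mu, 0.4, n)
       for g, (mu, n) in enumerate([(-0.6, 135), (-0.45, 115), (-0.3, 150)],
                                   start=1)}

# Kruskal-Wallis rank analysis of variance across the three groups
H, p_kw = stats.kruskal(*ffa.values())
print(f"Kruskal-Wallis: H = {H:.1f}, p = {p_kw:.4f}")

# Pairwise Mann-Whitney U tests with Bonferroni correction (3 comparisons)
for a, b in [(1, 2), (1, 3), (2, 3)]:
    p = stats.mannwhitneyu(ffa[a], ffa[b]).pvalue
    print(f"group {a} vs {b}: adjusted p = {min(1.0, 3 * p):.4f}")

# Spearman rank correlation, e.g. FFA vs CK-MB activity (synthetic pairing)
ckmb = ffa[3] * rng.lognormal(0.0, 0.5, 150)
rho, p_rho = stats.spearmanr(ffa[3], ckmb)
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.3g}")
```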
Results

Cardiovascular risk factor analysis showed atherogenic dyslipidemia in all of the groups. At day 1 from MI onset, higher levels of TC, TAG, LDL, VLDL, and Apo B, a higher Apo B/Apo A1 ratio, and lower levels of antiatherogenic HDL and Apo A1 were observed in groups 1, 2 and 3 compared with the control group (Table 3).

Comparative analysis of the lipid profile in patients with different severities of CAD showed significant differences in HDL levels; HDL levels decreased with an increase in the number of affected arteries. HDL levels in groups 2 and 3 were 18% (p = 0.04) and 14% (p = 0.03) lower, respectively, than those in group 1. By day 12 of the hospital stay, there was no significant improvement in the parameters under study. Moreover, TAG and VLDL (the lipoproteins involved in TAG transport) levels in group 1 on day 12 were significantly higher than those on day 1. Atherogenic LDL levels were also significantly higher on day 12 in all of the groups compared with day 1 (by 23% [p = 0.04], 17% [p = 0.03], and 14% [p = 0.04], respectively). Additionally, by the end of the hospital stay, patients in groups 1, 2 and 3 had higher Apo B (LDL) levels compared with controls. At day 12, antiatherogenic Apo A1 levels were significantly higher in group 2, but did not reach those of the control group. Notably, there was a trend towards a decrease in atherogenic TC levels in all of the groups, which suggests impairment of the early stage of metabolic recovery because of the therapy.

In MI patients, a positive correlation was found between FFA levels at day 1 from MI onset and CK-MB activity, reflecting the size of the myocardial necrotic focus (r = 0.401, p = 0.02) (Figure 1). Additionally, positive correlations were observed between FFA levels and ESV (r = 0.47, p = 0.01), and between FFA levels and EDV (r = 0.53, p = 0.01), which indicates a strong association between increased FFA levels and postinfarct myocardial remodeling.

Oxidized LDL levels changed in all of the study groups. At day 1 from MI onset, oxidized LDL levels were significantly higher in groups 1, 2 and 3 than in the control group. Oxidized LDL levels were significantly higher in groups 2 (day 1: by 7%; p = 0.001) and 3 (day 1: by 87%; p = 0.001) compared with group 1 at days 1 and 12 (Table 4). Notably, the increase in oxidized LDL levels was more pronounced in group 3 (increased by 75%) than in groups 1 and 2 (p = 0.001). By day 12, oxidized LDL levels were significantly increased in groups 1, 2 and 3 compared with day 1. The levels of oxidized LDL were 26% and 113% higher in groups 2 and 3, respectively, compared with group 1 (p = 0.001). Group 3 patients had 69% (p = 0.001) higher oxidized LDL levels at day 12 than group 2 patients.

Antibody levels increased with the number of affected arteries. On day 1, antibody levels were 17% (p = 0.003) and 70% (p = 0.002) lower in two-vessel and multivessel CAD patients, respectively, than in single-vessel CAD patients. More pronounced changes in antibody levels were observed in three-vessel CAD patients. On day 12, antibody levels in all of the groups started to increase and reached maximum levels in multivessel CAD patients.

On day 1, the level of thiol-containing compounds in single-vessel CAD patients was 25% (p = 0.004) higher than in controls. In two- and multivessel CAD patients, thiol-containing compound levels were 43% and 40% higher, respectively, than in controls (p = 0.007). At day 12, thiol-containing compound levels were similar to those on day 1 and were comparable among the groups with different disease severities.
Serum peroxide levels showed the same tendency as thiol-containing compounds: they were higher at days 1 and 12 in groups 1, 2 and 3 compared with controls, with no significant differences among the groups. We conducted a stepwise logistic regression analysis. All patients were divided into two groups: group 1, single-vessel disease; group 2, multivessel disease, i.e., two- and three-vessel disease (the former groups 2 and 3 combined). The logistic regression analysis identified the factors most closely related to multivessel disease (Table 5). The most statistically significant parameters were oxidized LDL, its antibodies, and FFA. An elevation of FFA increased the risk 2.6-fold on day 1 and 2.81-fold on day 12; elevated oxidized LDL increased the risk 2.9-fold on day 1 and 1.82-fold on day 12; and elevated antibodies to oxidized LDL increased the risk 1.83-fold on day 1 and 2.15-fold on day 12. Discussion The pathogenetic role of an imbalance in the lipid profile in cardiovascular diseases is well established. Large epidemiological studies have shown a correlation between blood levels of TC, LDL, and apoproteins and CAD mortality [10]. The dyslipidemia found in our study on day 1 can be defined as higher atherogenic and lower antiatherogenic cholesterol fractions compared with controls. In the subacute MI phase, there is a tendency towards a decrease in the markers of the cholesterol transport system, except for atherogenic LDL and Apo B; LDL and Apo B remain increased, pointing to the need for early lipid-lowering therapy in this category of patients. However, we did not find any association between the severity of dyslipidemia and the extent of CAD; therefore, there is a need to investigate other metabolic markers. Recent studies have focused on identifying new biochemical markers of clinical complications of atherosclerosis, especially FFA [10]. FFA oxidation provides up to 70% of the ATP for the heart, and the remaining energy needs are met by glucose oxidation. The intensity of FFA uptake by myocardial cells is determined by their plasma concentrations. Excess products and metabolites of FFA oxidation (acetyl-CoA, NADH, and FADH2) are natural inhibitors of the pyruvate dehydrogenase complex enzymes responsible for aerobic glucose oxidation, which leads to a decrease in myocardial glucose use [12]. In ischemia, the main metabolic pathway providing energy to cardiomyocytes is anaerobic glycolysis, because FFA oxidation requires more oxygen. In addition, altered myocardial use of FFAs due to myocardial ischemia and necrosis results in their accumulation in the blood [13]. The higher the plasma FFA levels, the lower their accumulation in tissue, and at times no glucose is delivered to cardiomyocytes; excess FFA levels slow down not only glucose delivery but also its use. This phenomenon is called the "glucose fatty acid cycle" (Randle) [14]. Therefore, high plasma levels of FFA decrease ATP production, which can lead to diastolic dysfunction, atrioventricular conduction delay, a decreased fibrillation threshold, and ultimately heart failure.
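Looping back to the stepwise logistic regression reported in the Results above, the sketch below shows how such odds ratios and 95% CIs are typically computed; statsmodels stands in for the original software, and the data, coefficients, and variable names are hypothetical.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 110
X = np.column_stack([
    rng.normal(0.8, 0.3, n),    # hypothetical FFA, day 1 (mmol/L)
    rng.normal(1.5, 0.6, n),    # hypothetical oxidized LDL, day 1
    rng.normal(300, 90, n),     # hypothetical antibodies to oxidized LDL, day 1
])
# Simulate multivessel disease with an assumed positive effect of each marker.
logits = -4.0 + 1.0 * X[:, 0] + 1.1 * X[:, 1] + 0.005 * X[:, 2]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(float)

model = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
odds_ratios = np.exp(model.params)       # OR = exp(beta)
conf_int = np.exp(model.conf_int())      # 95% CI on the OR scale
for name, or_, (lo, hi) in zip(["intercept", "FFA", "oxLDL", "oxLDL-Ab"],
                               odds_ratios, conf_int):
    print(f"{name}: OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")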
In the current study, analysis of FFA levels in MI patients showed that they differed between healthy subjects and patients with different numbers of affected coronary arteries, depending on the disease phase. Multivessel CAD in MI was associated with a more pronounced increase in FFA levels compared with controls, which can be regarded as a result of the disturbance of metabolism and energy homeostasis in myocardial cells in this category of patients. In addition, the FFA level can be regarded as a diagnostic indicator reflecting the intensity of this derangement. However, high FFA levels in large-focal MI reflect not only myocardial ischemia but also the depth of myocardial necrosis; the results of the correlation analysis support these suggestions. Interestingly, the decrease in FFA levels in the early recovery phase was accompanied by an increase in TAG levels, which was more pronounced in single-vessel CAD patients than in multivessel disease patients, as well as by an increase in VLDL levels in all of the groups. Previous studies have shown that this phenomenon is associated with the development of insulin resistance under the conditions of MI, which further decreases the activity of lipolytic enzymes and promotes insulin-induced activation of TAG-producing enzymes in hepatocytes and adipocytes, as well as the production of VLDL, which is responsible for TAG transport. Additionally, catecholamine levels in the recovery phase are decreased relative to the acute phase, and consequently their stimulatory effect on lipolytic enzymes is weaker [6]. An increase in blood FFA and Apo B levels is known to reflect an increase in LDL, which is most susceptible to oxidation by active oxygen metabolites [15,16]. Oxidized LDL is thought to reflect excess production and accumulation of LDL in the blood. Studies have shown that oxidized LDL is strongly correlated with blood levels of small, dense LDL particles [17]. Oxidized LDL is cytotoxic to endothelial cells and enhances the adhesion of neutrophils, which causes endothelial injury [18]. Additionally, oxidized LDL contributes to the activation of macrophages, the induction of cyclooxygenase expression, and the enhancement of proinflammatory prostaglandin (PGE2 and PGI2) and matrix metalloproteinase (MMP-2 and MMP-9) production, which are involved in the erosion of fibrous plaque caps. As a result of oxidized LDL uptake by macrophages, foam cells are formed, which further accumulate in the endothelium and contribute to atherosclerotic plaque formation [18]. In our study, oxidized LDL levels in the blood were high during the entire hospital stay. Notably, the groups of patients differed significantly in oxidized LDL levels: the greater the number of affected vessels, the higher the oxidized LDL levels. Interestingly, in the current study the levels of antibodies also depended on the severity of CAD, and the highest antibody levels were observed in multivessel CAD patients. The fact that antibodies to oxidized LDL are present in healthy people is well established, but their levels increase significantly if a patient has cardiovascular disease [11]. Currently, there is no consensus on the role of oxidized LDL antibodies. Some authors believe that the production of antibodies has a protective effect aimed at the elimination of atherogenic oxidized LDLs. However, at increased levels, antibodies form immune complexes with oxidized LDLs, which then bind to the intima and cause additional damage to the endothelium [17].
Regardless, most authors believe that antibodies to oxidized LDLs can potentially be atherogenic and that their measurement can be used as part of risk assessment [11,19]. An increase in oxidized LDL antibody levels might be pathological, damaging the vessel wall and eventually leading to atherosclerotic changes. One of the causes of the accumulation of oxidized LDL and its antibodies in the blood is thought to be an impairment of the pro- and antioxidant systems. Our study showed an increase in the levels of peroxides and thiol-containing compounds, which reflect the oxidative status and represent the pro- and antioxidant systems, respectively. However, the increase in these markers did not depend on the severity of CAD, which suggests that this process is non-specific and that an imbalance in the pro/antioxidant systems can be regarded as a general pathological response in MI. Conclusion Thus, FFA, oxidized LDL, and antibodies to oxidized LDL appear to be the most informative indices reflecting the severity of atherosclerotic coronary lesions; they retain high concentrations throughout the in-hospital period and, moreover, are most elevated in multivessel disease. Competing interests This manuscript has been read and approved by all the authors. This paper is unique and is not under consideration by any other journal and has not been published elsewhere. The authors of this paper report no conflicts of interest. The authors confirm that they have permission to reproduce any copyrighted material. Authors' contributions OG was principal investigator, study coordinator and investigator, participated in all stages of recruitment of the patients and in analysis of the data, and drafted and critically reviewed the manuscript. EU and VK were study coordinators and investigators, participated in all stages of recruitment of the patients and in analysis of the data, and drafted and critically reviewed the manuscript. EB, YD and AS were study investigators, participated in all stages of recruitment of patients, and critically reviewed the manuscript. OB was principal investigator. All other study investigators conducted the study and collected the data. All authors read and approved the final manuscript.
Detecting Bid-Rigging in Public Procurement. A Cluster Analysis Approach: This paper analyses public procurement auctions for snow removal contracts to find out whether bid-rigging occurred. Due to the limited participation in the auction processes, detection of anticompetitive agreements was possible. The econometric analysis used in our study supported the finding of a cartel agreement. Cluster analysis, statistical hypothesis testing, normality and symmetry tests, and nonparametric tests reveal two types of auctions: competitive and noncompetitive bids. The aim of this paper is to analyze the public procurement auctions with nonparametric statistical methods. Our findings are in line with the literature in the field. Introduction Bid rigging in public procurement is one of the main problematic aspects targeted by governments and local public authorities. Government procurement usually accounts for 10-15% of gross domestic product (GDP) in developing countries (Global Trade Negotiations 2006) and up to 25% of GDP in developed countries. Local public authorities outsource their utilitarian projects and services through open-bid auctions, direct negotiation, or competitive bidding (Menezes and Monteiro 2006). Due to public procurement legislation, in most cases public procurement procedures are conducted through competitive auctions (Bolotova et al. 2008). Collusion and corruption are the main factors of concern in auctions (Porter 2005; Pesendorfer 2000). Other studies (Harrington 2005; Carayannis and Popescu 2005; Porrini 2015; Liao et al. 2003) confirm that bidders concluded anticompetitive agreements to increase the bidding price and get better contracts from the public authorities. Public procurements organized by governmental bodies and local authorities' departments have many weaknesses, depicted in the economic literature as follows: 1. Excessive state intervention, discrimination in awarding contracts, and favoritism for local contractors create substantial problems in the awarding process (İriş and Santos-Pinto 2013; Clancy 2018); 2. the construction of the highways tendered in the public auctions, based on a multilinear regression model. McMillan (1991) describes the agreement between bidders of public works in Japan, while Howard and Kaserman (1989) evaluate the damages in bid-rigging cases with regression methods. Finally, Commander and Schankerman (1997) examined the scheme to submit identical bids within public procurement auctions. In general, the detection of rigged tenders is based on two main components: structural screening (which addresses issues related to the specific features of the market and the tender) and behavioral screening (which addresses issues related to the behavior of tenderers). Structural screening is a predominantly preventive measure, but it can be applied both ex ante and ex post to the auction. Thus, the analysis of the market structure and the specificity of certain forms of auction organization can provide important information and indications about those markets or those types of purchases that are prone, by their specificity, to facilitate the rigging of auctions. In developing the proposal for a set of indicators that could be considered for structural screening, both the indicators suggested by the OECD guide to combating auction fraud and those selected to build the aggregate competitive pressure index were considered.
It is appreciated that the latter is relevant in assessing the ability of a market to facilitate or not the occurrence of such anticompetitive behavior. The role of structural screening is to act as a filter, which can lead to the identification of those tenders in industries or areas more exposed to the conclusion of agreements between competitors. Behavioral screening is performed ex post, during or after the auction, and has the role of observing certain aspects that may suggest agreements between the participants in the auctions. Thus, certain behavioral patterns of bidders can be captured, related either to the way of bidding or to the subsequent development of contracts, which may be the result of agreements. Some of the indicators proposed by the OECD in the document entitled "Guidelines for detection of bid-rigging in public procurement" were used for the behavioral issues. Unlike structural screening, behavioral screening can lead to the identification of tenders with a high probability of fraud; the indicators considered may initially be grounds for initiating investigations and, subsequently, even evidence, as appropriate. Traditionally, public procurement had only to be economically efficient, with little regard for objectives other than the purely economic ones. In recent times, however, due to a more general ascension of the sustainable development concept, governments have been put in the position to "lead by example" and use their purchasing power to advance the goals of sustainable development; as a specific development, sustainable public procurement has been slowly creeping in. From "secondary considerations" in the 2004 Directives (Caranta et al. 2013), the need to include social and environmental considerations in public tendering procedures has led to the coining of new terms, much more powerful and all-encompassing, such as "horizontal policies" (Kunzlik and Arrowsmith 2009; Arrowsmith and Kunzlik 2009; Comba 2010), "sustainable procurement", or even "strategic procurement". We can state that, with the new 2014 Directives, the sustainability paradigm is almost taking over the realm of public procurement, and it is marketed as a major "selling point" of the new legislation. With strict reference to Romania, a special law is dedicated to the regulation of public procurement, which, among other things, compels the elaboration of a national plan in this respect, with concrete objectives, but also "the introduction in the process of public procurement of environmental protection criteria that would allow the improvement of services' quality and optimization of costs with public procurement in the short, medium and long term" (Government of Romania 2006). However, based on the studies undertaken, we can say that a rethinking of the legal norms pertaining to the public procurement field, in accordance with the best European practices, is meant to lead to the attainment of higher performance thresholds. The national legal framework related to the GPP system was extremely limited at that time, consisting of (i) the national strategy for sustainable development, horizons 2013-2020-2030 (Government of Romania 2008), and (ii) the emergency ordinance which regulated the awarding of public procurement contracts (Government of Romania 2014).
Before stating the expected changes, one should mention that the chapter "Environment" of the government program 2013-2016 (Parliament of Romania 2016) contains some commitments regarding public procurement, among them its encouragement by adopting a specific action plan aimed at "promoting the models of sustainable production and consumption", the development of clean and environmentally friendly technologies, and setting efficient criteria for public procurement, recalling the need to inform and raise the awareness of the authorities in the same field. This article presents a case study of an anticompetitive agreement put in place by the snow removal operators in Romania. We evaluate the bids during 2015-2017 and perform a statistical analysis of the starting prices and the bidding prices. The structure of the paper is the following. First, we introduce the statistical methods used in our analysis. Then we describe the bids and test the hypotheses. In the Results section, we split the data into two clusters from the perspective of competition. The Discussion section presents the conclusions of the analyses and underlines the limitations of the study and future research. Materials and Methods To analyze the existence of a cartel on a given market, an adequate quantitative method is the statistical analysis of the auction time series from that market (Tenorio 1993). In this case, it is necessary to use more variables, not just the starting and awarded prices; examples of such variables are sales capacities, transportation, experience, etc. The analytical method chosen to detect bid-rigging was the cluster method, mainly used when we have no a priori information about the existence of a cartel (Smith 1993; Imhof et al. 2018; Abrantes-Metz et al. 2012; Andrei and Busu 2014). The first step was to divide the bids into several categories which differ significantly from one another. Thus, it was assumed that there were only two ways to bid, collusive and competitive, which means we were likely to have two clusters that differ significantly from each other: one cluster with the auctions resulting from the struggle between firms to obtain the respective contract, and the other with auctions in which the participants had concluded agreements among themselves (so-called rigged bids). To perform this analysis, we started from the ratio between the bidding price and the starting price of an auction. Detecting possible anticompetitive agreements was achieved in the following steps: 1. Performing a cluster analysis. This was done by dividing the data into two groups according to the ratio defined above. The first cluster included those auctions with a high ratio (the sale price was close to the starting price), while the second cluster included auctions with a low ratio (the sale price was significantly lower than the starting price), likely to be the result of competitive behavior. 2. Applying nonparametric tests. These tests helped us check whether the two clusters were significantly different in terms of distribution from a statistical point of view. They also tested the statistical assumption that there were significant differences between the two bidding modalities; thus, if there were collusive behaviors, two different bidding models should result. In addition, a box-plot analysis was needed to examine the difference between the averages of the two data series. 3. Testing for normality and symmetry.
The normality and symmetry of the data series in the two groups were tested by the Kolmogorov-Smirnov test. This test compares the probability distribution of each cluster with a theoretical distribution; in general, the normal distribution is selected as the theoretical distribution. A priori, considering the construction of the two clusters, we expected the first cluster (for which the ratio was high) to have an asymmetric distribution, while the second cluster (for which the ratio was low) would follow a normal distribution. This result would confirm the hypothesis that the cluster with the low ratio corresponds to the situation of competition, while the cluster with the high ratio corresponds to a collusive agreement among the participants (because anticompetitive agreements maintain the bidding level extremely close to the starting price). 4. Testing statistical hypotheses. Statistical hypothesis tests were performed to see whether there were significant differences between different data categories; in our case, we tested whether there were significant differences between the bids in consecutive years. The National Company for Highways and National Roads of Romania (CNADNR) has the responsibilities of operation, permanent maintenance, modernization, and development of the national road and highway network on the territory of Romania. CNADNR has in its structure seven subunits without legal personality, called Regional Directorates of Roads and Bridges (DRDP), located in Bucharest, Craiova, Iasi, Cluj, Timisoara, Constanta, and Brasov. Hereinafter, we refer to them as SDN1, SDN2, etc. The authors focused on the snow removal sector in Romania, as it is one of the most important sectors, with a high percentage impact on national GDP. The data were collected from CNADNR's website and represent all snow removal bids between 2015 and 2017. During the analyzed period, i.e., the winter seasons 2015-2016 and 2016-2017, the modalities of awarding public procurement contracts used by the DRDPs were tendering procedures and negotiation without prior publication of a contract notice, and the award criterion was the lowest price. The DRDPs concluded framework agreements, subsequent agreements, service contracts, and additional acts whose object was routine winter maintenance services for the national roads under their management. From the information provided by CNADNR, a comparative analysis between contractual and actual prices per km of snow removed was performed (see Figure 1). It was found that there are significant differences between them, both within the same season and between the two seasons.
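As a compact illustration of the ratio screen described in steps 1-2 of the Methods, the sketch below reduces hypothetical (starting price, awarded price) pairs to their ratio and splits them into two clusters with k-means; scikit-learn stands in for the SPSS procedure, and every number is invented.

import numpy as np
from sklearn.cluster import KMeans

auctions = np.array([
    # (starting price, awarded price) per auction, hypothetical values
    (100, 99), (120, 118), (95, 94),     # awarded close to the starting price
    (110, 72), (105, 68), (130, 80),     # awarded well below the starting price
], dtype=float)
ratio = auctions[:, 1] / auctions[:, 0]  # awarded price / starting price

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(ratio.reshape(-1, 1))
for label in (0, 1):
    members = ratio[km.labels_ == label]
    print(f"cluster {label}: n = {members.size}, mean ratio = {members.mean():.2f}")
# The high-ratio cluster is the candidate collusive group; the low-ratio
# cluster is consistent with competitive bidding.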
The data revealed that the large differences between the values of the contracts for the current winter maintenance of national roads and motorways and the amounts actually paid arose because the value of the contracts is established assuming a winter season lasting approximately 5 months (15 October-31 March) and uniform weather conditions throughout this period. The statistical analysis was approached in two steps. In the first stage, the increases were analyzed in statistical terms, i.e., the percentage of the effective price paid per km and the differences between the contract price and the actual price paid by the authorities for each lot in the 2015-2017 period. The second step was an analysis of the percentage increase of the price actually paid per kilometer from year to year, using the cluster method to identify a discontinuity in the data on the procedures for awarding public procurement contracts. The results of the statistical analyses are presented in the Results section. Testing the Statistical Hypotheses The first test was to verify whether there are statistically significant differences between the prices awarded in 2015-2016 and those from 2016-2017, calculated as price/km. The statistical hypotheses were tested using Student's t-test, and the results are shown in Table 1. From this table, it can be noticed that there are significant differences between the bid data for 2015-2016 and those for 2016-2017 (t(42) = −4.67, p < 0.001).
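A minimal sketch of this two-sample comparison, with scipy in place of the original software and hypothetical per-km prices for the two seasons:

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
price_2015_16 = rng.normal(250, 40, 43)   # hypothetical price per km, season 1
price_2016_17 = rng.normal(310, 45, 43)   # hypothetical price per km, season 2

t, p = stats.ttest_ind(price_2015_16, price_2016_17)
print(f"Student's t-test: t = {t:.2f}, p = {p:.4f}")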
This shows that the price paid per kilometer of snow removed increased significantly between the two analyzed periods. To test, from a statistical point of view, the differences between the two periods regarding the contract price per kilometer and the actual price paid, an analysis was made both for 2015-2016 and for 2016-2017. The statistical hypotheses that were tested were as follows: H0: µ1 = µ2, µ3 = µ4 (there are no statistically significant differences between the contract price and the price actually paid per kilometer for the 2015-2016 or 2016-2017 bids); Ha: µ1 ≠ µ2, µ3 ≠ µ4 (there are statistically significant differences between the contract price and the price actually paid per kilometer for the 2015-2016 and 2016-2017 bids). Testing of these statistical hypotheses was carried out with Student's t-test, and the results are shown in Table 2. The table indicates that there are statistically significant differences between the contract price and the price actually paid for each kilometer of snow removed, both for the period 2015-2016 (t = 12.75, p < 0.001) and for 2016-2017 (t = 15.71, p < 0.001). It should be noted that for SDN 5, the contract for the period 2016-2017 was divided into smaller batches, and thus the number of degrees of freedom (number of contracts awarded minus 1) differs for 2016-2017 (df = 49) compared to 2015-2016 (df = 42). Cluster Analysis An appropriate method to analyze market discontinuities on a given auction market is the statistical analysis of the bidding time series on that market (Hazak et al. 2016; Vadász et al. 2016). An analytical method for detecting such behaviors is the cluster method, predominantly used when there is no a priori information on the existence of collusive behaviors (Caldiero et al. 2010). The first step is to divide the auctions into two categories with significant differences between them. Thus, it is assumed that there are only two ways to bid, collusive and competitive, which means we are likely to have two clusters that differ significantly from each other. To carry out this analysis, we start with the calculation of the percentage increases between the prices actually paid by the contracting authority per kilometer of snow removed in the analyzed period (see Figure 2).
Detection of discontinuities in the auction data is accomplished by dividing the data into two groups according to the percentage increase of the amount paid per kilometer of snow removed between the two time periods defined above. The first cluster contains those auctions for which the percentage increase is significant, while the second includes auctions for which this ratio is low or even negative, i.e., the bid price in 2016-2017 is lower than in 2015-2016 (see Figure 3). A cluster analysis was performed in SPSS, splitting the auctions into two clusters. This was done to test whether there is a group of auctions for which the percentage increases in contract values are substantially higher (over 50%) in 2016-2017 than in 2015-2016. First, some data for which the analysis would have been distorted were removed; these correspond to certain changes in the contractual conditions in SDN counties 3 and 4. From this figure, the split of the auctions into two clusters can be clearly seen. The first cluster, located on the left side, includes 14 auctions (34% of the total) with the high percentage increases defined previously, ranging from 50% to 126%; thus, the price per kilometer of snow removal is significantly higher in 2016-2017 than in 2015-2016. Moreover, the standard deviation of this price series is 0.32. The second cluster, comprising 27 auctions (66% of the total), shows percentage increases ranging from −63% to 37%. The percentage increases for the second cluster have a much greater variation (the normality of the distributions is tested in the next step). Descriptive statistics of the two clusters can be seen in Table 3. Now we apply the nonparametric tests. To check whether the distribution of each cluster is normal, the Kolmogorov-Smirnov and Shapiro-Wilk normality tests were performed. These tests were applied to each cluster, and the results can be seen in Table 4. The conclusion of these tests is that the data of the first cluster are not normally distributed (Sig. = 0.004 < 0.05; Sig. = 0.001 < 0.05), while the data of the second cluster have a normal distribution (Sig. = 0.200 > 0.05; Sig. = 0.132 > 0.05). This indicates a discontinuity in the data, with a "gap" between the first and the second cluster.
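The per-cluster normality and symmetry checks can be sketched as follows; the Kolmogorov-Smirnov variant below tests against a normal fitted to the sample (an approximation of SPSS's Lilliefors-corrected test), and both cluster samples are hypothetical.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
cluster_1 = rng.beta(5, 1, 14) * 1.26     # hypothetical: piles up near the maximum
cluster_2 = rng.normal(0.0, 0.25, 27)     # hypothetical: roughly symmetric increases

for name, data in [("cluster 1", cluster_1), ("cluster 2", cluster_2)]:
    w, p_sw = stats.shapiro(data)
    d, p_ks = stats.kstest(data, "norm", args=(data.mean(), data.std(ddof=1)))
    print(f"{name}: Shapiro-Wilk p = {p_sw:.3f}, KS p = {p_ks:.3f}, "
          f"skewness = {stats.skew(data):.2f}, excess kurtosis = {stats.kurtosis(data):.2f}")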
The auctions in the first cluster, those whose increase in the actual price paid in winter 2016-2017 compared with 2015-2016 was between 49% and 175%, would require a more in-depth analysis. Now we test normality and symmetry. This step is closely related to the previous one: we tested the symmetry of the two distributions to check for differences between the empirical and theoretical models. In theory, percentage increases due to collusive behavior follow an asymmetric distribution, while percentage increases arising from competitive behavior are associated with symmetric distributions. For this, we calculated the coefficient of symmetry (skewness) and the kurtosis of the distribution for the two clusters. The results can be seen in Table 5. From the table, we observe that the data of the first cluster are asymmetric, meaning that the auctions in this cluster tend to concentrate at the maximum value, which may correspond to collusive behavior, while the data of the second cluster have a symmetric distribution, corresponding to possibly unaffected auctions. Box-Plot Analysis The data discontinuity described above can be noted in Figure A2 (Appendix A). The cluster analysis in SPSS 23 shows the following distribution in the two clusters (see Figure 4). Normality tests for the two clusters can be seen in Table 6. The conclusion of these tests is that the data of the first cluster are not normally distributed (Sig. = 0.000 < 0.05), while the data of the second cluster have a normal distribution (Sig. = 0.061 > 0.05, Shapiro-Wilk test; Sig. = 0.073 > 0.05, Kolmogorov-Smirnov test). This confirms that the difference between the two clusters is statistically significant. Now we focus on the auctions for winter 2016-2017.
The data discontinuity described above can be seen in Figure A3 (Appendix A). The cluster analysis in SPSS shows the next split between the two clusters (see Figure 5). Normality tests for the two clusters can be seen in Table 7. The conclusion of these tests is that the data of the second cluster are normally distributed (Sig. = 0.60 > 0.05, Shapiro-Wilk test; Sig. = 0.200 > 0.05, Kolmogorov-Smirnov test), while the data of the first cluster do not have a normal distribution (Sig. = 0.033 < 0.05, Shapiro-Wilk test; Sig. = 0.025 < 0.05, Kolmogorov-Smirnov test). This statistically confirms the discontinuity in the data from the two clusters. Now we perform a cluster analysis on the percentage increases between the contract price per kilometer in 2016-2017 compared to 2015-2016. The data discontinuity described above can be seen in Figure A4 (Appendix A). The cluster analysis in SPSS shows the next division into the two clusters (Figure 6). Normality tests for the two clusters can be seen in Table 8. The conclusion of these tests is that the data of the second cluster are normally distributed (Sig. = 0.096 > 0.05, Shapiro-Wilk test; Sig. = 0.062 > 0.05, Kolmogorov-Smirnov test), while those of the first cluster do not have a normal distribution (Sig. = 0.000 < 0.05, Shapiro-Wilk test; Sig. = 0.000 < 0.05, Kolmogorov-Smirnov test). The statistical tests performed in this section prove the existence of two types of auctions during the analyzed period: competitive and noncompetitive. Discussion of the Results The existence of the two clusters indicates a certain "gap" or discontinuity in the data. This discontinuity occurs at auctions where price growth exceeded 15% in 2016-2017 compared to 2015-2016. Most maintenance contracts for snow removal show a 20-40% increase in value in the 2016-2017 winter season compared with 2015-2016. Regarding the actual price paid per km of snow removal, the percentage increases ranged from 49% to 175% in the analyzed period, which may not be economically reasonable. From the cluster analysis of the percentage increases between the actual price and the contractual price per designated km in the two analyzed seasons (winter 2015-2016 and winter 2016-2017), it follows that there is a category of contracts whose values follow a normal (Gaussian) distribution and another category of contracts that does not follow such a distribution. This indicates that the auctions in the non-normal cluster could raise some suspicions about the organization/performance of these award procedures. The lots/SDNs in this category are found in Figure A2 of Appendix A. We also note a discontinuity in the data as far as it concerns the contract prices per km of snow in the 2016-2017 season, compared to 2015-2016.
This discontinuity appears in contracts whose value increased by over 15% (with a maximum increase of 104%) (Figure A1 in Appendix A). This is a confirmation, through the statistical analysis, that there were certain award procedures in which the increase in the price paid per kilometer of snow removed can raise suspicions about the organization/running of these procedures. The lots/SDNs in this category can be found in Figures A1 and A2 of Appendix A. We may conclude, based on the testing of the statistical hypotheses, that there was a significant difference between the contractual price and the actual price paid per km of snow removed, both for winter 2015-2016 and for winter 2016-2017 (Table 2). In addition, there are statistically significant differences in the actual price paid per km in the 2016-2017 season compared with 2015-2016 (Figures A3 and A4 in Appendix A). These results show that we may be in the presence of an anticompetitive practice/agreement/bid-rigging in the procedures for the snow removal auctions for the period 2016-2017. The econometric analyses used in our study (Sections 3.1-3.3) supported the finding of a cartel agreement. Cluster analysis, statistical hypothesis testing, normality and symmetry tests, and nonparametric tests reveal two types of auctions during the analyzed period: competitive and noncompetitive bids. Conclusions This scientific research confirms the need to pay maximum attention to the procurement problem, for the reasons we referred to in the paper, in line with our wide review of the specialized literature. Indeed, a clear institutional framework was adopted at the EU level, containing norms meant to bring significant improvements to the above-mentioned plan, which were predominantly transposed at the Member State level. However, as we have found by studying the reports of the relevant institutions, the gap between expectations and actual achievements remains wide. Second, when we discuss the issues related to the elaboration and implementation of national legal instruments aimed at stimulating public procurement, we come upon a whole series of critical issues. In a nutshell, we find that these instruments have not demonstrated enough efficiency in stimulating green procurement in the public sector. The statistical analyses underline the high probability that the prerequisites for alleged anticompetitive agreements existed between the undertakings which participated in the public procurement auctions in the analyzed period. The analytical methods for detecting anticompetitive behaviors are often used by competition authorities worldwide in dealing with anticompetitive cases. Enterprises can claim compensation whenever they have been harmed by the existence of a cartel on their operational market. The use of analytical methods based on statistical data can be a way of observing certain anticompetitive behaviors on the market. By utilizing these methods, we are not able to directly prove the collusive behavior of the analyzed enterprises, but we can highlight improbable results, which would require more careful attention. These methods aim primarily to avoid false-positive and false-negative results. A false-positive result states that there is an anticompetitive agreement on a given market, although it does not actually exist. A false-negative result states that there is no anticompetitive agreement on a certain market, although such a cartel really exists.
Moreover, these analytical methods should have empirical support, be easy to apply, and not be too costly to implement. One limitation of this study comes from the fact that only five regions were analyzed; further research should therefore be extended to other regions. Another limitation relates to the fact that the number of companies able to provide the service that is the object of the contract differed slightly between the two analyzed periods. Thus, further research should focus on similar analyses of other types of bid-rigging in public procurement. Figure A4. Percentage increase in estimated prices per kilometer of snow in 2016-2017 as compared to 2015-2016.
Pitting Corrosion of Hot-Dip Galvanized Coatings Lead (Pb) addition to hot-dip galvanizing (HDG) baths affects the physical characteristics of zinc coatings and is also useful to protect kettles. The influence of lead additions on corrosion rate and morphology, as well as on the structure of the zinc coating, is less investigated. In this paper, three different additions (Pb = 0.4-0.8-1.2 w/w) were chosen for three series of steel substrates, plus references without lead. The three steels chosen as substrates contained silicon (Si) = 0.18, 0.028, 0.225 w/w, respectively. The experimental part included both macro- and micro-electrochemical measurements, weight loss vs. time plots, Glow Discharge Optical Emission Spectroscopy (GDOS), and SEM/EDX microanalysis of both the surface and the cross-section of the samples. Lead concentration is responsible for evident bimetallic coupling in the surroundings of lead inclusions, with a consequently increased dissolution rate, chunk effect, and rougher surface morphology. Introduction Hot-dip galvanizing found its first application after an original idea of a chemist in 1742. The first industrial plant in Italy was built in Milan in 1883. Since the first industrial applications, it has appeared to be among the best methods to protect steel surfaces against the aggressivity of the atmosphere; continuous improvements have been applied to obtain the best results both for corrosion resistance and for aesthetics, leaving aside the economic aspects linked to the thickness of the zinc layers, the time of permanence in the bath, the bath composition and temperature, and the quality of the steel. The widening of its use enhances the need for a well-defined relationship between aesthetics and durability, since the former can vary from a shiny silver to a dull matte grey finish depending upon steel composition, bath composition, bath temperature, bath shape, and the mass of the object. This concern has caused many research works to be published with the aim of optimizing the main functional parameters of the coating, namely morphology (for both aesthetic and durability reasons), corrosion rate, and the consequent corrosion morphology as related to environmental impact. Both thickness and composition of the coating are mainly driven by the process parameters, namely bath and steel substrate composition and bath immersion time, not forgetting those parameters more linked to skillful operation, such as the shape of the objects, temperature, and time of immersion [1][2][3][4]. Silicon is added to steels to remove oxygen; these steels are known as "killed steels". The influence of the amount of silicon and phosphorus in steel on HDG coatings is widely described in the literature [5][6][7][8][9][10][11][12][13]. The silicon content should always be taken into consideration for steels that will be galvanized (see e.g., Figure 1). The silicon and phosphorus content is the basis for the following division of steels into four groups [2]: (1) low silicon, < 0.03% (Si + P); (2) the Sandelin range, 0.03%-0.12%; (3) the Sebisty range, 0.12%-0.28%; (4) high silicon, 0.28%-0.60% (see the short classification sketch below). In HDG baths, Al and Pb are added due to their influence on both the thickness and the brilliant appearance of the coating. Pb addition affects the physical characteristics of zinc, in particular both viscosity and surface tension [15]. The results are a better wetting of the steel by the molten zinc and, due to the increased fluidity of the bath, an easier flow of excess zinc from the surface of the coated object during extraction from the tank, with a correspondingly reduced coating thickness. Krepski [16] has shown that an addition of 0.03%-1.2% weight Pb decreases zinc consumption by up to 60%.
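The four-group silicon classification quoted above can be expressed as a small lookup; the helper below is ours (the function name and labels are illustrative), with thresholds taken directly from the list.

def classify_steel(si_plus_p: float) -> str:
    """Classify a steel for galvanizing by its Si + P content (w/w %)."""
    if si_plus_p < 0.03:
        return "low silicon"
    if si_plus_p <= 0.12:
        return "Sandelin range"
    if si_plus_p <= 0.28:
        return "Sebisty range"
    if si_plus_p <= 0.60:
        return "high silicon"
    return "outside the tabulated ranges"

for content in (0.01, 0.05, 0.18, 0.35):
    print(f"Si + P = {content:.2f}%: {classify_steel(content)}")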
Pb additions are also useful to protect the tank, especially in the cyclic operation of extraction of the mattes. In fact, lead has a higher specific weight than zinc and tends to deposit in contact with the bottom of the tub. In the presence of a deposit of molten Pb lying on the bottom of the tank, the mattes will float on it. In this way, a gap is created between the mattes and the tub, which enables the buckets to slip in and remove them without the risk of hitting and damaging the bottom of the tank [17]. The standard ISO 14713-1:2019 [18] indicates that HDG coatings with a thickness of 85 µm should protect steel for 10-20 years in a C5 corrosive atmosphere. It was observed, in fact, that pitting corrosion sometimes appears already after 2-3 years on roads and in urban infrastructure, both in Poland and in the Czech Republic [19,20]. Literature reports on the influence of lead vary from concluding no influence (Schulz and Thiele [2]) to a strong influence on the growth of zinc crystallites and their dendritic structure, thereby modifying both texture and appearance and also positively affecting corrosion protection [22]. Changes in crystal orientation and their influence on corrosion protection have been investigated by Chang and Shin [23] and in other works [24][25][26]. No effect on corrosion was observed after 20 years of field tests on zinc coatings with lead contents of 0.0055%, 0.049%, and 0.84% [27]. The same results are reported for percentages of 0.5% and 0.68% [28]. The results presented by Vala [29] show that the addition of lead to zinc in the galvanizing bath promotes the formation of large, smooth surface spangles providing good sacrificial ability. He concluded, from a comparison of calculated corrosion rates, that the optimum lead content in the bath is 1.0% to 1.5%.
Data available in the literature are therefore dispersed, most probably by reason of chunk effects [32,33] and bimetallic corrosion, and the reliability of this last statement [29] needs further confirmation. In this paper, we intend to investigate the effects of lead on zinc layers grown on three steel substrates with different silicon contents. Their chemical composition was determined via the spark OES method using a Bruker Q8 Magellan device (Table 1). The steel samples were degreased and rinsed in dichloroethane, etched for 20 min in an 18% HCl solution, then rinsed in demineralized water and immersed in Tegoflux 60 flux for approximately 5 min. After drying, the samples were kept in a dryer at 140 °C; they were then hot-dip galvanized at 450 ± 0.5 °C for 6 min in zinc baths with additions of 0-0.4-0.8-1.2 w/w Pb. The lower amount of lead was chosen based on the average values used in galvanizing plants, and the upper value is below the solubility limit of lead at 450 °C (1.6%). For every test, the number of samples tested is given. The zinc coatings were produced at the Silesian University of Technology in Katowice, Poland. Weight Loss Measurements on Samples Exposed in the Salt Chamber Gravimetric tests for weight loss were performed on zinc coatings exposed in a salt chamber (Klimatest HKT 500) for 16, 49, 63, 91, and 117 days. Before weighing, zinc corrosion products were selectively dissolved in an aqueous solution of chromium trioxide (200 g/L) at 80 °C for about 1 min, and the samples were then rinsed in distilled water and dried. The results are presented as the average value of three different specimens. Morphology of the Zinc Layers The morphology of the zinc layers was studied by means of a JEOL 6010 LV scanning electron microscope, both on the upper surface and in cross-section, also determining the average thickness. Chemical analysis was carried out with an EDX analyzer. The results are presented as a significant representation of three measurements on two different specimens. GDOS Profile Spectra Glow Discharge Optical Emission Spectroscopy (GDOS) was employed to describe the chemical profile of the samples. The experimental set-up has been described in [34]. Two samples were used for each measurement, and each one was scanned five times to obtain an average, reliable result. Potentiodynamic Tests The tests were performed in a three-electrode system (SCE reference, platinum counter-electrode, and test sample as working electrode) in the potential range from −1.3 V SCE to −0.4 V SCE. A scanning speed of 0.2 mV/s was used.
The IVIUM Stat set was used for the electrochemical tests. Every test was repeated three times for every sample in Table 2. The potential of the samples was measured before recording the polarization curves. The tested surface area was the same every time. The tests were carried out in a 3% sodium chloride solution. Each time, the potential was stabilized for 300 s. Local Electrochemical Measurements The local electrochemical behavior of the samples was investigated using the local microcell technique, which allows analyzing the corrosion behavior of the material at the microscale [35][36][37][38]. The experimental set-up of the system is shown in Figure 2. Its high resolution enables local electrochemical measurements in micro-areas and the electrochemical characterization of the behavior of the individual phases of metallic or intermetallic inclusions and precipitates. An 80-micrometer diameter micro-capillary was used. The micro-capillary is mounted in an electrochemical cell with a platinum counter-electrode and an Ag/AgCl reference electrode. Both the cell and the micro-capillary tube were filled with electrolyte (in this case 0.1 M NaCl). The microcell apparatus was homemade, and the potentiostat was an Autolab PGStat-30. A scanning speed of 1.0 mV/s was used. The electrochemical cell is placed at the focus of one lens of the optical microscope, which ensures both the precise location of the micro-capillary and a measurement at a selected place on the surface of the working electrode, i.e., the sample (Figure 2). The end of the capillary is coated with silicone, which creates a seal, ensuring good contact of the micro-capillary with the surface of the tested sample and preventing leakage of electrolyte. Since the measurements relate to very small areas and therefore to low currents (on the order of nA to pA), the whole system was placed in a Faraday cage, which acts as a screen protecting against external electromagnetic fields that might affect the measurements. Reliable measurements were possible only on samples with lead inclusions larger than 50 µm; the best results were obtained with sample A3 (1.2% Pb on low silicon steel). The tests were carried out at AGH in Krakow, Poland.
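As a rough companion illustration of how polarization curves of the kind recorded here are commonly reduced to a corrosion current density, the sketch below performs a textbook Tafel extrapolation on a synthetic curve; the corrosion potential, current density, and Tafel slopes are assumed values, not measurements from this study.

import numpy as np

# Synthetic polarization curve of Butler-Volmer type around an assumed E_corr.
E_corr, i_corr = -1.05, 1e-5            # V vs SCE, A/cm^2 (assumed)
beta_a, beta_c = 0.04, 0.12             # anodic/cathodic Tafel slopes, V/decade (assumed)
E = np.linspace(-1.3, -0.8, 500)
eta = E - E_corr
i = i_corr * (10 ** (eta / beta_a) - 10 ** (-eta / beta_c))

# Fit each branch well away from E_corr (|eta| > 50 mV) in E vs log10|i|,
# then extrapolate back to E_corr to recover log10(i_corr).
for mask, name in [(eta > 0.05, "anodic"), (eta < -0.05, "cathodic")]:
    slope, intercept = np.polyfit(np.log10(np.abs(i[mask])), E[mask], 1)
    log_i = (E_corr - intercept) / slope
    print(f"{name} branch: i_corr ~ {10 ** log_i:.2e} A/cm^2")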
Results
Tests were made on the samples described in Table 2. The tests described in the Materials and Methods part involved multiples of 12 different samples, corresponding to three different steel types in four different types of bath. Table 2 also includes the thickness of the coatings. It is evident that the zinc coatings are thinner on the steel substrate with 0% Si when compared to the coatings on the 0.18% Si steel. The higher percentage of Si (0.228%) again reduces the zinc coating thickness. The weight losses of the samples up to 117 days in the salt spray chamber are displayed in the corresponding figure. Weight loss was the highest for the low Sebisty range steel; the two other steels showed similar weight losses.
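Since the long-term gravimetric weight loss is the quantity this study ultimately relies on, it may help to show how such mass losses translate into an average corrosion rate. The sketch below uses the standard ASTM G1-style conversion; the exposed area in the example is an assumed placeholder, not a value reported in this paper.

```python
def corrosion_rate_mm_per_year(mass_loss_g, area_cm2, hours, density_g_cm3=7.14):
    """ASTM G1-style conversion of gravimetric mass loss to corrosion rate.

    CR [mm/y] = K * W / (A * t * rho), with K = 8.76e4 for grams,
    cm^2, hours, and g/cm^3. Density defaults to that of zinc.
    """
    K = 8.76e4
    return K * mass_loss_g / (area_cm2 * hours * density_g_cm3)

# Hypothetical numbers of the order reported here: ~5 g lost after
# 117 days in salt spray, from an assumed 100 cm^2 exposed area.
rate = corrosion_rate_mm_per_year(5.0, 100.0, 117 * 24)
print(f"average corrosion rate ~ {rate:.3f} mm/y")
```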
The mirror-finished cross-sections of the samples were observed by SEM analysis (Figure 6) and submitted to GDOS in order to obtain a chemical profile along the coating depth (Figure 7) before salt spray exposure. Table 3 summarizes the whole set of results, including the approximate thickness evaluation of the coating cross-sections by both GDOS and SEM and the evaluation of the sequence of the phases. Both the low-silicon steel (A0-A3) and the high Sebisty range steel (C0-C3) show a well-distinguished sequence of phases, while the B-series shows a more disordered view. Thickness values are displayed on top of every cross-section. The distribution of the phases in the coatings, as well as of both iron and added lead, is clearly displayed in the GDOS profiles, as can be seen, e.g., in Figure 7, relevant to the low-silicon steel. Figure 8 compares the lead distribution shown by GDOS depending on the steel type and the amount of lead added. A high concentration of lead accumulated prevailingly towards the upper part of the layer; this effect is less evident at the highest lead content (1.2%). Table 3 also summarizes the phase thickness evaluations derived from these profiles (a scripted version of this kind of evaluation is sketched below). The corroded surfaces of the samples after the salt spray chamber were analyzed by SEM, and EDX spectra were obtained at various locations relevant to the effect of Pb inclusions on the surface morphology. A representative example of the obtained results is shown in Figure 9, relevant to sample B2, with the locations of the EDX spectra listed in Table 4 (columns: Spectrum, O, Fe, Zn, Pb, Total, all in w/w%). An example of the effect of the lead addition, and of the consequent inclusion of lead in the form of droplets, on the surface morphology is shown in Figure 9. Table 4 shows the differences in lead distribution on the tested surface because of bimetallic corrosion.
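The phase thickness evaluation from GDOS profiles essentially amounts to locating the depths at which the Fe signal crosses characteristic levels. The following is a minimal sketch under stated assumptions: the Fe profile is synthetic (a smooth sigmoid, not the data of Figure 7), and the threshold values are only loosely inspired by the Fe contents commonly quoted for the η and ζ/δ Fe-Zn phases.

```python
import numpy as np

def crossing_depth(depth_um, fe_pct, threshold):
    """Depth at which the Fe profile first crosses a given threshold,
    found by linear interpolation between neighbouring GDOS points."""
    fe = np.asarray(fe_pct)
    d = np.asarray(depth_um)
    idx = np.argmax(fe >= threshold)        # first point at/above threshold
    if idx == 0:
        return d[0]
    f0, f1 = fe[idx - 1], fe[idx]           # interpolate between idx-1 and idx
    return d[idx - 1] + (threshold - f0) * (d[idx] - d[idx - 1]) / (f1 - f0)

# Synthetic profile shaped like a typical coating-to-substrate Fe rise.
depth = np.linspace(0, 120, 121)                     # µm
fe = 90 / (1 + np.exp(-(depth - 80) / 6))            # sigmoidal Fe increase
eta_zeta = crossing_depth(depth, fe, 6.0)            # assumed eta/zeta limit
zeta_delta = crossing_depth(depth, fe, 11.0)         # assumed zeta/delta limit
print(f"eta thickness ~ {eta_zeta:.1f} µm, zeta ~ {zeta_delta - eta_zeta:.1f} µm")
```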
The effect of the Pb inclusions was further investigated by means of local micro-electrochemical analysis, which showed evidence of the difference in both corrosion potential (Figure 10) and polarization behavior (Figure 11) between the Pb inclusions and the surrounding Zn matrix. The whole set of potentiodynamic curves is shown in Figure 12. The random distribution of the Pb inclusions accounts for the establishment of a mixed potential value which is in continuous evolution during the action of the local microcells. It follows that the average corrosion rate can be evaluated only through the long-term weight loss, while the instantaneous corrosion rate does not give any reasonably plottable value through the calculation of the polarization resistance. Nevertheless, the E/log i plots shown in Figure 12 allow the argument that the rate-determining mechanism is linked to Zn dissolution, while the cathodic reactions depend on the instantaneous conditions of the coupling with the Pb inclusions.
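The mixed-potential argument can be made concrete with a toy calculation: on a surface that is mostly anodic Zn with a small area fraction of Pb carrying the cathodic reaction, the open-circuit potential settles where the area-weighted anodic and cathodic currents balance, and the Zn dissolution current grows with the Pb fraction. All kinetic parameters below are illustrative placeholders, not values fitted to Figures 10-12.

```python
from scipy.optimize import brentq

# Illustrative Tafel parameters (placeholders, not from this study):
E_zn, i0_zn, b_zn = -1.05, 1e-6, 0.04  # Zn dissolution: E [V], i0 [A/cm^2], slope [V/dec]
E_c,  i0_c,  b_c  = -0.60, 1e-7, 0.12  # cathodic reaction on Pb inclusions

def net_current(E, f_pb):
    """Area-weighted net current density on Zn with a Pb area fraction f_pb."""
    i_anodic = (1 - f_pb) * i0_zn * 10 ** ((E - E_zn) / b_zn)    # Zn -> Zn2+
    i_cathodic = f_pb * i0_c * 10 ** (-(E - E_c) / b_c)          # reduction on Pb
    return i_anodic - i_cathodic

for f_pb in (0.001, 0.01, 0.05):
    E_mix = brentq(net_current, -1.3, -0.4, args=(f_pb,))        # zero-crossing
    i_zn = (1 - f_pb) * i0_zn * 10 ** ((E_mix - E_zn) / b_zn)
    print(f"Pb fraction {f_pb:>5}: E_mix ~ {E_mix:.3f} V, Zn dissolution ~ {i_zn:.2e} A/cm^2")
```

Even in this crude model, increasing the Pb area fraction ennobles the mixed potential and raises the zinc dissolution current, which is the qualitative behavior the weight loss data reflect.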
Discussion
The aim of the research was to investigate the appearance of corroded areas, with both aesthetic and functional effects, on HDG structures of road infrastructure constructions in Poland. The analysis of extracted samples showed that the surface degradation was accompanied by the presence of lead inclusions. The study was directed to three possible structural steels whose composition might affect both the shape and the thickness of the HDG layer as a function of the Si content, centered on the low-silicon and Sebisty ones. The Pb content span was chosen in order to obtain a single-phase bath, as usually used in practice, taking into account that at the normally used bath temperature of 450 °C the solubility limit is about 1.6 w/w%. At room temperature Pb is practically insoluble in zinc. It is distributed in the upper layers of the coating in the form of randomly distributed droplets, except at the highest concentration, where a more uniform distribution of droplets is observed, as both EDX and GDOS showed. Such a morphology is suitable to produce, due to the large separation between zinc and lead in the galvanic series, a corresponding number of galvanic microcells producing an increase of the zinc dissolution rate. This is reflected by the results of the weight loss measurements. HDG layers on steel are meant to provide a stable, uniform surface aspect and durable corrosion resistance in harsh environments. HDG is carried out in many different factories with a variety of bath formulations and on steels of different structure and composition. Table 2 provides a matrix of the most common combinations of lead additions to the bath and silicon contents in the steel. Mainly the weight loss (proportional to the actual corrosion rate) and the consequent surface morphology were interrelated as a function of layer structure, composition, and thickness. Figures 3-8, together with Tables 3 and 4, summarize the experimental results. It appears that the zinc coatings grown on the A- and C-samples show a clear and well-distinguished sequence of phases, while on the B-samples the ζ phase is more disordered, with the presence of crystals of different shape and composition. The A- and C-samples show similar weight losses, while for the B-samples the weight loss is about double (5-6 g after 117 days compared to about 2-3 g). The weight losses for all baths without lead were lower compared to the alloyed ones. Up to 92 days, for all steels, the weight loss was proportional to the lead amount. Between 92 and 117 days this trend changed for the C2 and C3 samples and for the A2 samples. The morphology of the corroded samples mirrors the complexity of both the structure and the composition of the surface layers, including the presence of elements differing in electronegativity. Fe is more evenly distributed due to its higher solubility in Zn, while Pb is present as an archipelago of inclusions. The aggressive salt spray environment causes many so-called "chunk effects", i.e., the removal of intact Pb particles due to bimetallic corrosion of the surrounding anodic Zn matrix. After 92 days, the removal of many Pb droplets leaves the remaining zinc surface less affected by bimetallic corrosion, with a consequently lower weight loss. After 48 days the weight loss is lower, probably because all the corrosion products present at the surface of the samples were completely dissolved.
The cross-section SEM micrographs (Figure 6) show clearly that on samples A0-A3 the distribution of the phases along the growth of the zinc layer is quite ordered, with a hardly perceivable thinning of the η layer and a progressive thinning of the whole layer with increasing Pb content. On samples B0-B3 the η layer is intrinsically thinner (see sample B0 in comparison, e.g., with A0), and the Pb content worsens this phenomenon, so that the ζ phase is partly in contact with the environment. Again, Pb tends to produce thinner coatings. On the high-Si steels (samples C0-C3), Pb additions show no influence on either thickness or phase distribution. The analysis of the GDOS spectra allows an empirical evaluation of the HDG phases (Table 3). The B samples obviously give thicker coatings, while Pb develops wider η phases, though extensively polluted by Pb particles. The corrosion morphology also affects the zinc dissolution rate as well as the aesthetics of the structures. The EDX analysis (Table 4) of the corroded sample B2 shown in Figure 9 displays the complex morphology and composition of the surface. The composition of the whole sample is reflected in the data of spectrum 1, while spectra 5 and 6 point out the composition of oxidized Pb inclusions. The comparison of spectra 2 and 3, both showing no Pb, reflects the relative inertness of an area where Pb inclusions were absent, as compared with the areas relevant to spectra 3 and 5, where a dense population of inclusions accelerated the local corrosion of the anodic surrounding zinc, with a consequent chunk effect and detachment of the inclusions. Consequently, a mismatch follows between weight loss and corrosion current, and hence corrosion rate. The micro-electrochemical measurements provide clear evidence of this situation. They account for the complexity of the surface composition; as an example, scattered electrochemical behavior is observed at different locations on the same sample. When the measurements are carried out on a Pb inclusion, the corrosion potential is about 500 mV nobler than the value observed on zinc (Figure 10), while the breakdown potential is about 300 mV nobler (Figure 11). The presence of a very active bimetallic coupling is clearly detected. The analysis of the potentiodynamic plots puts in clear evidence that the anodic branches are hardly distinguishable from each other. The complex mechanism of depolarization, as well as of inhibition, of both branches of the corrosion process leads to the conclusion that the corrosion rate is mainly the result of zinc dissolution in the NaCl solution. The random distribution of the Pb inclusions accounts for the establishment of mixed potential values which are in continuous evolution during the action of the local microcells. It follows that the average corrosion rate can be evaluated through the long-term weight loss, while the instantaneous corrosion rate does not give any reasonably plottable value of polarization resistance. High lead additions are harmful for both the durability and the reliability of HDG coatings. On-site observations have shown that the effects are much more extended on occluded surfaces. Given the highly scattered results present in the literature on the topic, further investigations of the different environments, as well as of the geometry of the structures, are necessary.
Possible replacements with bismuth (for both durability and ecology reasons) are being studied at 10 times lower concentrations than lead, taking into account both the size and the distribution of the Bi inclusions [39,40].
Discovery of a novel ferroptosis inducer, talaroconvolutin A, killing colorectal cancer cells in vitro and in vivo
Ferroptosis is among the most important mechanisms of cancer suppression, and it could be harnessed for cancer therapy. However, few natural small-molecule ferroptosis inducers with cancer-inhibitory activity have been identified to date. In the present study, we report the discovery of a novel ferroptosis inducer, talaroconvolutin A (TalaA), and the underlying molecular mechanism. We discovered that TalaA killed colorectal cancer cells in dose-dependent and time-dependent manners. Interestingly, TalaA did not induce apoptosis, but strongly triggered ferroptosis. Notably, TalaA was significantly more effective than erastin (a well-known ferroptosis inducer) in suppressing colorectal cancer cells via ferroptosis. We revealed a dual mechanism of TalaA's action against cancer. On the one hand, TalaA considerably increased reactive oxygen species levels to a certain threshold, the exceeding of which induced ferroptosis. On the other hand, this compound downregulated the expression of the channel protein solute carrier family 7 member 11 (SLC7A11) but upregulated arachidonate lipoxygenase 3 (ALOXE3), promoting ferroptosis. Furthermore, in vivo experiments in mice evidenced that TalaA effectively suppressed the growth of xenografted colorectal cancer cells without obvious liver and kidney toxicities. The findings of this study indicate that TalaA could be a powerful potential drug candidate for colorectal cancer therapy due to its outstanding ability to kill colorectal cancer cells via ferroptosis induction.

Introduction
Colorectal cancer (CRC) is one of the most frequent cancer types. It ranks third in morbidity and second in mortality globally, and was associated with nearly two million new cases and ~900,000 deaths in 2018 alone 1,2. Global statistics show that CRC is widespread, especially in economically developed regions such as Europe, North America, Australia, and Japan 3. In China, the incidence of CRC is on the rise alongside continuing economic development 3. The occurrence of distal colon and rectal cancers has increased most rapidly in adolescents and young adults in recent years 4,5. Various strategies have now been developed for the treatment of CRC, such as surgery, chemotherapy, radiotherapy, targeted therapy, and immunotherapy 6,7. Chemoradiotherapy is often used before or after surgery to prevent recurrence and metastases of the disease 8,9. However, current chemotherapy drugs are not able to fully control CRC. Relapse occurs in ~30% of stage I-III and 65% of post-stage-IV patients, which emphasizes the urgency of the search for new, more effective drugs 10. The cancer-preventive and anticancer activities of plant-derived small-molecule compounds, such as terpenoids, carotenoids, anthocyanidins, and flavonoids, have been extensively investigated 11. Some compounds of the aforementioned classes regulate gene expression and are thus involved in crucial biological processes such as cell proliferation, differentiation, apoptosis, and autophagy 6,12. In addition to plant-derived small-molecule compounds, microbe-derived small-molecule compounds have also attracted substantial research attention in recent years and have undergone screening for determination of their anticancer potential 13,14. For example, Ekbatan et al.
reported that chlorogenic acid suppressed the proliferation of colon cancer Caco-2 cells via cell cycle arrest and apoptosis induction. However, the IC50 value of chlorogenic acid for colon cancer cell proliferation was higher than 100 μM, which limited its feasibility 14. Hence, effective microbial molecules capable of destroying colon cancer cells have yet to be identified. Therefore, we dedicated our efforts to searching for new, more effective microbial sources of small-molecule compounds that can inhibit or kill CRC cells. Cancer cell death can occur via different mechanisms, such as necrosis, apoptosis, autophagy, pyroptosis, and ferroptosis. Of these, ferroptosis is the most recently discovered cell death pathway, based on the action of ferric ions and reactive oxygen species (ROS) 15. Ferroptosis is morphologically and mechanistically distinct from apoptosis 16. Cells undergoing ferroptosis have specific morphological features, including ruptured cell membranes, vesicle formation, reduced mitochondrial size, increased density of the mitochondrial membrane, reduced or absent mitochondrial cristae, and a broken outer mitochondrial membrane; the nucleus has a normal size but lacks chromatin condensation 17. Observation under an electron microscope reveals smaller-than-usual mitochondria and increased bilayer membrane density 18. Emerging evidence suggests that ferroptosis is an ancient and delicate physiological process and that insufficient ferroptosis can induce carcinogenesis 19. Ferroptosis is probably one of the most important mechanisms of cancer suppression, and it could be harnessed for tumor therapy. The use of ferroptosis for the development of new anticancer strategies has recently attracted considerable research attention. In a previous study, cancer cells were found to have higher iron requirements than normal cells, a phenomenon known as "iron addiction" 20. This specificity increases the susceptibility of cancer cells to lipid peroxidation-induced ferroptosis, which may provide new opportunities for cancer treatment 21. Several small molecules and FDA-approved clinical drugs have been established to promote ferroptosis in cancer cells. Therefore, the antitumor activities of ferroptosis inducers have recently been investigated in various experimental tumor models, which confirmed the potential of ferroptosis as a novel method of anticancer therapy 18,22. The anticancer activity of TalaA, a natural product isolated from the endophytic fungus Talaromyces purpureogenus inhabiting Panax notoginseng, had not been investigated until now. In our present study, multiple experimental perspectives were employed and, as a result, we discovered that TalaA had the ability to kill various CRC cell lines (HCT116, SW480, and SW620). Furthermore, our in vivo experiment showed that TalaA suppressed tumor growth in xenografted nude mice. In the present investigation, we found that TalaA did not induce apoptosis, but powerfully triggered ferroptosis via ROS upregulation, which led to lipid peroxidation and decreased levels of antioxidant molecules. Morphologically, TalaA caused mitochondrial shrinkage and cell membrane perforation. Moreover, the ferroptosis inhibitor ferrostatin-1 neutralized the lethal effects of TalaA, which additionally verified that TalaA activated the ferroptosis pathway.
On the other hand, transcriptome sequencing showed that ferroptosis was among the major pathways in the KEGG enrichment analysis results. We discovered that TalaA not only upregulated lipid peroxidases such as ALOXE3 and ALOX12, but also suppressed the synthesis of the antioxidant glutathione via downregulation of SLC7A11, a crucial channel protein involved in the transport of cystine from extracellular to intracellular sites. Besides, various iron metabolism-related genes, including FTL, FTH1, and FTH1P23, as well as HMOX1, were also upregulated by TalaA treatment. ROS is ubiquitous in living organisms 23. It is not only a product of normal cell physiological activities, but also an important signaling molecule 24. The growth rate and ROS levels in healthy cells of normal tissues are usually low. However, in cancer cells, ROS production is increased due to vigorous cell metabolism and proliferation. Meanwhile, cancer cells develop a set of antioxidant systems against ROS to protect themselves from ROS-caused damage; moreover, they can utilize ROS as a positive regulatory signal for enhanced survival and proliferation 25. When the oxidative stress caused by ROS is too strong, cells enter programmed death pathways such as apoptosis and ferroptosis. Of note, cancer cells have higher baseline ROS levels than normal cells. Thus, elevating the ROS content while suppressing the activities of antioxidant molecules, thereby inducing cancer cell death, would be a highly sensible strategy for cancer treatment. Notably, in this study we found that TalaA strongly elevated the ROS level in CRC cells, which was an important reason why TalaA killed cancer cells via ferroptosis. It is worth noting that the activity of TalaA in killing cancer cells and triggering ferroptosis is significantly higher than that of erastin. TalaA suppresses the growth of CRC cells through two pathways: (1) elevation of the cancer cell ROS level to initiate ferroptosis; and (2) alteration of the expression of ferroptosis-related molecules (e.g., SLC7A11, ALOXE3, GSS, and HMOX1), which accelerates ferroptosis. Due to its high anticancer activity and low toxicity, TalaA could be a powerful potential candidate drug for CRC chemotherapy. This study reveals the anticancer mechanism of TalaA and provides important experimental evidence that will facilitate the development of novel anticancer drugs.

Fermentation, extraction, and isolation
The fungus T. purpureogenus was isolated from the stems of P. notoginseng collected in September 2015 in Baoding, Hebei Province, P.R. China. The isolate was identified as T. purpureogenus by an analysis of the ITS region of the rDNA (GenBank Accession No. KY230505) and assigned the accession no. XL-025. A voucher specimen was deposited in the School of Pharmaceutical Sciences, South-Central University for Nationalities. The fungus T. purpureogenus was inoculated aseptically into three 500 mL Erlenmeyer flasks, each containing 300 mL of potato dextrose broth (PDB), and then cultured at 28°C for 3 days with shaking at 160 rpm to afford the seed culture. The large-scale fermentation was performed in 150 flasks (500 mL), each containing 80 g of rice and 80 mL of glucose solution (20 g/L). Then, 5.0 mL of the seed culture was inoculated into each flask and incubated at room temperature for 50 days.
The harvested fermentation material was ultrasonically extracted three times with CHCl3/MeOH (1:1, v/v), and the organic solvent was evaporated under reduced pressure to yield a brown residue. The residue was then suspended in H2O and extracted three times with an equal volume of ethyl acetate (EtOAc) to yield 70 g of crude extract. The EtOAc extract was subjected to silica gel column chromatography (CC) with a gradient mixture of CH2Cl2/MeOH (100:1 to 0:1) to afford eight fractions (Fr. A-Fr. H). Fraction C (1.2 g) was further purified by silica gel CC (petroleum ether/EtOAc = 1:15), Sephadex LH-20 (MeOH), and semi-preparative HPLC using a solvent system of MeOH/H2O (96:4, 2 mL/min, 254 nm) to afford TalaA (25 mg, tR = 20 min). 1H-NMR and 13C-NMR spectra were recorded on an Agilent DD2 (600 MHz) spectrometer in CD3OD using solvent signals as internal standards (CD3OD, δH 3.30 ppm; δC 49.0 ppm). Silica gel (200-300 mesh, Anhui Liangchen Inc., China) and Sephadex LH-20 (Amersham Biosciences, Uppsala, Sweden) were employed for CC. Semi-preparative high-performance liquid chromatography (HPLC) was performed on a Lab Alliance instrument (Systems Inc., State College, Pennsylvania) using a Prevail C18 column (250 mm × 10 mm, 5 μm, GRACE Corporate, Columbia, MD, USA) and a UV detector (Model 201). Finally, TalaA was evaporated to a dry powder.

Cell activity measurement
Cell Counting Kit-8 (CCK8) is a fast and highly sensitive WST-8-based kit that is widely used for the detection of cell activity. We used a commercially available CCK8 kit (C0037, Beyotime, China) to test the anticancer effect of TalaA against CRC cells, including HCT116, SW480, and SW620, according to the manufacturer's instructions.

Measurement of DNA synthesis rate by the EdU method
Detecting the effects of compounds on cell proliferation is a basic method for evaluating antitumor activity. It is widely accepted that the most accurate way is to directly detect DNA synthesis in cells. EdU (5-ethynyl-2′-deoxyuridine) is a thymidine analog which can be substituted for thymidine in DNA synthesis. We used the EdU-594 cell proliferation detection kit (C0078L, Beyotime, China) to examine the synthesis of DNA according to the manufacturer's instructions. The results of the EdU staining were photographed under a fluorescence microscope.

Flow cytometry for cell cycle and apoptosis tests
To examine the effects of TalaA on the cell cycle, flow cytometry was employed. The cells were harvested by trypsin digestion after treatment and fixed in 70% cold ethanol (in PBS) overnight at 4°C. Then PI/RNase staining solution (#4087, Cell Signaling Technology, USA) was added for 20 min (in the dark), and the stained cells were tested and counted by flow cytometer. The cells for apoptosis analysis were harvested by trypsin treatment and stained with the Annexin V-FITC Early Apoptosis Detection Kit (#6592, Cell Signaling Technology, USA) for 20 min in darkness. The stained cells were counted and recorded by flow cytometer.

Cell clone formation assay
The cell clone formation test is a powerful technique to detect cell proliferation ability or sensitivity to killing factors. Three hundred cells were cultured in a 12-well plate. After incubation at 37°C with 5% CO2 for 12 h, different concentrations of TalaA were added and the cells were cultured for 12 days (changing the media and TalaA every 3 days).
After chemical treatment, the cells were fixed with 4% PFA for 10 min and stained with crystal violet solution (C0121, Beyotime, China). The stained cells were photographed under a microscope.

Cell membrane staining experiment
To study the effects of TalaA on cell membranes, we performed DiO staining experiments. DiO, short for 3,3′-dioctadecyloxacarbocyanine perchlorate, is one of the most commonly used fluorescent membrane probes. DiO is a lipophilic membrane dye which, after entering the cell membrane, gradually stains the whole membrane by lateral diffusion. After cell culture and chemical treatment, the cells were washed with PBS and fixed with 4% PFA. Then the DiO cell membrane staining kit (C1038, Beyotime, China) was used to stain the cell membrane according to the manufacturer's instructions, and the stained cells were photographed under a fluorescence microscope.

ROS test by H2DCFDA
H2DCFDA is a cell-permeable probe used to detect intracellular ROS. To detect the ROS induced by TalaA, the ROS-sensitive probe H2DCFDA (HY-D0940, MCE, China) was employed. After the cells were co-incubated with H2DCFDA staining solution, fluorescence photographs were recorded via a fluorescence microscope. Moreover, the H2DCFDA-stained cells were also analyzed by flow cytometer.

Cell microstructure test by transmission electron microscopy
The SW480 cells, with or without TalaA treatment, were scraped with a cell scraper and centrifuged at 400×g for 10 min. After the supernatant was discarded, 0.5% glutaraldehyde fixative solution was added into the tube to suspend the cells. After incubation at 4°C for 10 min, the cells were centrifuged at 12,000×g for 10 min. Then the supernatant was discarded, and 2.5% glutaraldehyde was slowly added along the wall to fix the cells. Photos of the fixed cells were taken using a transmission electron microscope (Hitachi HT7800, Japan) at different magnifications.

Lipid peroxidation test
To test lipid peroxidation, a cell-based lipid peroxidation assay kit (Abcam, USA) was used. This kit employs a sensitive ratiometric lipid peroxidation sensor which changes its fluorescence from red to green upon peroxidation by ROS in cells. CRC SW480 cells were stained with 1× lipid peroxidation sensor for 30 min at 37°C. During the last 10 min of incubation, Hoechst 33342 was added to stain the cell nuclei. After incubation, the cells were washed three times with HHBS and imaged with a fluorescence microscope.

RNA sequencing
To reveal the transcriptome changes caused by TalaA, RNA was extracted with TRIzol reagent (ThermoFisher Scientific, USA) from SW480 cells with or without TalaA treatment. The library construction and RNA sequencing were completed by Novogene Company (Novogene, Beijing, China).

Reverse transcription PCR and real-time PCR
RNA was extracted with TRIzol reagent (ThermoFisher Scientific, USA) from SW480 cells treated with different concentrations of TalaA. The cDNA was synthesized with a cDNA synthesis kit (D7170M, Beyotime, China), and real-time PCR was performed using a SYBR Green qPCR kit (D7260, Beyotime, China) in a LightCycler 480 II (Roche, USA). The primer sequences for real-time PCR are listed in Table 1.
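SYBR Green qPCR data of this kind are conventionally quantified with the Livak 2^(-ΔΔCt) method. The sketch below is a minimal illustration of that arithmetic; the Ct values are hypothetical, and the choice of housekeeping reference gene is an assumption for illustration (the paper does not state which reference gene was used for qPCR).

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the Livak 2^(-ΔΔCt) method.

    ΔCt = Ct(target) - Ct(reference); ΔΔCt = ΔCt(treated) - ΔCt(control).
    """
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2 ** (-(d_ct_treated - d_ct_control))

# Hypothetical Ct values: SLC7A11 vs. an assumed housekeeping gene,
# control vs. TalaA-treated SW480 cells.
fc = fold_change_ddct(26.5, 17.0, 24.0, 17.1)
print(f"SLC7A11 fold change ~ {fc:.2f} (values < 1 indicate downregulation)")
```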
Western blotting
To detect the protein level alterations of ferroptosis-related molecules, western blotting was performed. The cell samples were lysed in RIPA buffer containing 0.1% SDS. After running the SDS-PAGE, the protein samples were transferred onto a PVDF membrane, followed by antibody incubation. The antibodies were as follows: rabbit anti-SLC7A11 (1:1000, PA1-16893, Invitrogen, USA), rabbit anti-ALOXE3 (1:800, ab118470, Abcam, USA), and mouse anti-β-actin (1:5000, A5441-100UL, Sigma, USA). The blots were developed with ECL, and the immunoreactive signals were generated in a luminescence detection system using a horseradish peroxidase-labeled secondary antibody.

Gene knockdown with shRNA
shRNA interference fragments were designed for human SLC7A11 and ALOXE3. These interference fragments were inserted downstream of the U6 promoter of the AgeI- and EcoRI-digested lentivirus vector (pLKD-CMV-EGFP-2A-Puro-U6-shRNA) by standard molecular biological means. The shRNA fragment sequences were as follows: SLC7A11 shRNA1:

Glutathione examination
After being washed with PBS, cells were collected with a cell scraper into a 1.5 mL EP tube. Ultrasonication was performed to lyse the cells. After centrifugation, the glutathione in the supernatant was detected with a glutathione assay kit (S0053, Beyotime, China) following the manufacturer's instructions.

Xenograft
5 × 10^6 HCT116 cells were inoculated subcutaneously in the underarm of Balb/c nude female mice (5 weeks old). The inoculated mice were randomly divided into two groups (6 mice per group). When the tumors reached 300 mm^3, the drug group was given TalaA intraperitoneally at a dose of 6.0 mg/kg, and the control group was given the same amount of the cosolvent, corn oil. The drug (or cosolvent) was injected every 2 days. Body weight and tumor volume were measured every 2 days. After the mice were sacrificed, the tumors were collected and fixed in 10% formalin. The animal experiments were carried out in accordance with animal ethics requirements. All protocols and procedures were approved by the Institutional Review Committee of Jining Medical University for animal welfare.

Statistical analysis
When comparing two groups of data, SPSS was used to analyze whether the data were normally distributed. For comparisons between two groups of normally distributed data, the t-test was employed. Differences were considered significant when the p value was < 0.05.

Purification and identification of TalaA
TalaA was purified from the solid fermentation cultures of the endophytic fungus T. purpureogenus isolated from the stems of P. notoginseng, and its purity was verified by HPLC (as shown in Fig. S1). As shown in Fig. S2, the chemical structure of TalaA was characterized by comparison of its NMR data with literature values 26.

TalaA suppressed CRC proliferation
TalaA suppressed the growth of the CRC cell lines HCT116, SW480, and SW620 in dose-dependent (Fig. 1B, C) and time-dependent manners (Fig. S4). As can be seen in Fig. 1B, in high-concentration (10%) FBS medium, the IC50 values of TalaA in HCT116, SW480, and SW620 cells were 9.23, 8.15, and 5.82 μM, respectively. In low-concentration (1%) FBS medium, the IC50 values of TalaA in HCT116, SW480, and SW620 cells were 1.22, 1.40, and 1.27 μM, correspondingly. Besides, we measured cell activity via determination of the rate of DNA synthesis. Our EdU experiments showed that treatment with TalaA decreased DNA synthesis (Fig. 1D, E). Furthermore, the results of the clone formation experiments also clearly revealed the anticancer function of TalaA against CRC cells (Fig. 1F).
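IC50 values such as those above are typically obtained by fitting a four-parameter logistic (4PL) curve to the CCK8 viability readings. The sketch below illustrates that fit; the viability data points are synthetic, chosen only so that the fitted IC50 lands near the ~8 μM reported for SW480 in 10% FBS medium, and are not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, top, bottom, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Synthetic viability fractions for illustration only; the study reports
# IC50 values of 9.23, 8.15, and 5.82 uM for HCT116, SW480, and SW620.
conc = np.array([0.5, 1, 2, 4, 8, 16, 32])              # uM TalaA
viab = np.array([0.98, 0.95, 0.88, 0.70, 0.48, 0.22, 0.08])

params, _ = curve_fit(four_pl, conc, viab, p0=[1.0, 0.0, 8.0, 1.5])
print(f"fitted IC50 ~ {params[2]:.2f} uM, Hill slope ~ {params[3]:.2f}")
```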
Morphologically, the cells treated with TalaA lost their original natural morphological characteristics, indicating that TalaA not only suppressed cell growth but also induced cell death.

TalaA killed CRC cells via ROS level elevation but not apoptosis
To investigate the mechanism of TalaA-induced CRC cell growth inhibition, we examined its effects on the cell cycle and apoptosis of CRC cells. As illustrated in Fig. 2A, TalaA did not cause cell cycle arrest, but increased the sub-G1 peak, indicating that it can induce cell death. Moreover, TalaA did not significantly induce early apoptosis (Fig. 2B), but triggered other programmed death mechanisms in CRC cells. Interestingly, TalaA treatment led to drastic changes in the morphology of the cell membrane: the membrane surface was no longer smooth and intact, but had multiple perforations (Fig. 2C). To determine the integrity of the cell membrane, we used a membrane/nucleus double-staining kit. Notably, TalaA caused a loss of the original integrity of the cell membrane (Fig. 2D). Through H2DCFDA staining, we found that TalaA treatment markedly increased ROS levels in CRC cells (Fig. 2E), which was also supported by the flow cytometry results (Fig. 2F). The ability of TalaA to change cell morphology, destroy membrane integrity, and elevate ROS levels without induction of apoptosis indicated that TalaA might trigger a cell death pathway other than apoptosis.

TalaA induced ferroptosis in CRC cells
As can be seen in Fig. 3A, TalaA-treated cells showed the typical subcellular morphological characteristics of ferroptosis: cell membrane vesicles or ruptures, smaller or shriveled mitochondria, and reduced or absent mitochondrial cristae. All the aforementioned morphological features characterize ferroptosis; transmission electron microscopy was employed to observe the cell membrane and mitochondria because it vividly reflects these subcellular characteristics. We pretreated the CRC cells with ferrostatin-1, a strong inhibitor of ferroptosis, and then treated the cells with TalaA. It is noteworthy that ferrostatin-1 significantly alleviated the TalaA-caused membrane perforations (Fig. 3B). Moreover, ferrostatin-1 dose-dependently neutralized TalaA-induced cell death (Fig. 3C), which indicated that ferroptosis is the critical mechanism by which TalaA kills CRC cells. Besides, using an iron-chelating agent, we also confirmed that TalaA induced the death of colon cancer cells through the ferroptosis pathway: as depicted in Fig. S5, deferiprone partially alleviated the cell death caused by TalaA in a dose-dependent manner.

Fig. 1 TalaA killed colorectal cancer cells. A The structure of TalaA. B CRC cells were incubated with TalaA in DMEM media containing 10% FBS for 24 h; the CCK8 kit was then employed to examine cell activities. From left to right: HCT116, SW480, and SW620 cells. For each concentration point, three repeats were performed. C As in B, but in media containing 1% FBS. D After HCT116 cells were incubated with TalaA in media containing 10% FBS for 48 h, EdU solution was added and the cells were stained according to the manufacturer's instructions. Red spots indicate EdU-positive cells, and blue spots Hoechst 33342-positive cells. E As in D, for SW480 cells. F The crystal violet staining results for the clonogenicity of SW480 cells cultured with 0-10 μM TalaA for 12 days.
Furthermore, we found that TalaA increased lipid peroxidation (Fig. 3D, E), which is an essential step and landmark of ferroptosis, and that deferiprone neutralized the TalaA-induced lipid peroxidation. Impressively, TalaA killed CRC cells more effectively than erastin, a known ferroptosis inducer (Fig. 3F). The differences between the anticancer effects of TalaA and erastin from a morphological perspective can be seen in Fig. 3G.

TalaA-induced ferroptosis was verified by RNA-seq analysis
Transcriptome sequencing was performed to further investigate the mechanism by which TalaA causes CRC cell death (Fig. 4A). Then, we conducted functional enrichment analysis of the differentially expressed genes. Remarkably, the KEGG analysis results were consistent with our speculation that TalaA could induce ferroptosis. We based our conclusion on the following observations: (1) the ferroptosis pathway was among the top pathways established by KEGG analysis in the treatments with both the low and the high concentration of TalaA (Fig. 4C, D); (2) the constructed heat map showed that the low concentration of TalaA led to significant changes in 29 genes closely related to ferroptosis, whereas the high concentration of the compound caused significant changes in 39 genes tightly associated with ferroptosis (Fig. 4E, F). (A minimal sketch of the enrichment statistic underlying such a KEGG analysis is given after this section.) Additionally, the levels of most of the ferroptosis-related molecules were altered by TalaA in a concentration-dependent manner. The RNA-Seq results were uploaded and are publicly available in the Sequence Read Archive (SRA) database (https://www.ncbi.nlm.nih.gov/sra) under accession number PRJNA637941. Using two independent approaches, we thus clearly evidenced that TalaA induces ferroptosis (Figs. 3 and 4).
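KEGG-style pathway enrichment of differentially expressed genes is commonly scored with a hypergeometric (one-sided Fisher) test. The sketch below illustrates that statistic; the gene universe size, pathway size, and DEG count are hypothetical assumptions, with only the overlap of 39 ferroptosis-related genes loosely echoing Fig. 4F.

```python
from scipy.stats import hypergeom

def enrichment_p(total_genes, pathway_genes, deg_count, overlap):
    """P(X >= overlap) under the hypergeometric null: probability of drawing
    at least `overlap` pathway members when sampling `deg_count` genes from
    a universe of `total_genes` containing `pathway_genes` pathway members."""
    return hypergeom.sf(overlap - 1, total_genes, pathway_genes, deg_count)

# Hypothetical numbers: a 20,000-gene universe, a 60-gene ferroptosis
# pathway, 2,000 DEGs after high-dose TalaA, 39 of them in the pathway.
p = enrichment_p(20000, 60, 2000, 39)
print(f"enrichment p ~ {p:.2e}")
```

Under these assumed numbers only ~6 overlapping genes would be expected by chance, so an observed overlap of 39 yields a vanishingly small p value, which is the sense in which a pathway ends up "among the top pathways".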
RT-qPCR verification
To further verify our hypothesis and confirm the above transcriptome-sequencing results, we assessed the expression levels of eight important ferroptosis-related molecules by reverse transcription PCR followed by real-time qPCR. As can be seen in Figs. 5A, 6A, and S6, molecules positively correlated with ferroptosis, such as FTL, FTH1P23, SAT2, HMOX1, ALOXE3, and ACSL5, were upregulated by TalaA. Conversely, molecules negatively correlated with ferroptosis, such as SLC7A11 and SLC39A14, were downregulated by TalaA.

TalaA-induced ferroptosis by SLC7A11 downregulation
We found that TalaA dose-dependently decreased the expression of SLC7A11 in SW480 cells (Fig. 5A, B). To investigate the involvement of SLC7A11 in TalaA-caused ferroptosis, we overexpressed SLC7A11 in SW480 cells (Fig. 5C). As can be observed in Fig. 5D, the cell activity of TalaA-treated SLC7A11-overexpressing cells was significantly higher than that of TalaA-treated control cells. This result indicated that SLC7A11 might play an important role in TalaA-induced cell death. To further confirm the role of SLC7A11 in TalaA-induced ferroptosis, we knocked down SLC7A11 (the gene for a channel protein important for cystine transmembrane transport) in SW480 cells (Fig. 5E, F). Cystine is an essential component in the synthesis of glutathione, which is a crucial antioxidant in cells. The knockdown of SLC7A11 decreased the glutathione level (Fig. 5G) and increased the sensitivity to TalaA (Fig. 5H-J). As seen in Fig. 5K, the ferroptosis inhibitor ferrostatin-1 neutralized the anticancer effects caused by SLC7A11 knockdown and TalaA treatment. It is obvious from Fig. 5 (entire panel) that SLC7A11 plays an essential part in cell resistance to ferroptosis; TalaA decreased the SLC7A11 level, thereby aggravating ferroptosis. Besides, because of the important role of GPX4 in ferroptosis, we investigated the combined effect of TalaA and the GPX4 inhibitor FIN56 on SW480 cells. FIN56 enhanced the anticancer effect of TalaA, showing that TalaA might induce ferroptosis through pathways other than GPX4 (Fig. S7).

Fig. 2 TalaA elevated ROS in CRC cells. A SW480 cells were co-incubated with or without TalaA for 24 h and stained with PI/RNase, and the stained cells were analyzed by flow cytometry to examine the cell cycle. The two red peaks represent the G1 and G2 stages, and the cross-hatched area represents the S stage. The blue arrow marks dead cell debris. B SW480 cells were co-incubated with or without TalaA for 24 h and stained with PI and Annexin V-FITC; the stained cells were analyzed by flow cytometry to examine apoptosis. C SW480 cells were treated with 0-10 μM TalaA for 24 h, and the cellular morphology was recorded by microscope. Cells with perforated membranes are marked with yellow arrows, and dead cells with blue arrows. D SW480 cells were treated with 8.0 μM TalaA for 24 h, and the cell membrane and nuclei were stained with DiO (green) and Hoechst 33342 (blue). The yellow arrow indicates a damaged membrane and the blue one a membrane fragment without a nucleus. E SW480 cells were treated with 8.0 μM TalaA for 4 h, and ROS was detected by H2DCFDA. The yellow arrows indicate cells with increased ROS. F SW480 cells treated with TalaA or H2O2 were incubated with 5 µM H2DCFDA in PBS in the dark for 30 min at 37°C. After being digested, the H2DCFDA-stained cells were analyzed by flow cytometer.

TalaA enhanced ferroptosis by ALOXE3 upregulation
Figure 6A displays the dose-dependent increase in the level of ALOXE3 mRNA under TalaA, which was consistent with the obtained RNA-Seq results. Western blotting data confirmed that TalaA elevated the protein level of ALOXE3 in SW480 cells (Fig. 6B). To investigate the role of ALOXE3 in TalaA-induced ferroptosis, we knocked down ALOXE3 in SW480 cells (Fig. 6C, D); these cells showed a lower degree of destruction upon TalaA treatment than the wild-type cells (Fig. 6E). That is to say, the TalaA-induced perforation of the cell membrane was alleviated by the ALOXE3 knockdown. Furthermore, upon knockdown of ALOXE3, the extent of lipid peroxidation caused by the same concentration of TalaA was markedly reduced (Fig. 6F). Then, the concentration-dependence curve of the TalaA-inhibited CRC cell growth was analyzed. It is visible from Fig. 6G that the quantification curve of TalaA-caused cell growth inhibition shifted to the right because of the ALOXE3 knockdown, which shows the importance of ALOXE3 as a ferroptosis accelerator. TalaA triggered ferroptosis by upregulation of ALOXE3, which increases lipid peroxidation, the critical trigger for ferroptosis.
TalaA suppressed xenograft tumor growth in vivo
To detect the antitumor effect of TalaA, Balb/c nude mice were inoculated with HCT116 cancer cells. Next, the mice were intraperitoneally injected with TalaA. As seen in Fig. 7A, the tumor growth speed decreased after the TalaA injections were administered. The final tumor weight in the TalaA treatment group was significantly lower than that in the control group (Fig. 7B). Conversely, the treatment with TalaA affected neither the mice's body weight (Fig. 7C, D) nor routine blood indexes (Fig. S8), evidencing the low toxicity and side effects of TalaA. Figure 7E illustrates the histopathological staining results of our in vivo experiment, in which TalaA decreased the Ki67 level in the xenograft tumors, meaning that TalaA was able to retard tumor growth. Moreover, the IHC staining results showed that TalaA treatment decreased the level of the ferroptosis-related molecule SLC7A11 but increased that of HMOX1, which is consistent with the results obtained in our cell experiments. The H&E staining results confirmed that TalaA did not lead to histomorphological alterations in the mice's liver and kidney.

Discussion
Different pathways of cell death exist, including apoptosis, autophagy, and necrosis 27. Cell apoptosis induction is one of the most important therapeutic approaches for the treatment of tumors, especially in chemotherapy 16,28. Caspase-based apoptosis has long been considered the main form of regulated cell death and has been widely used for the development of anticancer drugs. However, treatment outcomes are usually unsatisfactory due to the acquired resistance of cancer cells to apoptosis 29. In clinical cases, the overexpression of anti-apoptotic molecules diminished the positive therapeutic outcomes against malignant cells and even aggravated the disease 30. In recent years, the traditional understanding of regulated cell death has been challenged by the discovery of a novel cell death pathway that is distinct from apoptosis, autophagy, and necrosis 17. Although cell ferroptosis can be distinguished from apoptosis in many ways, two major differences can be pointed out: (1) ferroptosis is directly or indirectly caused by an iron-death initiator, and lipid peroxide (LPO) severely damages cell integrity and structure 17; (2) ferroptosis bypasses apoptosis inhibition, avoiding the induction of membrane-specific proteins (such as P-glycoprotein and the multidrug-resistance-related protein family) related to multidrug resistance, which may provide novel insights into the development of chemoresistant tumor therapy 31.

Fig. 3 TalaA induced ferroptosis in CRC cells. A Transmission electron microscopy was used to observe the microscopic substructure of cells: SW480 cells were treated with 5.0 μM TalaA for 24 h and fixed with 2.5% glutaraldehyde. Photos of the fixed cells were taken using a transmission electron microscope (Hitachi HT7800, Japan) at different magnifications (shown in the pictures). The green arrows indicate mitochondria and the red arrows the membrane. After treatment with TalaA, the mitochondria are wrinkled, with the internal cristae disappearing, and the cell membrane is broken. B SW480 cells pretreated with 0.1 μM ferrostatin-1 were co-incubated with 10.0 μM TalaA for 12 h; the cellular morphology was then recorded by microscope (cells with apparently perforated membranes are marked with yellow arrows). C SW480 cells pretreated with 0-0.5 μM ferrostatin-1 were co-incubated with 10 μM TalaA for 24 h, and the cells were tested with the CCK8 kit. Error bars mean SD, N = 3 independent repeats. p values were calculated using the two-tailed unpaired Student's t-test; * means p < 0.05 and *** means p < 0.001 versus TalaA treatment. D Lipid peroxidation was detected with a cell-based lipid peroxidation assay kit. The lipid peroxidation sensor changes its fluorescence from red to green upon peroxidation by ROS in cells. The stained cells were photographed with a fluorescence microscope. E The ratio of green fluorescence to red fluorescence was calculated with ImageJ software to show the degree of lipid peroxidation (a scripted version of this ratio computation is sketched after this caption). F Comparison of the anticancer effects of erastin and TalaA on colon cancer cells. Colorectal cancer SW480 cells were treated with different concentrations of TalaA and erastin, respectively, for 24 h, and the relative cell activity was detected with the CCK8 kit. Red points represent the TalaA treatment group, and blue triangles the erastin treatment group. For each concentration point, three repeats were performed. G SW480 cells were co-incubated with 0, 7.5, and 15 μM TalaA or erastin, respectively. After 48 h, pictures of the cultured cells were taken with a phase contrast microscope. To compare the morphological alterations, the photos of the 15 μM TalaA- and erastin-treated SW480 cells were magnified. The red arrows indicate dead cells with obvious morphological alteration.
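The green/red ratio described in caption E above is straightforward to reproduce outside ImageJ with a few lines of array arithmetic. The sketch below assumes the micrographs are available as RGB image files readable by the imageio library; the file names and the background threshold are hypothetical placeholders.

```python
import imageio.v3 as iio

def green_red_ratio(path, min_signal=10):
    """Mean green/red intensity ratio over pixels with signal above an
    assumed background level. For a ratiometric sensor that shifts from
    red to green upon oxidation, higher values mean more peroxidation."""
    img = iio.imread(path).astype(float)       # H x W x 3 RGB array
    red, green = img[..., 0], img[..., 1]
    mask = (red + green) > min_signal          # crude background mask
    return green[mask].sum() / max(red[mask].sum(), 1e-9)

# Hypothetical file names for control and treated fields of view.
for label, path in [("control", "sw480_ctrl.png"), ("TalaA", "sw480_talaa.png")]:
    print(label, f"green/red ~ {green_red_ratio(path):.2f}")
```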
Moreover, it is worth noting that many mesenchymal cancer cells, which are prone to metastasis and are usually resistant to various treatments, are sensitive to ferroptosis 32,33. Therefore, achieving cancer cell death via ferroptosis induction could be a novel strategy for metastatic cancer treatment. The significance of ferroptosis is remarkable because iron serves as both an acceptor and a donor of electrons; it is not only a necessary nutrient, but also a toxin in excess; it is not only a motility factor of oxidative stress, but also a braking factor of oxidative stress 34. Homeostasis imbalance of iron not only leads to oxidative DNA damage and increased tumorigenesis, but also contributes to cancer cell death through the process of iron-induced cell death 35. Despite the infancy of ferroptosis-related anticancer research, its importance and potential for clinical treatment are increasingly prominent. In this study, we discovered that TalaA can kill CRC cells by ferroptosis induction. The ferroptosis inhibitor did not completely attenuate the TalaA-caused cell death, which suggests that we should not exclude the likelihood that TalaA may induce cell death through other mechanisms. Nevertheless, both the lipid peroxidation inhibitor ferrostatin-1 and the iron-chelating reagent deferiprone alleviated TalaA-induced cell death, showing that ferroptosis is the main mechanism by which TalaA induces cell death. It is noteworthy that the capability of TalaA to induce ferroptosis is stronger than that of erastin, a well-known specific ferroptosis inducer 36. Moreover, the IC50 of TalaA was much lower than that of erastin in CRC cells, indicating that TalaA would have greater potential as a therapeutic agent in cancer treatment than erastin. In the present study, TalaA treatment markedly decreased the level of an important channel protein in CRC cells, SLC7A11.
Consistently, several recent studies reported that SLC7A11 is closely negatively correlated with ferroptosis 37, and that suppression of SLC7A11 induces ferroptosis 38. SLC7A11 is a trans-membrane amino-acid transporter that carries extracellular cystine, which is essential for glutathione synthesis, into cells 39. In recent years, this nutrient transporter has been linked to the occurrence of a new form of iron-dependent cell death caused by excessive iron-dependent cellular accumulation of LPOs 19. Since cystine is an essential biosynthetic precursor of glutathione, cystine depletion or blockage of SLC7A11-mediated cystine transport can lead to cell ferroptosis. Interestingly, besides SLC7A11, we also found that a glutathione synthetase gene, GSS, was significantly downregulated in TalaA-treated CRC cells. Our discoveries evidenced that TalaA is a potent blocker of the SLC7A11-GSS-GSH axis, whose suppression is positively associated with ferroptosis. Other research has also been focused on the use of SLC7A11 as a target molecule for ferroptosis promotion 40. ALOXE3, the gene encoding arachidonate lipoxygenase 3, is a representative of the lipoxygenase family, which catalyzes the oxidation of arachidonic acid-derived compounds 41. In our study, both the RNA-Seq and the RT-qPCR results showed that TalaA upregulated ALOXE3 expression, which indicates that TalaA might trigger the peroxidation of polyunsaturated fatty acids via elevation of arachidonate lipoxygenase 3 levels. Lipid peroxidation has been previously reported to be a key step in ferroptosis 42. We established that TalaA upregulated the expression of ALOXE3 and increased lipid peroxidation, which in turn enhanced ferroptosis. We also found that TalaA considerably increased the expression of the gene encoding heme oxygenase (HMOX1), an essential enzyme in heme catabolism which cleaves heme to generate biliverdin, subsequently converted to bilirubin and carbon monoxide by biliverdin reductase 43. Fang et al. reported that HMOX1 upregulation led to heme degradation and the release of free iron, which can accumulate in mitochondria and cause lipid peroxidation 44. Recently, a noncanonical ferroptosis-induction function of HMOX1 has been described: HMOX1 activation can lead to heme degradation, in turn releasing labile Fe(II), through direct targeting of Kelch-like ECH-associated protein 1 (KEAP1), which can trigger ferroptosis 45. The findings of our study indicate that TalaA killed CRC cells by triggering ferroptosis via acceleration of lipid peroxidation. In normal cells, where a redox balance exists, ROS levels are usually low, and a variety of antioxidant substances are available to counteract the damaging effects caused by ROS. However, due to the high metabolic activity in tumor cells, excessive ROS is generated; tumor cells are nevertheless able to adjust their signaling channels to adapt to the high ROS level, including raising the expression of antioxidant molecules (such as SOD, GSH, and thioredoxin) to remove excessive ROS, thus ensuring the proliferation and survival of the tumor cells. Notably, when the ROS level continues to rise, a breakthrough is achieved upon reaching a certain threshold; thereafter, excessive oxidative stress can cause irreparable cell damage or trigger programmed cell death (e.g., ferroptosis) 25. In other words, the baseline ROS level of tumor cells is already high, and a further increase, or an impaired ability to counteract ROS action, can result in tumor cell death (Fig. 8).
A SLC7A11 mRNA was decreased by TalaA dose-dependently; *p < 0.05, **p < 0.01, N = 3 independent repeats. B SLC7A11 protein level was decreased by TalaA in a dose-dependent manner. C The SLC7A11 protein level was increased by transfection of an SLC7A11 overexpression plasmid (SLC7A11 OVX). D The relative cell activities of SLC7A11-overexpressing cells and control cells after treatment with 5.0 μM TalaA. **p < 0.01, N = 3 independent repeats. E The mRNA expression was suppressed by SLC7A11-specific lenti-shRNA; **p < 0.01 versus shCon, N = 3 independent repeats. F The SLC7A11 protein level was decreased by lenti-shSLC7A11. G Total glutathione and reduced glutathione were decreased when SLC7A11 was knocked down; **p < 0.01, N = 3 independent repeats. H 5.0 μM TalaA induced slight cell-membrane damage in wild-type SW480 cells, whereas the same concentration of TalaA induced severe membrane damage in SLC7A11-knockdown SW480 cells. The yellow arrows indicate membrane-damaged cells. I SLC7A11-knockdown SW480 cells treated with 5.0 μM TalaA had lower cell activity than wild-type SW480 cells under the same TalaA treatment; **p < 0.01, N = 3 independent repeats. J Scatter plot of TalaA-inhibited cell growth. Blue points represent wild-type SW480 cells, and red squares represent SLC7A11-knockdown SW480 cells. For each concentration point, three repeats were performed. K SLC7A11-knockdown SW480 and wild-type SW480 cells were treated with 5.0 μM TalaA with or without ferrostatin-1. The cell activity of SW480 cells was detected with a CCK8 kit; **p < 0.01, N = 3 independent repeats. Fig. 6 TalaA enhanced ferroptosis in CRC cells by up-regulation of ALOXE3. A ALOXE3 mRNA was increased by TalaA dose-dependently; *p < 0.05, **p < 0.01, N = 3 independent repeats. B The protein level of ALOXE3 was elevated by TalaA in a dose-dependent manner. C The mRNA level was decreased via lenti-shALOXE3 infection; **p < 0.01 versus shCon, N = 3 independent repeats. D The ALOXE3 protein level was reduced by lenti-shALOXE3. E 10 μM TalaA caused severe cell-membrane destruction in wild-type SW480 cells, whereas the same concentration of TalaA led to only mild membrane damage in ALOXE3-knockdown SW480 cells. The yellow arrows indicate broken cells. F The lipid peroxidation was detected by a cell-based lipid peroxidation assay kit. The stained cells were recorded with a fluorescence microscope. When the lipids were peroxidized, the fluorescence shifted from red to green. G The cell-activity curve shifted right when ALOXE3 was knocked down. Black points represent wild-type SW480 cells, and purple triangles represent ALOXE3-knockdown SW480 cells. For each concentration point, three repeats were performed. A further increase in ROS, or an impaired ability to counteract ROS, can result in tumor cell death (Fig. 8). The development of antitumor drugs based on the aforementioned tumor cell features would represent an effective strategy for cancer therapy. The natural compound TalaA does exactly that: on one hand, it elevates the ROS level of cancer cells; on the other, the altered expression of ferroptosis-related molecules accelerates cancer cell death via ferroptosis induction. Moreover, the in vivo experiments in this study showed that TalaA neither affected mouse body weight or routine blood indices, nor damaged liver and kidney tissues in mice. Therefore, the potential of this valuable compound for the development of an anticancer drug is immense.
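The potency comparisons above rest on IC50 estimates from CCK8 viability curves. As a minimal sketch of how such an estimate is typically obtained (this is an illustration, not the authors' analysis pipeline), the following fits a four-parameter Hill dose-response model; the concentrations and viability fractions are hypothetical placeholders, not data from this study.

```python
# Hypothetical sketch: estimating an IC50 from CCK8 viability data with a
# four-parameter Hill (log-logistic) dose-response model. Concentrations and
# viability fractions below are illustrative, not measurements from this study.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, top, bottom, ic50, slope):
    """Four-parameter dose-response curve; conc and ic50 share the same unit."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** slope)

conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])            # uM, illustrative
viability = np.array([0.98, 0.95, 0.80, 0.55, 0.30, 0.15, 0.08])  # fraction of control

params, _ = curve_fit(hill, conc, viability, p0=[1.0, 0.0, 4.0, 1.0])
top, bottom, ic50, slope = params
print(f"Estimated IC50 = {ic50:.2f} uM (Hill slope = {slope:.2f})")
```

Fitting the same model separately to the TalaA and erastin dose series and comparing the two IC50 estimates reproduces the kind of potency comparison reported above.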
Conclusion The present study not only discovered a new function of TalaA, namely its ability to kill tumor cells via ferroptosis induction, but also elucidated its molecular pharmacological mechanism: TalaA upregulates molecules that promote lipid peroxidation (such as ALOXE3 and HMOX1) and suppresses the expression of antioxidant-related molecules (such as SLC7A11 and GSS), thereby causing cancer cell death by ferroptosis induction (Fig. 8). Nevertheless, we do not rule out the possibility of other molecular pharmacological mechanisms for achieving cancer cell death. Therefore, TalaA is a potential drug candidate that can not only take advantage of the high ROS level to kill cancer cells, but may also provide targeted therapy for cancer types with high expression of anti-oxidation molecules such as SLC7A11. This study is of great significance for the development of new anticancer drugs via ferroptosis induction. Conflict of interest The authors declare that they have no conflict of interest. Fig. 7 TalaA inhibited xenografted tumor growth in vivo. A Tumor growth curves were recorded. Black points represent the blank control group (corn oil), and red squares the TalaA treatment group (six mice per group). B The final tumor weight was compared between the two groups: ***p < 0.001 indicates a significant difference. C Mouse body weight was recorded. Black points represent the blank control group and red squares the TalaA treatment group. D The final body weight was compared between the two groups: no significant difference between the two groups; "ns" represents no significant difference. E Pathological staining for xenografted tumors of the above two groups: H&E staining photos and IHC staining photos for Ki67, SLC7A11, and HMOX1 for both the control group and the TalaA treatment group. F The mouse liver and kidney were fixed in formalin and stained with H&E dye for both the control group and the TalaA treatment group. Fig. 8 In normal healthy cells, the ROS level is low and intracellular redox homeostasis is maintained; in cancer cells, due to vigorous cell metabolism and proliferation, the ROS level is much higher. However, tumor cells develop an antioxidant system against ROS, so that they are not harmed by ROS but instead utilize ROS as a positive regulatory signal for enhanced survival and proliferation. When the ROS level continues to rise beyond the tolerance threshold of tumor cells, a programmed death (such as ferroptosis) is triggered. TalaA was able to strongly induce ferroptosis at least via the following mechanisms: (1) TalaA elevates the ROS level in colorectal cells; (2) TalaA downregulates SLC7A11 and GSS expression, which suppresses the synthesis of the important antioxidant molecule GSH and in turn enhances ferroptosis; (3) oxidation of arachidonic acid is an important cause of ferroptosis, and TalaA increases the arachidonic acid oxygenase ALOXE3, which accelerates ferroptosis; and (4) TalaA causes upregulation of HMOX1, which leads to the degradation of heme and the release of free iron, accumulating in mitochondria and giving rise to lipid peroxidation. Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Bisphenols as Environmental Triggers of Thyroid Dysfunction: Clues and Evidence. Bisphenols (BPs), and especially bisphenol A (BPA), are known endocrine disruptors (EDCs), capable of interfering with estrogen and androgen activities, as well as being suspected of other adverse health outcomes. Given the crucial role of thyroid hormones and the increasing incidence of thyroid carcinoma in the last few decades, this review analyzes the effects of BPs on the thyroid, considering original research in vitro, in vivo, and in humans published from January 2000 to October 2019. Both in vitro and in vivo studies reported the ability of BPs to disrupt thyroid function through multiple mechanisms. The antagonism with thyroid receptors (TRs), which affects TR-mediated transcriptional activity, the direct action of BPs on gene expression at the thyroid and the pituitary level, the competitive binding with thyroid transport proteins, and the induction of toxicity in several cell lines are likely the main mechanisms leading to thyroid dysfunction. In humans, results are more contradictory, though some evidence suggests the potential of BPs to increase the risk of thyroid nodules. A standardized methodology in toxicological studies and prospective epidemiological studies with individual exposure assessments are warranted to evaluate the pathophysiology resulting in the damage and to establish the temporal relationship between markers of exposure and long-term effects. Introduction Thyroid hormones (THs) play a critical role in the regulation of physical development, somatic growth, metabolism, and energy provision and are essential for normal brain development in humans [1]. Thus, any interference with TH status and signaling during development may have an impact on physical health and can be associated with neurological deficits and even irreversible mental retardation in the case of severe maternal TH deficiency [2]. Meanwhile, thyroid cancer (TC) incidence rates have been rising in many western countries, including the United States, where the incidence increased by 3.6% per year from 1974 to 2013 [3]. TC is the most common endocrine malignancy, and by 2030 it is estimated to become the fourth leading cancer diagnosis in the United States [4]. Papillary thyroid cancer (PTC), in particular, is the most frequent histotype, with a typically excellent prognosis, accounting for 70% to 90% of well-differentiated thyroid malignancies; although overdiagnosis of small tumors is thought to contribute significantly to the increase in incidence, PTC incidence has significantly increased for every stage and tumor size category [3]. The etiology of TC is multifactorial, and the proposed risk factors in the literature include sex, family history of TC, radiation exposure, excess weight, iodine intake, and dietary habits [5]. Although the thyroid is characterized by a low proliferation index, it is particularly susceptible to environmental chemicals that may contribute to the increasing incidence of TC [6]. Table 1. Concentration of bisphenols in the environment and human body, and estimated exposure by age groups to bisphenol A (a), principal bisphenol A substitutes (b), and halogenated derivatives of bisphenol A (c). Thyroid Disrupting Properties of BPs: in Vitro Studies The biological function of the thyroid hormone triiodothyronine (T3) is generally mediated by the nuclear receptors TRα1, TRβ1, and TRβ2, which are conserved in all vertebrates.
T3 binds to the TRs with similar affinities, mediating TH-regulated transcription at different levels in different tissues [60]. TRα1 is the predominant subtype in cardiac muscle and bone, TRβ1 is the predominant subtype in kidney and liver, while TRβ2 is more abundantly expressed in the hypothalamus and in the pituitary gland and has a critical role in the regulation of the hypothalamic-pituitary-thyroid (HPT) axis [61]. TRs bind to T3 response elements in DNA as homodimers or as heterodimers with the retinoid X receptor, and they can regulate transcription both in the absence and in the presence of ligands [62]. On positively regulated genes, the unliganded TRs bind to corepressor proteins such as the silencing mediator of retinoid and thyroid hormone receptor (SMRT) or the nuclear receptor corepressor (N-CoR), resulting in the suppression of transcription [63]. The binding of T3 to TRs leads to a dissociation of the corepressors and the subsequent recruitment of coactivator proteins, such as those of the p160/SRC (steroid receptor coactivator) family, including SRC1, SRC2, and SRC3, thus promoting activation of transcription [63]. In vitro models have tested and verified the ability of BPs to disturb thyroid function through multiple mechanisms that may produce different consequences depending on the heterogeneity of experimental conditions among studies, such as the chemical tested, the concentrations used, and the presence/absence of T3 or T3 antagonists. BPs were reported to exert numerous effects on the thyroid, and each affected pathway may perturb thyroid hormone levels, resulting in a dysregulation of thyroid function. The pathways are not necessarily inter-connected, but there is some evidence that BPs may affect the gland and its function at multiple levels, as reported in the following paragraphs. Interference with T3 Transcriptional Activity Numerous studies have evaluated the ability of BPs to suppress hormonal transcriptional activities mediated by TRα1 and TRβ1 in competitive binding and transient expression assays (Table 2). Whereas BPA alone did not induce visible effects on T3-induced transcription [2,73,75,76], in the presence of physiological concentrations of T3, low-dose BPA enhanced the interaction of TR with N-CoR by directly binding to TR [2]. Cell Proliferation The rat tumor pituitary cell line GH3 has been frequently employed as a standard pituitary cell model for assessing TH effects [81]. Indeed, cell proliferation and growth hormone (GH) secretion primarily depend on THs [81] and involve TR-mediated mechanisms, specifically the induction of gene expression [82]. A series of investigations assessed the agonistic and antagonistic properties of BPs on GH3 cell growth both in the absence and in the presence of T3 (Table 2). BPA, and in particular BPA derivatives, generally promoted GH3 cell proliferation and GH release in the concentration range of 10−6-10−4 M [70,81,83]. In some studies, the agonistic activity was detected exclusively in the presence of T3 [82,84], whereas in others BPA and its substitutes inhibited cell growth in the presence of T3, and the TH-antagonistic effects appeared to depend on the tested dose and the time of exposure [80,85]. The effects of BPs on cell growth were antagonized by amiodarone, a known TR antagonist [80]. Nonetheless, amiodarone was also reported to act as a slight agonist at low concentrations and as an antagonist at increasing doses, and BPA and its halogenated derivatives exhibited comparable dose-response curves [76].
In PTC cells, BPA had proliferative effects similar to those of E2 [86], and consistent with this finding, co-exposure to E2 potentiated the increase in GH3 cell proliferation (from 190% to 252% after 96 h) induced by BPA and BPAF [85]. In contrast, TBBPA could not counteract the inhibitory effect of fulvestrant, a strong antiestrogen, on cell growth [81]. Cell growth was further antagonized by U0126, an inhibitor of MEK, the kinase responsible for the activation of ERK in the Raf-MEK-ERK pathway in mammalian cells [87]. Similarly, TBBPA at concentrations in the lower micromolar range caused arrest of cell growth in the G1 or G2 phase, depending on the duration and intensity of the treatment and on cell-specific and dose-dependent modulations of the Raf-MEK-ERK pathway [87]. BPA may also exert disrupting effects on TH-mediated transcription by interfering with a different, non-genomic mechanism mediated by integrin αvβ3, a heterodimeric transmembrane glycoprotein [77]. In normal conditions, T3 and thyroxine (T4) induce serine phosphorylation of TR-β1 by binding to αvβ3 and activating mitogen-activated protein kinase (MAPK) and/or c-Src/PI3K pathways [78], which determines the dissociation of N-CoR or SMRT from TR-β1 and the consequent activation of transcription. The competitive binding of BPA to αvβ3 antagonizes the serine phosphorylation of TR-β1, leading to the recruitment of N-CoR/SMRT to TR-β1 and suppression of transcription [79].
Cytotoxicity MAPKs have an important role in cellular signaling pathways, and the kinases JNKs/SAPKs and p38 MAPKs are often activated by cellular stresses and are thus primarily linked to cytokine biosynthesis and induction of apoptosis [88]. Thus, any interference of exogenous chemicals with kinases and phosphatases involved in cellular signaling processes can result in possible cytotoxic effects, including cell death [87]. Similar to cell proliferation, cell viability has been evaluated in cell lines exposed to BPs (Table 2). Cytotoxicity was observed after exposure to BPA and its halogenated derivatives in the concentration range of 10−5-10−4 M, alone and/or with T3 [10,73,76,82,87]. TBBPA was found to produce cytotoxicity 100 times higher than BPA [75], although in other cell models comparable doses of BPA, TBBPA, and TCBPA did not cause changes in cell viability [65,70,89]. Competitive Binding with Thyroid Hormone Binding Proteins One of the possible mechanisms by which BPs disrupt TH homeostasis is competitive binding with serum transport proteins, due to their structural similarity to T4 and T3. THs mainly bind to three transport proteins in human serum, namely thyroxine-binding globulin (TBG), which is responsible for 75% of the specific T4 binding activity, transthyretin (TTR), and human serum albumin [90]. A few studies tested the capability of these chemicals to compete with THs for binding to TTR, which in non-mammalian vertebrates exhibits a higher affinity for T3 than for T4, whereas in human plasma it is responsible for only 10% to 15% of the TH transport [91] (Table 2). Meerts et al. [92] found no TTR binding for 17 polybrominated diphenyl ethers at the maximum concentrations tested, confirming that hydroxylation at the para position with at least one adjacent halogen substituent could represent a prerequisite for TTR binding. Indeed, TBBPA was the most potent competitor among the phenolic compounds tested, binding to TTR from 1.6-fold [84] to 10.6-fold [92] more strongly than the natural ligand T4. Moreover, the affinity of TBBPA for TTR was three times greater than that of BPA [70], in line with the higher binding affinity of halogenated derivatives for TRs compared with BPA [10,71]. The hydroxylated derivatives of BPA also exhibited a strong affinity for TBG, as elucidated in a transport protein-based biosensor assay [93]. Using a fluorescent probe, Cao et al. observed that the affinity of BPA for TTR and TBG was 300- to 2666-fold weaker than that of T4; hence, the current levels of BPA in humans are unable to interfere with T4 serum transport [90]. Perturbation of Thyroid Hormone Uptake Thyroid hormone uptake into target cells is controlled by membrane-bound transporters, such as monocarboxylate transporter (MCT) 8, MCT10, and multiple members of the Na-independent organic anion transport protein (OATP) family [94]. OATP1C1, in particular, shows a high degree of tissue selectivity, being expressed predominantly in brain and testis, with a high preference for T4 and reverse T3 as ligands [95], and it facilitates the transport of T4 across the blood-brain barrier [96]. In different species, MCT expression has been detected in numerous tissues including the brain, wherein MCT8 is responsible for the neuronal uptake of T3 [95].
Mutations in the MCT8 gene cause a severe X-linked psychomotor retardation associated with highly elevated serum T3 levels and decreased T4 concentrations, whereas thyroid-stimulating hormone (TSH) values remain in the normal or slightly elevated range [97]. In a recent study performed in cells overexpressing the human MCT8 gene, among several common environmental contaminants classified as flame retardants, pesticides, plasticizers, and others suspected to disrupt TH signaling, only BPA was observed to reduce T3 uptake to around 60% and 40% of the control at concentrations (125 µM) below those that reduced cell viability below 80% [94]. This finding is consistent with an earlier study that detected a slight inhibition of the T3 transport capability of MCT8 by BPA, though at concentrations likely higher than those occurring in vivo [98]. Dysregulation of Gene Expression In addition to the ability to interfere with TR signaling through direct binding to the receptor, a number of studies observed that BPs may directly affect thyroid gene expression (Table 2). At doses as low as 10−6 M, BPA and its analogues induced expression of transcripts of genes implicated in thyroid cell activity and proliferation (e.g., the thyroid stimulating hormone receptor, Tsh-r), TH biosynthesis (e.g., Tg, the sodium iodide symporter Slc5a5 encoding NIS, and thyroid peroxidase, Tpo), and their transcription regulators (e.g., paired box 8 (Pax8), NK2 homeobox 1 (Nkx2-1), and forkhead box E1 (Foxe1)) by over 1.5-fold [89,99,100]. Conversely, BPA did not markedly affect transcriptional expression of Slc5a5, Nkx2-1, and Tpo but inhibited NIS-mediated iodide uptake [100]. BPA increased the expression of the Tg gene in the presence of increasing TSH amounts, suggesting a potency similar to that of TSH in enhancing Tg-promoter activity [94]. The authors also reported that two anti-estrogens, which alone induced the activity of the Tg promoter, were not able to enhance BPA activity on the Tg promoter, indicating that the effects triggered by BPA do not necessarily involve ER signaling [89]. BPA in the nanomolar range significantly impaired the transcriptome of thyroid cells in a time-dependent manner [101]. In fact, whereas short-term exposure to BPA did not cause any relevant transcriptomic changes, long-term exposure, though unable to exert visible damage on cells, determined a slight deregulation of many genes involved in cell proliferation/death, cancer, and DNA repair [101]. BPA inhibited the activities of DIO1 and DIO2 [102], and both BPA and TBBPA markedly dysregulated transcription of Dio3, which is responsible for the protection of tissues from TH excess and is the predominant deiodinase expressed in human placenta [103], and of hepatic phase II metabolizing genes (sulfotransferases (Sult1) and UDP-glucuronosyltransferases (Ugt)) [75]. TBBPA, but not BPA, increased expression of the Ttr gene [75], an action at the mRNA level that corroborates the competitive binding capability of TBBPA with TTR [70,84].
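For context on how fold-differences in binding affinity like those quoted in this section (TBBPA binding TTR 1.6- to 10.6-fold more strongly than T4; BPA 300- to 2666-fold more weakly) are typically expressed, the following is a minimal sketch of computing relative potency as a ratio of IC50 values from a competitive binding assay. All IC50 numbers below are hypothetical placeholders, not data from the cited studies.

```python
# Hypothetical sketch: relative TTR-binding potency of a competitor vs. T4,
# expressed as the fold-difference in IC50 from a competitive binding assay.
# IC50 values are illustrative placeholders, not data from the cited studies.

def relative_potency(ic50_t4_nM: float, ic50_competitor_nM: float) -> float:
    """Potency relative to T4: >1 means stronger binding than T4, <1 weaker."""
    return ic50_t4_nM / ic50_competitor_nM

ic50_t4 = 50.0        # nM, illustrative
ic50_tbbpa = 10.0     # nM, illustrative -> binds ~5-fold more strongly than T4
ic50_bpa = 50_000.0   # nM, illustrative -> binds ~1000-fold more weakly than T4

for name, ic50 in [("TBBPA", ic50_tbbpa), ("BPA", ic50_bpa)]:
    rp = relative_potency(ic50_t4, ic50)
    label = "stronger" if rp > 1 else "weaker"
    print(f"{name}: {max(rp, 1 / rp):.0f}-fold {label} TTR binding than T4")
```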
Thyroid Disrupting Properties of BPs: in Vivo Studies The in vivo effects of BP exposure on thyroid function/action are contradictory and difficult to compare, as a consequence of the scarce number of studies performed, especially in mammals, the different models, and the diversity of experimental conditions, i.e., the chemical used, the time and dose of treatments, and the outcomes assessed. Regarding the risk of thyroid cancer associated with BP exposure, the subject remains almost entirely unexplored. Rodents In rodents, most of the studies have been performed in pregnant females and have evaluated the variations of TH levels in mothers and pups following prenatal and/or lactational exposures (Table 3). In accordance with numerous in vitro studies, BPA can act as a selective TH antagonist on TRβ, inhibiting TH negative feedback. Indeed, Zoeller et al. [104] and Zhang et al. [105] observed a significant increase of serum T4 levels in pups of both sexes and in female adults, respectively, without any apparent interference with TSH release. In male adult rats, treatment with BPA led to an increase of T4 levels and a reduction of the T3/T4 ratio, suggesting that in exposed animals BPA may impair the peripheral conversion of T4 to T3 [102]. In other experiments, BPA exposure did not produce significant variations in plasma T4 levels [106-109] or, alternatively, the effects may not endure after BPA removal and metabolism [110-112]. It is still unclear whether exposure to BPA and its derivatives can cause hypothyroidism, due to limited evidence. A decrease in T4 levels was found in male and female adult rodents [110,113,114] and in rat pups of both sexes [114] or with a sex-specific effect [112,115]. The competition of BPs with TTR observed in vitro [70,84,92], by displacing a portion of serum T4 from TTR, could determine an increased rate of T4 metabolism and elimination and a consequent reduction of circulating T4 levels [110]. Perinatal or neonatal exposure to BPA was associated with a significant increase of TSH levels in juvenile males [116] and in females in estrus [117,118], accompanied by a significant increase of GH levels and an impaired sensitivity of the thyroid gland to TSH stimulation, respectively, both of which indicate an alteration of the HPT axis [117]. On the other hand, the reduction of serum T3 or T4 after TBBPA treatment induced feedback stimulation, as suggested by the increased pituitary weight [114], whereas in other studies it was insufficient to affect serum TSH or TH levels, thyroid histopathology, and thyroid weight [107,110]. Adult males treated with BPA showed decreased activity of hepatic DIO1, consistent with what has been reported in vitro [102]. Moreover, in female adult rats BPA lowered thyroid iodide uptake and thyroid peroxidase (TPO) activity, two essential steps in TH biosynthesis, probably due to an elevation of reactive oxygen species (ROS) production. Both NIS and TPO have been found to be sensitive to ROS [119,120], and in particular the decrease in TPO activity could be attributable to the oxidation of this enzyme [106]. Increased expression of pituitary Tshβ was reported in female rat neonates exposed to BPA [117], whereas Silva et al. did not find any significant reduction of Tshβ mRNA levels in treated female rats [106]. To date, it remains unclear whether BPA plays a role in the pathogenesis of thyroid carcinoma. Zhang et al. recently demonstrated that BPA could enhance the susceptibility to TC [105].
Rats pre-treated with N-bis(2-hydroxypropyl)nitrosamine, a drug stimulating thyroid proliferation and promoting a cancerous phenotype [121], and then exposed to BPA and excess iodine for 64 weeks exhibited a significant increase in the incidence of TC and thyroid hyperplasia lesions, as well as up-regulation of ERα in the hyperplasia lesions. The authors speculated that BPA could increase ERα expression in the thyroid, which possibly participated in the proliferation process [105]. Sheep Sheep are considered a more relevant model than rodents for evaluating fetal exposure to thyroid disruptors and their effects on mother/newborn thyroid function in humans, because of a similarity in the timing of thyroid ontogenesis [122]. In both species, thyroxine-binding globulin is the main blood transport protein for THs [123], and thyroid system maturation is qualitatively similar in sheep and human fetuses, although the total maturation time differs (165 days vs. 300 days) [122,124]. Two studies have investigated the relationship of BPA exposure with thyroid function (Table 3). Viguié et al. [122] reported that BPA exposure of pregnant ewes was associated with a transitory hypothyroxinemia of both mothers and their newborn lambs, with a significant reduction of both circulating total T4 (TT4) and free T4 (FT4), findings in agreement with rodent studies [111,112]. In a subsequent study, the authors confirmed alterations of gestational thyroid function, observing a significant reduction of FT4 and total T3 (TT3), but not TT4, in pregnant ewes treated with environmentally relevant BPA concentrations via subcutaneous and dietary routes of administration [123]. After subcutaneous administration, the maximum serum concentration of BPA obtained was significantly higher (0.4 nmol/mL vs. 0.1 nmol/mL) and more prolonged than after dietary administration [123]. Zebrafish Numerous studies have been published on the use of zebrafish (Danio rerio) to explore the effects of EDCs on the thyroid, due to several advantages: a short life cycle, high reproduction rates, real-time observation during the entire embryonic development, and high conservation of the molecular mechanisms regulating thyroid development relative to mammals [125,126]. The early life stages of fish, in particular, are acknowledged as highly sensitive to the effects of EDCs [127]. Consistent with results observed in vitro and in rodents, BPs may disturb TH homeostasis and gene expression in zebrafish embryos/larvae (Table 3). Positive [37,38,128] and negative [37,38,128-131] associations between exposure to BPs and T3 and/or T4 levels have been reported, depending on the chemical used, the dose tested, and the time of exposure. Reductions in T4 concentrations, when accompanied by higher TSH contents, may compensate for hypothyroidism in zebrafish larvae and stimulate TH synthesis [38,129]. Some experiments observed an interaction between TH levels and sex [129,131]. Tang et al. showed a reduction in whole-body TT4 and TT3 levels but no significant variation of the TT3/TT4 ratio, which indicates relatively normal TH homeostasis [130]. Similarly, the disrupting effects of BPs on thyroid gene expression vary according to the different experimental conditions, especially the duration of exposure [126]. Hence, transcription levels of genes implicated in thyroid cell function and proliferation (Tsh-r), TH activity (Trα, Trβ), and transport (Ttr) can be up- [37,68,126,130,132,133] or down-regulated [38,68,126,128,130,133].
The transcription of hematopoietically expressed homeobox (Hhex) was up-regulated in larval fish following exposure to BPA or BPF, although it is important to note that the Hhex gene is expressed in early life, contributing to differentiation and development of the thyroid gland as well as of other organs, such as the pancreas and liver [37]. Increased [89,128,129] or decreased [130,132,134,135] expression of Slc5a5, Tpo, Pax8, and Tg transcripts was dependent on the dose and window of exposure. Additionally, BPA and BPS appeared to interact with PAX8 and thyroid transcription factor 1 (TTF1) in silico [135]. Genes such as Tpo, Tg, and Slc5a5 have binding sites for PAX8 or TTF1 in their enhancer or promoter regions. Differences in the interactions between BPs and these transcription factors could be attributable to stimulation or inhibition at varying BP doses, producing as a final effect altered expression of the genes controlled by PAX8 and TTF1 [135]. Exposure to BPA and BPA analogues further induced transcription of genes involved in TH metabolism, i.e., Dio1 and/or Dio2 [37,129,132,134], which are implicated in the activation/inactivation of T4 and in the conversion of T4 to T3 in peripheral tissues, respectively [136,137], and Ugt1ab [37,38,129]. Notably, co-treatment with T3 appeared to reverse or eliminate the thyroid disrupting effects of TBBPA on TH levels and gene transcription in zebrafish larvae [128], whereas combined exposure to BPAF and sulfamethoxazole, an antibiotic used especially in aquaculture, produced more pronounced changes in transcription levels [134]. (Table 3 details the individual in vivo findings, including TH-level changes, TPO and NIS activity, ROS generation, pituitary and thyroid weight and histology, and ERα protein levels across the exposure groups.)
Thyroid Disrupting Properties of BPs: Human Studies Perturbations in TH parameters following exposure to BPA have been documented in humans, i.e., in the general population, pregnant women, or occupational settings, although the study designs, predominantly cross-sectional, do not allow the establishment of any causal relationship. Research has highlighted positive, negative, or null associations with T4 levels, whereas a few prospective birth cohort studies suggest that prenatal BPA exposure may modify normal serum TH concentrations in a sex-specific manner. Several investigations have demonstrated BPA-induced disruption of thyroid function through altered serum TSH levels. This effect could result from a direct action of BPA on the pituitary gland via the estrogen signaling pathway or, alternatively, from a transient increase of T3 or T4 production that could lead to a feedback mechanism and the subsequent inhibition of TSH release. Overall, discrepant results among studies may be attributable to BPA levels, time of exposure, iodine intake, differences in age, ethnicity, diet, socioeconomic status, and the methods used to determine THs, while at present the potential role of BPs in thyroid carcinogenesis in humans remains to be explored in depth (Table 4). It is noteworthy that the absence of adjustment for other confounding factors, such as co-exposure to other EDCs, makes the overall evaluation of thyroid dysfunction related to BP exposure complex. Furthermore, a comparison between effects observed in animal models and those reported in epidemiological studies is complicated by different serum T4 half-lives (12-24 h in rats vs. 5-9 days in humans), different metabolic pathways of BPs, and different doses of exposure, which in humans are more likely to be chronic and low level [138]. In agreement with results from studies in animals [104,105,108], exposure to BPA led to TSH release/suppression independent of alterations in circulating TH levels [143,145,147,148,150,152] or, less frequently, was associated with variations of serum T4 levels [139,149]. There was no association of BPA with hypothyroidism in Japanese women with a history of recurrent miscarriages [151], nor any significant relationship between serum TBBPA and TH levels in Korean infants with congenital hypothyroidism [140]. Conversely, middle-aged and elderly Chinese subjects with overt or subclinical hyperthyroidism had higher urinary BPA than euthyroid subjects [151], and an increased content of urinary BPA was also observed in obese adults undergoing a diet program or bariatric surgery compared to lean controls, probably due to differences in food intake [152]. Sex-related differences in the relationship between BPA and THs were reported both in the general population [142] and in newborns [143,148], coherent with studies performed in rats [112,115], and possibly attributable to a less efficient ability to metabolize BPA, i.e., a reduced expression of uridine diphosphate-glucuronosyltransferase 2B1 in male compared to female livers [153], or to a different androgen-related metabolism of BPA [154]. The interactions between BPA and THs during pregnancy and fetal development have been studied recently. The association between BPA and TSH levels in newborns was stronger when the time elapsed between the two measurements was shorter [143,148], suggesting that specific windows of exposure may influence susceptibility to BPA or, alternatively, that a transient effect on the HPT axis may occur, as shown in rodents [111,112].
However, the inverse BPA-TSH association in pregnant women detected through repeated measures, as well as analyses stratified by visit, could indicate the absence of a specific window of vulnerability [139]. Association with Thyroid Diseases The influence of BPA on thyroid autoimmunity is controversial (Table 4). Whereas urinary BPA concentration was associated with variations of TH levels both in children and in adults of both sexes, independent of serum thyroglobulin antibodies (TgAb) and thyroid peroxidase antibodies (TPOAb) [149,155], another study found a positive relationship between serum BPA and TPOAb in men and women [156]. Moreover, a significant negative correlation of serum BPA with FT4 in male subjects was found only after exclusion of subjects with positive thyroid antibodies, suggesting that TgAb might be a mediator of the relationship between BPA and FT4 [144]. Kim and Oh reported a slight positive correlation between serum TBBPA and thyroid-stimulating hormone receptor antibodies, indicative of metabolic diseases, in mothers of infants with congenital hypothyroidism, suggesting that brominated derivatives of BPA might affect thyroid function status [140]. Recent investigations have explored the role of BPs as risk factors for the occurrence of thyroid nodules (TNs), palpably and/or ultrasonographically discrete lesions, distinct from the surrounding parenchyma of the thyroid gland, which are either benign or malignant [157]. A study reported no association between BPs and a higher risk of TNs in adult females [147], whereas Wang et al. observed an inverse correlation of urinary BPA with the risk of multiple TNs, but not of solitary TNs, in schoolchildren [158]. On the other hand, a significant, nearly linear association between BPA and a higher risk of TNs was observed exclusively among participants positive for TgAb and TPOAb [159]. Both urinary BPA and creatinine-adjusted BPA levels were higher in Chinese women with TNs than in those without TNs [159], which is consistent with the increased urinary BPA contents in patients with nodular goiter and PTC [160], while median urinary BPA levels were lower in the cases compared to controls among women from Cyprus and Romania [147]. In the study by Zhou et al. [160], which aimed to investigate the relationship of BPA and iodine exposure with nodular goiter and PTC, sex-specific associations were shown, with higher concentrations of BPA in women than in men affected by PTC and nodular goiter, and a lower urinary BPA content in the female PTC group than in the female nodular goiter group, probably due to differences in BPA elimination rates. Marotta et al. recently found a significant dose-independent correlation between BPAF and the risk of differentiated TC in subjects with TNs. Of note, this association was not related to an increase of TSH levels, indicating a potential direct mutagenic action of BPAF on thyroid cells [161]. Table 4. Summary of human studies on the association between bisphenol exposure and thyroid parameters. Discussion The thyroid is highly susceptible to environmental pollutants, which may act as either genotoxic or non-genotoxic carcinogens [147]. BPA is a widespread chemical detected in the urine of the majority of adult populations.
BPA analogues and derivatives are ubiquitous contaminants, measured in environmental and biological matrices, exhibiting a thyroid disrupting potential comparable to and even stronger than that of BPA. The mechanisms of BP action on THs are complex and remain to be fully elucidated. Overall, the in vitro studies demonstrate that BPs may bind to TRs, acting mainly as TR antagonists, but also as agonists or without exerting any effect on TH signaling. Similarly, different patterns of Trβ expression following BP exposure were observed in in vivo models. THs and their receptors regulate many important processes such as proliferation, differentiation, and apoptosis, and since TRβ is the major isoform in the thyroid, it can be hypothesized that disruption of its expression, leading to abnormalities in T3-induced transcriptional activity, could be involved in tumorigenesis [72,162]. In vivo experiments, supported by in vitro evidence, highlighted the ability of BPA and its substituting chemicals to affect thyroid follicular cell gene expression, particularly the transcriptional levels of genes encoding factors involved in TH synthesis (TPO, NIS, Tg, PAX8). Up-regulation of Tg and Slc5a5 transcript levels may promote thyroid development to compensate for depressed T4 concentrations, as also reported for polybrominated diphenyl ethers [163]. Transcriptional levels of deiodinases were more elevated in exposed zebrafish, in accordance with a study reporting that hypothyroidism caused by EDCs is associated with higher activity and expression of Dio2 [164]. On the other hand, a recent study reported a reduction of liver DIO1 activity in BPA-treated adult rats [102], a finding worthy of note, as decreased expression of DIO1 was observed in nearly all PTCs and is likely an early event in malignant TC [165]. In rodents and in two different types of cell lines, BPA up-regulated Pax8 transcripts, suggesting a role of BPA in increasing Pax8 expression independent of the cellular context [89]. PAX8 is a cell-lineage-specific transcription factor that has been mainly characterized in the thyroid gland for its role in thyrocyte differentiation, and it has been revealed as a potential diagnostic marker for several cancer sites, including TC [166]. TSH should represent an effective index of activation of the HPT axis for evaluating the central effects of xenobiotics on thyroid function, through measurement of TSH secretion or expression as a compensatory mechanism for maintaining TH homeostasis. Moreover, TSH levels are an independent predictor of thyroid nodule malignancy regardless of age, sex, or family history [6]. The increased expression of TSH and TSHβ observed in vivo was also reported after exposure to pesticides and halogenated chemicals in fish [163,167], suggesting that elevated production of TSH could represent one of the mechanisms of action of BPs, as already hypothesized for other EDCs [9]. In pituitary cells, BPA and E2 could further induce release of TSH, desensitizing the response to thyrotropin-releasing hormone from the hypothalamus [118]. In contrast, humans and pregnant ewes exhibited hypothyroxinemia after BPA exposure without significant modifications of TSH, whilst other epidemiological studies reported a decreased TSH production, probably as a consequence of a direct action of BPA on the pituitary gland through estrogen receptor signaling or of a feedback mechanism triggered by BPA-mediated perturbations of circulating T3 and T4 [139].
The frequency of chronic autoimmune Hashimoto's thyroiditis, the most common cause of primary hypothyroidism in western countries, has increased in the last two decades, and a variety of factors, such as tobacco smoking, iodine and selenium intake, and exposure to EDCs, may contribute to the elevated incidence by interacting with susceptibility genes [6]. Autoimmune thyroiditis may coexist with TC [168], and a recent meta-analysis demonstrated that this condition predisposes patients to the development of the papillary histotype [169]. Thyroid autoantibodies were reported to be positively associated with the level of urinary BPA; therefore, subjects positive for thyroid autoantibodies, characterized by immune dysfunction and a lower ability to eliminate damaged cells, are probably more vulnerable to the effects of BPs on TNs [159]. It cannot be excluded that exposure of thyrocytes to BPA involves hydrogen peroxide generation due to an elevated activity of a calcium-dependent NADPH oxidase (DUOX) [106]. TPO is a key enzyme in the synthesis of THs, catalyzing, through the cofactor H2O2, the iodination of tyrosyl residues in Tg [106]. Thus, the increased oxidative stress in the thyroid gland, which is related to a reduction of TPO activity, corroborates the negative correlation between TPO and DUOX2 in thyroid nodular lesions [170]. Furthermore, the oxidant/antioxidant balance was recently reported to be impaired in children affected by autoimmune thyroiditis, though it is unclear whether oxidative stress is the real cause of the disease or a likely consequence of exposure to EDCs, including BPA [155]. Finally, BPA is potentially linked to excess iodine in the pathogenesis of nodular goiter and TC in animals [105] and humans [160]. High urinary iodine is a risk factor for the development of benign TNs and PTC [171], being associated with reduced expression of NIS, an early abnormality in the pathway of thyroid cell transformation, and an increased occurrence of BRAF mutations [172], both of which are hallmarks of differentiated TC [173]. Conclusions This review aims to evaluate the extensive body of experimental and human studies that, in the last two decades, have attempted to explore the effects of BPA, its substitutes, and its halogenated derivatives on the thyroid at different levels. Despite the variety of approaches applied and the heterogeneous and sometimes even conflicting results from the examined studies, a series of interesting indications supports the hypothesis of a role of BPs in interfering with normal thyroid function. Although the toxicity pathways of BPs on the thyroid need to be further elucidated, BPA analogues and halogenated derivatives do not emerge as safer alternatives to BPA in terms of TH disruption. There is evidence that BPs alter circulating TH levels, inhibit TH negative feedback, act as selective TR antagonists, and interfere with the expression of genes involved in thyroid stimulation, TH synthesis, TH activity, and TH transport and metabolism. Several reported findings, mainly from experimental studies, are, however, rather inconsistent, while the association of BP exposure with thyroid cancer is so far almost unexplored. The lack of uniformity in experimental methodology, as well as substantial differences in the populations investigated in epidemiological studies, do not allow definitive conclusions to be drawn.
Standardized in vivo, in vitro, and in silico studies are recommended to evaluate the physiopathology of the damage associated with exposure to environmentally relevant levels of BPs, identify other potential molecular targets, and clarify the structure-activity relationship of BPs. At the same time, large population-based human studies with prospective designs and repeated measures of urinary BP concentrations and thyroid volume over time, as well as accurate control of confounders, should be performed to assess the temporal relationship between markers of exposure and long-term effects.
Inkjet printed IGZO memristors with volatile and non-volatile switching Solution-based memristors deposited by the inkjet printing technique have strong technological potential owing to their scalability, low cost, and environmentally friendlier processing, inkjet printing being an efficient technique with minimal material waste. Indium-gallium-zinc oxide (IGZO), an oxide semiconductor material, shows promising resistive switching properties. In this work, a printed Ag/IGZO/ITO memristor has been fabricated. The IGZO thickness influences both the memory window and the switching voltage of the devices. The devices show both volatile counter-eightwise (c8w) and non-volatile eightwise (8w) switching at low operating voltage. The 8w switching has SET and RESET voltages lower than 2 V and −5 V, respectively, a retention of up to 10^5 s and a memory window of up to 100, whereas the c8w switching shows volatile characteristics with a low threshold voltage (Vth < −0.65 V) and a characteristic time (τ) of 0.75 ± 0.12 ms when a single pulse of −0.65 V with a width of 0.1 ms is applied. The characteristic time alters depending on the number of pulses. These volatile characteristics allowed the devices to be tested on different 4-bit pulse sequences, as an initial proof of concept for temporal signal processing applications. Influence of IGZO thickness on resistive switching characteristics To transition into a sustainable economy, it is critical to minimize waste. In this work, to print a 1 mm² square, only 10 µL of IGZO precursor ink were consumed. In this section the focus is on the optimization of the IGZO printing and its impact on layer thickness and, consequently, on the resistive switching characteristics. First, to achieve a good-quality printed IGZO layer, the influence of UV surface treatment before printing each layer was studied. One of the issues in the uniformity of inkjet-printed films is the coffee ring effect. One approach to mitigate this effect is to heat the substrate during printing 49 , which was adopted in this work. Without any surface treatment, the profilometer data summarized in Fig. 1b and in Figure S1 show that the printed IGZO layer has a low coffee ring and good coverage, without deep valleys. It is found that, after drying, the first IGZO layer shrinks 25% in lateral dimensions, to 0.75 × 0.75 mm. Moreover, the IGZO thickness does not scale linearly with the number of layers. One printed layer has a thickness of 141 ± 42 nm, whereas 5 printed layers have a thickness of 572 ± 181 nm, about 4 times higher than a single layer. The atomic force microscopy (AFM) measurements, shown in Figure S2, are in agreement with the profilometer data. For 1 layer of IGZO, AFM shows an average thickness of 115 ± 23 nm with an average roughness of 20 ± 5 nm. The average roughness increases to a maximum of 45 nm for 5 printed layers of IGZO. The inset of Fig. 1b shows a microscope image of the printed Ag/IGZO/ITO memristors. The printing definition is good, with well-defined borders. On the other hand, when 15 min of UV treatment were applied before printing the IGZO layer, the film shows noticeable overspreading, thus reducing the thickness of the printed IGZO films. The thicknesses of 1 and 5 layers are 15 ± 10 nm and 319 ± 40 nm, respectively (Figure S3), respective decreases of 90% and 45% compared to the non-treated films.
Figure 1c shows the influence of the IGZO thickness on the device characteristics. Regardless of the thickness, the devices show gradual switching in both SET and RESET. Moreover, the devices become more resistive with increasing thickness (Figure S4). For an IGZO thickness between 50 and 620 nm, the average read current drops from 1.7 ± 0.2 mA to 0.15 ± 0.09 mA for the LRS and from 0.3 ± 0.1 mA to 0.7 ± 0.5 µA for the HRS (Fig. 1d). The HRS current decreases as the IGZO thickness increases. Also, the memory window, the ratio between the LRS and HRS currents, increases in an exponential trend from 5 up to 200 (Fig. 1e,h). Figure 2(a-d) shows the endurance curves of Ag/IGZO/ITO memristors with different IGZO thicknesses, with the respective current levels when reading at 100 mV shown in Fig. 2(e-h). The devices show stable switching with low dispersion. Both SET and RESET voltages increase with increasing IGZO thickness, from 1 to 2 V and from −1.5 V to −5 V for SET and RESET, respectively. There is a decrease in cycle-to-cycle variation with decreasing thickness. Moreover, the switching becomes more gradual with decreasing thickness. However, the trade-off is a decrease of the memory window. Also, the IGZO devices with the lowest thickness (50 nm) show an increase of the HRS current during cycling, thus gradually reducing the memory window during the endurance test. For the thicker devices, that trend in HRS current does not occur. Moreover, the existence of a secondary volatile switching with opposite polarity was found (Figure S5). To study the switching mechanism of the Ag/IGZO/ITO memristors, temperature measurements were performed in vacuum from 150 to 300 K. As shown in Fig. 3a, there are no significant changes in the LRS current between 150 K and 300 K, but the HRS displays a more pronounced change with temperature. Figure 3b,d show the best current-voltage and current-temperature fittings for the LRS. LRS charge transport is assumed to be controlled by variable range hopping (i.e., a temperature dependency on T^(-1/4)), which is usually reported for strongly disordered systems 50,51 . This is a bulk-limited model, indicating that defects in the IGZO play an important role. The slope of the double-logarithmic plot is 1.01, implying ohmic characteristics as the dominant conduction. Therefore, the origin of the defects is most probably related to the diffusion of silver ions during filament formation. At HRS, the characteristics show a good fit to Schottky emission (thermionic emission), with a good-quality fit when plotting ln(I/V) as a function of V^(1/2), also validated by the temperature-dependency fitting shown in Fig. 3e. Proof of concept of volatile and non-volatile switching characteristics From the results of the previous section, IGZO with a thickness of 50 nm was chosen for this proof of concept because it leads to more gradual switching and lower variability when cycling the devices. These characteristics are important for neuromorphic computing. The optimized fabricated memristors (the device structure and connection are depicted in Fig. 4a) show two distinctive I-V characteristics according to the endurance and current-time measurements. The distinct switching mechanisms are explained in the discussion section. There is a significant difference in current levels between the two switching modes (Fig. 4b), meaning that only a certain level of current defines the resistive switching mode.
By standard definition, the electroforming process is the one-time application of a high electric field, higher than the set voltage (Vf > Vset). In the presented devices, in both volatile and non-volatile modes, there is no significant difference between the initial SET and the other cycles. In this way, the device can be categorized as forming-free. In previous works 18,19,52,53 , IGZO memristors also presented forming-free performance. The devices in their pristine state have a low current (ranging from 10^-10 A to 10^-5 A) and a rectification of 1000 (Figure S6). Using the terminology originating from the works of Dittmann and Waser 54-56 , the switching polarity can be classified into counter-eightwise (c8w) and eightwise (8w) in relation to the active electrode. When the voltage is applied to the active electrode and the other electrode is grounded, the switching polarity is called c8w if the SET occurs at negative voltage and the RESET occurs at positive voltage. The c8w I-V curve (on a linear scale) has a drawing direction opposite to that of handwriting a (tilted) '8'. The opposite switching polarity is called 8w 55 . In this work the voltage is applied to the Ag electrode whereas the ITO electrode is grounded. Since Ag is a more reactive electrode than ITO, we considered Ag as the active electrode (AE). Therefore, the 8w switching has a non-volatile bipolar nature, whereas the c8w switching has volatile characteristics. In Fig. 4c, the current follows an 8w pinched hysteresis loop with bipolar non-volatile properties (the linear I-V characteristic is shown in Figure S1). The device is compliance-free, reaching currents up to 50 mA, with a rectification feature. The SET voltage is at 0.8 V, while the RESET voltage shows a higher variation, lying between −0.5 and −0.9 V. In the lower voltage regime, however, the direction of SET and RESET is reversed, demonstrating so-called c8w switching, and volatile resistive switching is obtained. The volatile behaviour shows very low cycle-to-cycle variability during endurance and a low threshold voltage between −0.2 V and −0.3 V (Fig. 4d), with a rectification ratio of 100, similar to our previous report 18 using a sputtered IGZO device. Only the 8w switching behaviour shows retention, as presented in Fig. 4e for 10^5 s. The short-term memory effect of the memristor in the volatile regime can be described by a time constant τ from an exponential decay function. Figure 4f depicts the decay curve after a −0.65 V pulse of 100 µs. The relaxation time, τ, is 0.75 ms. As a result, when programming the device, the device state depends not only on the programming pulse itself, but also on the number of pulses and the pulse intervals. To demonstrate the similarity between the dynamic memory retention of the device and that of human memory, single stimuli (−0.65 V with a duration of 0.1 ms) spaced by periods much larger than τ were applied (Fig. 5a). The results are described in Fig. 5b, where a good-quality fitting is also presented. Similarly to results from the literature 44 , the current decay follows a simple exponential decay function. Both the relaxation time constant, τ, and the initial current increase with an increasing number of pulses, suggesting that the dynamic retention can be increased by repeated stimulation. The τ increases from 0.75 ms to 11 ms (Fig. 5c) and the initial current increases from 1.8 µA to 2.5 µA.
The τ increases from 0.75 ms to 11 ms (Fig. 5c) and the initial current increases from 1.8 µA to 2.5 µA. Figure 5d shows the effect of the interval between pulses: after the stimulation, the longer the interval between pulses, the lower the read current. For a pulse interval of 0.1 ms, the current reaches 1.5 µA after 7 pulses, whereas for a pulse interval of 1 ms, the current only reaches 0.5 µA. Since temporal signal processing requires very short-term memory, volatile memristors like the one presented in this work are promising candidates. Figure 6a shows the response of the volatile-mode memristor to different temporal inputs. The "1" state corresponds to a pulse with −0.65 V amplitude and a width of 100 µs. The "0" state corresponds to the absence of pulses, i.e., 0 V amplitude for 1 s. There are 5 read pulses: one when the sequence initiates and then one after each state. Figure 6b shows that pulse widths of 10 µs, 50 µs and 100 µs can all be used to activate the device; the longer the pulse, the higher the corresponding read current. Also, the read current increases with the pulse amplitude (−0.35 V, −0.5 V, −0.65 V) (Fig. 6c). When a pulse is applied, the state of the memristor changes through an increase in its conductance, and if the pulse interval is short enough, the conductance accumulates. For long intervals the conductance decays to its resting state 43. Therefore, different temporal inputs lead to different states of the device, as illustrated by the [0110] and [0101] sequences, which use the same "0" and "1" pulse parameters depicted in Fig. 6a. There is a low variation of the read current over the cycles, and the trend is the desired one. The working devices are very consistent in terms of voltage operation (Figure S8a), and the same pulse scheme works for different devices, as shown in Figure S8b. However, they present some variability in the current state due to the presence of pinholes. We also note that reducing the size of the active region can lead to faster switching times; therefore, shorter and more intense pulses may induce faster switching. A toy model of this pulse-stream separation is sketched below.
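As promised above, the following toy model illustrates, under strong simplifying assumptions, why different pulse streams end in different read states: the device is treated as a leaky integrator with the measured τ, which is an idealisation rather than the device physics.

```python
# Hedged toy model of pattern separation in a volatile memristor:
# conductance jumps on each "1" pulse and decays with time constant tau.
import numpy as np

TAU = 0.75e-3     # s, relaxation time reported above
DG = 1.0          # arbitrary conductance increment per pulse
DT = 0.1e-3       # s, assumed slot width for illustration

def state_after_stream(bits, slot=DT, tau=TAU, dg=DG):
    """Return the normalised state right after the last slot."""
    g = 0.0
    for b in bits:
        if b == 1:
            g += dg                    # potentiation by the "1" pulse
        g *= np.exp(-slot / tau)       # volatile decay during the slot
    return g

for stream in ([1, 1, 0, 0], [0, 1, 1, 0], [0, 1, 0, 1]):
    print(stream, "->", round(state_after_stream(stream), 3))
```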
Discussion The devices show a co-existence of threshold-switching volatile behaviour and bipolar non-volatile switching. The devices can switch from the volatile mode (c8w) to the non-volatile mode (8w), but not vice versa. The protocol to change from volatile to non-volatile is to increase the switching voltage, without any need for electroforming. The switching polarity is related to the dominant mechanism of defect redistribution. In VCM filamentary systems operating in the c8w switching mode, the device is SET to its low resistive state (LRS) by applying a negative voltage at the AE and RESET to its high resistive state (HRS) by applying a positive voltage at the AE. The second resistive switching mode, 8w, occurs at the opposite polarity: a positive voltage is applied to the AE to bring the device to its LRS and a negative voltage is needed to RESET it to its HRS 57. For oxide thin-film memristors, it has been demonstrated that both switching modes can appear in the same device by changing the operating conditions 54,56,[58][59][60][61]. In the context of non-volatile 8w switching, the temperature and voltage characteristics suggest that the charge transport in the low-resistance state (LRS) is governed by the variable-range-hopping model, which is bulk-limited. The electrons are injected into the IGZO without a significant potential barrier, and the transport-limiting element is the conduction from defect to defect. Non-volatile switching with silver as the active electrode is usually due to metallic cation migration, recognized as the electrochemical metallization mechanism (ECM) 62. This is illustrated in the schematic shown in Fig. 4a. The increase in current with increasing temperature is explained by the nanostructured morphology of the filaments 63. The rectification characteristics of the I-V curves in both switching polarities are due to the presence of small Schottky-type barriers at the Ag/IGZO interface for non-volatile 8w switching and at the ITO/IGZO interface for volatile c8w switching. For non-volatile switching at the HRS, the characteristics showed a good fit to Schottky emission (thermionic emission). This means that the conduction in the non-volatile HRS is supported by the conduction band of the IGZO and the transport-limiting element is the injection of electrons at the contact interface.
The coexistence of c8w and 8w switching was reported in ref. 58 for Pt/TiO2/Ti/Pt devices, where both switching modes arise from the competition between drift/diffusion of oxygen vacancies in the oxide layer and an oxygen exchange reaction across the Pt/TiO2 interface. A similar concept can be applied here to the c8w resistive switching, categorized as diffusive-memristor behaviour following ion exchange at the IGZO/ITO interface. We have already shown in our previous works 18,19,52,53 that amorphous oxide semiconductor (AOS)-based memristors present forming-free performance. One of the main reasons lies in the defect profile of the AOS active material at the interface with the electrode, which can be easily tuned into distinct resistance states, especially in the c8w resistive switching behaviour, as shown in one of our previous works 64. The corresponding resistive switching is area-dependent; however, a multi-filamentary nature is not excluded. Moreover, the coexistence of a secondary switching mode in a single memristor cell is usually volatile, as reported in refs. 53,56,58,65,66. In those works, the volatile switching mode is explained by an oxygen exchange reaction between the Pt electrode and the active layer (e.g., a metal oxide) at their interface. The occurrence of the volatile mode at negative voltage polarity may be related to ion migration at the interface 41. The exchange of oxygen between ITO and the switching layer can influence the conductivity of the latter 67. In the case of an n-type material like IGZO, the conductivity increases as the oxygen content decreases. Hence, under positive polarity at the top electrode, oxygen moves into the ITO layer and is accommodated as interstitial oxygen, which corresponds to the SET operation. This interstitial oxygen is released back into the IGZO under the opposite bias during the RESET operation (see Fig. 4a). Figure 6. (a) [1100] pulse stream with the optimized parameters: for state "1" a −0.65 V pulse was applied for 0.1 ms; state "0" is 1 ms after the last "1" pulse; the reading was performed at 0.05 V for 1 ms. (b) Influence of the pulse length for the "1" state (0.1 ms, 0.5 ms and 0.01 ms) using a [1100] pulse stream. (c) Influence of the pulse amplitude for the "1" state (−0.35 V, −0.5 V and −0.65 V) using a [1100] pulse stream. (d) Read current for various pulse streams using the "1" and "0" pulse parameters in (a).
Conclusions It has been demonstrated that solution-based memristors fabricated by inkjet printing have a strong potential for applications due to their scalable, low-cost and low-waste production. In this work, a printed IGZO memristor was fabricated in which only 10 µL of IGZO precursor ink was spent to print a 1 mm² square with minimal waste. The devices show both volatile and non-volatile behaviour depending on the programming scheme. The IGZO thickness influences the switching voltage and memory window. The non-volatile response follows an 8w switching polarity with SET and RESET voltages of up to 2 V and −5 V, respectively, with low cycle variability, a retention of up to 10^5 s and a memory window of up to 100. The LRS charge transport is found to be controlled by variable range hopping, where the origin of the defects in the IGZO is most probably related to the diffusion of silver ions in the form of filaments. On the other hand, the volatile switching mode follows a c8w scheme with a very low threshold voltage (|V_th| ≤ 0.65 V) and switching times below 1 ms. The volatile characteristics provide short-term retention with a τ of 0.75 ms. These combined characteristics show that a low-cost technology like printed metal-oxide memristors can be used for simple and efficient designs of a fully memristive architecture based on IGZO, where the reservoir state (volatile mode) can be processed with an IGZO memristive readout neural network (non-volatile mode). A further step in demonstrating the system should involve a crossbar design and the corresponding tests. Furthermore, it is worth noting that IGZO memristors can be fabricated on flexible biocompatible substrates, such as polyimide with parylene as a biofriendly encapsulation, for implementation in IoMT device applications. The IGZO precursor ink was optimized for inkjet printing while considering the Reynolds (Re), Weber (We) and Ohnesorge (Oh) numbers. The Ohnesorge number is a dimensionless value that describes the tendency of a drop either to stay together or to fly apart, by comparing viscous forces with inertial and surface tension forces. The Ohnesorge number is related to the Reynolds and Weber numbers. The value of Z is defined as the inverse of Oh and is used to evaluate drop formation. For stable drop formation, the value of Z must be between 1 and 10 68,69. The IGZO ink has a viscosity of 4.16 cP at 20 °C (the same temperature used during printing) and a Z number of 4.8 (Table S1). Figure S9 shows that the Re, We and Oh values for the ink lie inside the optimal area for stable drop formation. The ink viscosity was measured using a Brookfield DV2T viscometer with a speed ranging from 1 to 50 rpm. The relation between these dimensionless numbers is illustrated in the sketch below.
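The printability criterion described above reduces to a few lines of arithmetic. In the sketch below, only the viscosity (4.16 cP) comes from the text; the density, surface tension, nozzle diameter and drop velocity are assumed round numbers for illustration, so the resulting Z differs from the reported 4.8.

```python
# Hedged printability check: Re, We, Oh and Z = 1/Oh for an inkjet ink.
import math

rho   = 1000.0      # kg/m^3, assumed ink density
gamma = 0.030       # N/m, assumed surface tension
eta   = 4.16e-3     # Pa*s (= 4.16 cP, from the text)
d     = 21.5e-6     # m, assumed nozzle diameter (10 pL-class head)
v     = 6.0         # m/s, assumed drop velocity

Re = rho * v * d / eta
We = rho * v**2 * d / gamma
Oh = math.sqrt(We) / Re          # equivalently eta / sqrt(rho*gamma*d)
Z  = 1.0 / Oh

print(f"Re = {Re:.1f}, We = {We:.1f}, Oh = {Oh:.3f}, Z = {Z:.1f}")
print("stable drop formation" if 1 <= Z <= 10 else "outside the 1-10 window")
```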
Device fabrication The developed ITO/IGZO/Ag devices have a common bottom-electrode structure. Figure 1a illustrates the fabrication of the IGZO memristors. The bottom electrode consists of commercial ITO-covered glass. The IGZO layer was printed with a Dimatix DMP 2850 inkjet system using a piezoelectric multi-nozzle Dimatix printing head (DMCLCP-16110) with a 10 pL cartridge. The cartridge and stage temperatures were kept at 25 °C and 50 °C, respectively. The jetting frequency was 5 kHz and the drop spacing was set to 30 µm. The IGZO layer was printed with an area of 1000 × 1000 µm², followed by a post-treatment at 200 °C for 1 h. Before the deposition of the top contacts, a 15 min UV/ozone surface activation was carried out. Silver was used as the top contact owing to its ease of deposition by printing techniques 70,71. Silver inks are very reliable and have a sintering temperature of 200 °C or lower, unlike other metal-based inks, which need higher sintering temperatures to become conductive. Therefore, the silver nanoparticle colloidal ink (Sicrys I50T-13 from PV Nano Cell) was printed as two subsequent layers with an area of 250 × 250 µm² on top of the IGZO layer by inkjet printing. The device thickness was measured using an XP-Plus 200 stylus profilometer from Ambios. The surface morphology of the samples was determined by Atomic Force Microscopy (AFM) with an Asylum MFP3D. The quasi-static current-voltage (I-V) characteristics and the pulse studies were measured using a Keithley 4200-SCS semiconductor analyser connected to a Janis ST-500 probe station. The signal was applied to the top electrode (Ag) while the bottom electrode (ITO) was kept grounded. The measurements were performed in normal mode at a rate of 50 mV/s without any delay time, and the integration time was set to auto. Figure 1. (a) Schematic depicting the fabrication of printed Ag/IGZO/ITO memristors on a glass substrate by the inkjet printing technique. (b) Average IGZO thickness as a function of the number of printed layers, with an optical microscope image of the printed Ag/IGZO/ITO memristors as an inset. (c) I-V characteristics of Ag/IGZO/ITO as a function of the IGZO thickness: 350 nm, 500 nm and 530 nm. (d) Read current at 100 mV in the HRS and LRS as a function of the IGZO thickness. (e) Respective memory window. Figure 3. Study of the mechanism of Ag/IGZO/ITO devices. Fitting of the SET curve: (a) in the LRS for hopping and (b) in the HRS for Schottky emission. Temperature measurements carried out in vacuum from 150 to 300 K with a step of 10 K on Ag/IGZO/ITO memristors: (c) SET sweep at 150 K and 300 K, (d) read current in the LRS and HRS from 150 to 300 K, (e) fitting mechanism of the LRS for hopping, (f) fitting mechanism of the HRS for Schottky emission. Figure 4. (a) Schematic of Ag/IGZO/ITO devices emphasizing the coexistence of two distinctive switching mechanisms. (b) Different I-V characteristics of the memristors taken from voltage sweeps: at larger currents the device follows 8wise switching (non-volatile), while at low currents it follows counter-8wise switching (volatile). (c) 100-cycle endurance voltage sweep for non-volatile programming. (d) 50-cycle endurance voltage sweep for volatile programming. (e) Retention test for 10^5 s at 0.1 V. (f) Current decay in the IGZO memristor after being programmed by 1 write pulse (−0.65 V, 0.1 ms); the current was then monitored by read pulses at 0.05 V for 25 ms. Figure 5.
(a) Current levels during the application of a −0.65 V pulse every 60 ms. (b) Current decay over 25 ms after applying different numbers of pulses (1, 5, 15), with the respective fittings. (c) Time constant (τ) as a function of the number of pulses. (d) Read current taken at 0.05 V for 0.1 ms after applying −0.65 V pulses with different pulse intervals: 0.1 ms and 1 ms.
5,378.8
2024-03-29T00:00:00.000
[ "Materials Science", "Engineering", "Physics" ]
11β hydroxysteroid dehydrogenase 1: a new marker for predicting response to immune-checkpoint blockade therapy in non-small-cell lung carcinoma Understanding the status of the intratumoural immune microenvironment is necessary to ensure the efficacy of immune-checkpoint (IC) blockade therapy. Cortisol plays pivotal roles in glucocorticoid interactions in the immune system. We examined the correlation between cortisol synthesised intratumourally through 11β hydroxysteroid dehydrogenase (HSD) 1 and the immune microenvironment in non-small-cell lung carcinoma (NSCLC). We correlated 11βHSD1 immunoreactivity in 125 cases of NSCLC with the amount of intratumoural immune cells present, and 11βHSD1 immunoreactivity with the efficacy of IC blockade therapy in 18 specimens from NSCLC patients. In vitro studies were performed to validate the immunohistochemical examination. 11βHSD1 immunoreactivity showed a significant inverse correlation with the number of tumour-infiltrating lymphocytes and CD3- or CD8-positive T cells. 11βHSD1 immunoreactivity tended to be inversely correlated with the clinical efficacy of IC blockade therapy. In vitro studies revealed that 11βHSD1 promoted the intratumoural synthesis of cortisol. This resulted in a decrease in cytokines and in the inhibition of monocyte migration. Our study is the first report clarifying the inhibitory effects of intratumourally synthesised cortisol through 11βHSD1 on immune cell migration. We propose that the response to IC blockade therapy in NSCLC may be predicted by 11βHSD1. By converting cortisone into cortisol, 11βHSD1 increases the concentrations of cortisol available for GR. 9 Thus, its increased expression in situ could also enhance the actions of GC in its target tissues. In addition, the local balance between these two enzymes regulates GC actions. It is therefore important to evaluate the status of both 11βHSD1 and 11βHSD2 in situ in order to further explore GC actions in GR-positive target tissues. Lung cancer itself has been reported to express both of these enzymes, with higher 11βHSD2 in adenocarcinoma than in squamous cell carcinoma. [13][14][15] However, the status of 11βHSD1 and its role in the immune system and cancer progression of NSCLC has remained virtually unknown. In this study, we initially evaluated the correlation between the status of 11βHSD1 and the tissue immune microenvironment in NSCLC. This evaluation included its correlation with the efficacy of IC blockade therapy, and the effects of cortisol on the production of cytokines in NSCLC cells. Our goal was to identify new predictors of the efficacy of IC blockade therapy and provide new insights into therapeutic strategies for NSCLC. Lung cancer cases We examined a total of 125 NSCLC cases, all Japanese, who underwent surgical resection from 2014 to 2015 at the Department of Thoracic Surgery, Miyagi Cancer Center, Miyagi, Japan. None of these patients had received chemotherapy or irradiation prior to surgery. The cases included 95 adenocarcinomas (53 males and 42 females; median age: 68.0 years; range: 41-82 years; standard deviation: 9.3 years) and 30 squamous cell carcinomas (27 males and 3 females; median age: 72.0 years; range: 56-83 years; standard deviation: 6.4 years). We examined the sections that included the largest diameter or the most representative sections of each tumour.
Apart from this cohort, we also studied, as a new cohort, 18 cases in order to explore the correlation between 11βHSD1 immunoreactivity and the therapeutic efficacy of IC blockade therapy, using biopsy or surgical specimens of NSCLC patients retrieved from 2017 to 2019 at Tohoku University Hospital, Miyagi, Japan. These 18 cases did not harbour any mutations of epidermal growth factor receptor or anaplastic lymphoma kinase, and demonstrated relatively abundant PD-L1 immunoreactivity assessed as high expression (tumour proportion score: >50%) according to KEYNOTE-024 (ClinicalTrials.gov, NCT02142738). All 18 cases were treated with pembrolizumab following the pathological diagnosis. The clinical therapeutic efficacy was assessed according to the Response Evaluation Criteria in Solid Tumours (RECIST) version 1.1. 16 There were no cases showing pseudoprogression after the pembrolizumab treatment. All specimens were fixed in 10% formalin and embedded in paraffin. The study protocol was approved by the Ethics Committees of the Tohoku University School of Medicine and Miyagi Cancer Center, respectively. Immunostaining The characteristics of the antibodies used were as follows. Immunohistochemistry for PD-L1 was performed using the Dako PD-L1 22C3 pharmDx kit (Dako, Carpinteria, CA) on the Dako Link 48 platform. A Histofine Kit (Nichirei, Tokyo, Japan) using the streptavidin-biotin amplification method was used for 11βHSD1, GR, CD3 and CD8, and the EnVision kit (Dako, Agilent Technologies, Inc., Santa Clara, CA, USA) was used for 11βHSD2. The antigen-antibody complex was visualised using 3,3′-diaminobenzidine (DAB) solution (1 mM DAB, 50 mM Tris-HCl buffer, pH 7.6, and 0.006% H2O2) and counterstained with haematoxylin. Evaluation of immunoreactivity Immunoreactivity was evaluated using whole tissue sections of all surgical and biopsy cases examined. Cells demonstrating higher immunointensity than the background were defined as positive cells. A modified H score was used for the assessment of 11βHSD1, 11βHSD2 and GR (Fig. 1a-c). In brief, it was obtained by multiplying the percentage of strongly stained tumour cells by 2 and adding the percentage of weakly stained tumour cells (×1), providing a possible range of 0-200. The amount of intratumoural immune cells was assessed by the number of these cells infiltrating the tumour cell nests (defined as within one tumour cell diameter from the tumour nests), based on a previous report. 17 The numbers were counted in three randomly selected fields (×400) per case, and the average of the three counts was calculated. TILs were evaluated using haematoxylin and eosin staining, whereas CD3- and CD8-positive T cells were assessed using immunohistochemistry (Fig. 1d-f). Six adenocarcinoma cases were excluded from the evaluation of TILs and CD3- and CD8-positive T cells because of marked neutrophil infiltration. The evaluation was performed independently by two of the authors (R.S. and T.A.). Survival analysis Survival analysis regarding 11βHSD1 in NSCLC was performed using KM-Plotter (http://www.kmplot.com/analysis/index.php?p=service&cancer=lung) at the gene expression level. KM-Plotter is an online survival analysis database for exploring the prognostic value of biomarkers using transcriptomic data in various human malignancies. With this tool, the overall 10-year survival rates of 720 cases with adenocarcinoma and 524 cases with squamous cell carcinoma were analysed, respectively.
11βHSD1 was entered as the gene symbol, and the median value of 11βHSD1 expression using the JetSet probe (probe ID: 214610_at) was selected as the cut-off between the high and low 11βHSD1 groups. Multivariate Cox regression with 11βHSD1 expression and stage was performed to compute the HRs and P values. The Kaplan-Meier and log-rank tests were used to estimate and display the clinical outcomes of the patients. Cell culture Human cell lines used in this study included A549 (American Type Culture Collection (ATCC), Manassas, VA, USA), NCI-H23 (ATCC), PC3 (Cell Resource Centre for Biomedical Research, Tohoku University, Sendai, Japan), PC9 (Riken Cell Bank, Tsukuba, Japan) and LCSC#1 (Cell Resource Centre for Biomedical Research) for lung adenocarcinoma; LK2 (Cell Resource Centre for Biomedical Research) and RERF-LC-AI (Riken Cell Bank) for lung squamous cell carcinoma; and peripheral blood mononuclear cells (PBMCs) from a healthy donor (Precision Bioservices, MD, USA). The lung cancer cell lines were used in previous reports from our group. 18,19 All cells were maintained in RPMI 1640 (Sigma-Aldrich, Saint Louis, MO, USA) supplemented with 10% foetal bovine serum (FBS) (Nichirei Co. Ltd.) and 1% penicillin/streptomycin at 37 °C in a humidified incubator containing 5% CO2. Western blotting Total protein was extracted from cultured cells using the PhosphoSafe Extraction Reagent (Biosciences Inc., Darmstadt, Germany). Following the measurement of protein concentrations (Protein Assay Rapid Kit Wako, Wako), the total proteins were individually subjected to 12% SDS-PAGE (SuperSep Ace, Wako). These proteins were transferred onto a Hybond P polyvinylidene difluoride membrane (GE Healthcare, Buckinghamshire, UK). Next, the proteins on the membrane were blocked with 5% non-fat dry skim milk powder (Wako) for over 1 h at room temperature and incubated with primary antibodies overnight at 4 °C using ImmunoShot (Cosmo Bio Co., Ltd., Tokyo, Japan). The dilutions of the primary antibodies used in this study were as follows: 11βHSD1, 1:500; 11βHSD2, 1:1000; GR, 1:1000; β-actin (Sigma-Aldrich Co., St. Louis, MO, USA), 1:1000. The antibody-protein complexes were detected on the blot using ECL-plus western blotting detection reagents (GE Healthcare) following incubation with anti-rabbit or anti-mouse IgG horseradish peroxidase (GE Healthcare) at room temperature for 1 h. Cortisol production assay LK2 cells were seeded in six-well plates (5 × 10⁵ cells/ml) in RPMI 1640 medium containing 10% FBS. Several hours later, the cells were washed with phosphate-buffered saline (PBS) and treated with 1 μM of the 11βHSD1 inhibitor PF915275 (Tocris/Bio-Techne, Minneapolis, USA) in phenol red- and FBS-free RPMI 1640 medium. After 2 h, the cells were incubated in phenol red- and FBS-free RPMI 1640 medium with 1 μM PF915275 and 1 μM cortisone (Sigma Chemical Co., St. Louis, USA). After 24 h, the cortisol concentration in the medium was measured using a Cortisol Enzyme Immunoassay Kit (Arbor Assays, Ann Arbor, MI, USA) by Jaica (Shizuoka, Japan). Cytokine antibody array Hydrocortisone (HC), i.e. cortisol, was purchased from MP Biomedicals (Solon, OH, USA). To assess the secretion levels of cytokines, we treated A549 and LK2 cells (1 × 10⁵ cells/ml) with ethanol (control) or 100 nM HC for 24 h in RPMI 1640 medium containing 10% FBS in six-well plates. After that, we removed the medium, washed the cells with PBS and incubated the cells in phenol red- and FBS-free RPMI 1640 medium for 24 h. The conditioned medium was used as samples.
We used the Human Cytokine Antibody Array 5 (RayBiotech, Norcross, GA), which can detect 80 cytokines. The membranes were spotted with cytokine-specific antibodies and analysed following the manufacturer's instructions. The signal was detected using Image Lab™ software (Bio-Rad Laboratories, Inc., CA, USA). PBMC migration assay Migration assays were performed using Chemotaxicell chambers containing membranes with a 5-μm pore size (Kurabo, Osaka, Japan) and 24-well plates. After treating LK2 cells (5 × 10⁴ cells/ml), which yielded the highest protein expression of GR and 11βHSD1 and the lowest expression of 11βHSD2, with 100 nM HC for 48 h in RPMI 1640 medium containing 10% FBS, we washed the cells with PBS and exchanged the medium for phenol red- and FBS-free RPMI 1640 medium. After 24 h, this medium was used as the conditioned medium in the lower chamber. The PBMCs were plated in the upper chambers (1.5 × 10⁵ cells/well) in phenol red- and FBS-free RPMI 1640 medium. After 24 h of incubation, we counted the number of migrated cells in the lower chamber using a TC20™ Automated Cell Counter (Bio-Rad Laboratories, Inc., CA, USA). To further evaluate the population of migrated cells, we subsequently performed a cytological examination. Migrated cells in the lower chamber were fixed in 95% ethanol or 10% formalin for 10 min, and Papanicolaou staining or immunostaining for CD3 and CD8 was performed, respectively. The ratio of the number of CD3- or CD8-positive T cells to the total cells was assessed in three randomly selected spots per microscopic field (1 mm²) using HALO Area Quantification ver. 1.0 software (Indica Laboratories, Corrales, NM). Statistical analysis Statistical analysis was performed using IBM SPSS Statistics 23 (IBM Corporation, New York, USA). Comparisons between two groups in the immunohistochemical analyses were performed using the t-test, χ² test and Pearson's or Spearman's correlation analysis. Statistical analyses of the in vitro study were performed using ANOVA or Tukey's test. Statistical significance was set at p < 0.05. RESULTS The correlation between 11βHSD1, 11βHSD2 and/or GR and clinicopathological factors of the patients Both 11βHSD1 and 11βHSD2 were detected in the cytoplasm of carcinoma cells, and GR was detected in the nuclei of these cells (Fig. 1a-c). The correlations between the immunoreactivities for 11βHSD1 or 11βHSD2 and the histological types are summarised in Table 1. The 11βHSD1 H score tended to be higher in squamous cell carcinoma than in adenocarcinoma (p = 0.063). Immunopositivity for 11βHSD2 was observed in only 21 cases of adenocarcinoma (p = 0.002 vs squamous cell carcinoma), with low H scores (7.4 ± 21.2). Therefore, we cautiously classified the cases into two groups: the 11βHSD2-positive and 11βHSD2-negative groups. Fig. 1 The results of the in vivo analysis. a-c Representative images of 11β hydroxysteroid dehydrogenase (HSD) 1 (a), 11βHSD2 (b) and glucocorticoid receptor (GR) (c) immunoreactivity in non-small-cell lung carcinoma (NSCLC) specimens (bar: 100 μm). Immunoreactivity for 11βHSD1 and 2 was detected in the cytoplasm of carcinoma cells, and that for GR was detected in the nucleus of carcinoma cells. d Haematoxylin and eosin (HE) stain of NSCLC specimens (bar: 100 μm). Intratumoural TILs were assessed by the number of TILs infiltrating into the tumour cell nests (defined as within one tumour cell diameter from the tumour nests), i.e. inside the areas surrounded by lines. e, f Immunohistochemistry for CD3 (e) and CD8 (f) (bar: 100 μm).
Intratumoural CD3- or CD8-positive T cells were assessed in the same way as above. g-i The correlation between the immunoreactivity of 11βHSD1 and intratumoural immune cell infiltration levels in total cases (g), cases with adenocarcinoma (h) and cases with squamous cell carcinoma (i). 11βHSD1 immunoreactivity was significantly and negatively correlated with intratumoural TILs and CD3-positive T cells. It tended to correlate negatively with intratumoural CD8-positive T cells in total cases and in cases with squamous cell carcinoma. In 11βHSD2-negative cases, 11βHSD1 immunoreactivity in total cases was significantly associated with intratumoural CD8-positive T cells, and it tended to be associated with them in cases with adenocarcinoma. j, k The correlation between the mRNA level of 11βHSD1 and overall survival in cases with adenocarcinoma (j) and cases with squamous cell carcinoma (k). A high mRNA level of 11βHSD1 was independently and significantly associated with poor overall survival of patients with adenocarcinoma. There was no significant correlation between the mRNA level of 11βHSD1 and the overall survival of patients with squamous cell carcinoma. H score histological score (modified). *p value < 0.05, significant. The correlations between 11βHSD1 immunoreactivity and the examined patients' clinical and pathological parameters are summarised in Table 2. A significant positive association was detected between 11βHSD1 immunoreactivity and age in total cases and in cases with adenocarcinoma (p < 0.001 and p = 0.001, respectively). The immunoreactivity of 11βHSD1 was significantly correlated with the smoking index, calculated as the number of cigarettes smoked per day multiplied by the number of years of smoking history, in cases with squamous cell carcinoma (p = 0.031). The correlation between 11βHSD1 status and the efficacy of IC blockade therapy The clinicopathological characteristics of the cases examined are summarised in Table 3. The correlation between 11βHSD1 immunoreactivity in carcinoma cells and therapeutic efficacy is summarised in Table 4. The cases were tentatively classified into low and high 11βHSD1-expression groups, applying the average 11βHSD1 H score (119.4) as the cut-off. The high 11βHSD1-expression group had a significantly higher rate of progressive disease than the low 11βHSD1-expression group (p = 0.038). In addition, the progressive disease group tended to harbour a higher 11βHSD1 H score than the partial remission group, but the difference did not reach statistical significance (p = 0.059). The correlation between the mRNA levels of 11βHSD1 and the overall survival of the patients As summarised in Fig. 1j, a high mRNA level of 11βHSD1 was significantly associated with poor overall survival in patients with adenocarcinoma, whereas no significant correlation was detected in patients with squamous cell carcinoma (Fig. 1k). The expression of GC synthesis-associated enzymes and receptors in NSCLC cell lines NSCLC cell lines other than PC9 had relatively abundant levels of 11βHSD1 protein expression compared with 11βHSD2 and had high expression of GR (Fig. 2a). Induction of cortisol synthesis through 11βHSD1 in NSCLC cells The results of the cortisol synthesis assay using LK2 are shown in Fig. 2b. The cortisol concentration was significantly increased by exposure to 1 µM cortisone for 24 h (p < 0.001). This increase was significantly suppressed by treatment with 1 µM of the 11βHSD1 inhibitor (p < 0.001). The effect of GC on the migration ability of PBMCs The HC treatment (100 nM, 48 h in RPMI 1640 medium containing FBS) significantly inhibited the migration ability of PBMCs in LK2 conditioned medium (p = 0.032) (Fig. 3f, g).
The migrated cells consisted of lymphocytes, monocytes and a few granulocytes on light microscopic examination of Papanicolaou-stained slides (Fig. 3h). HC treatment did not alter the ratio of the number of CD3- and CD8-positive T cells to the total migrated cells (p = 0.101 and p = 0.714, respectively) (Fig. 3h, i). DISCUSSION In this study, we demonstrated for the first time that 11βHSD1 was involved in the in situ activation of GC and was significantly and inversely correlated with the number of TILs and T cells in NSCLC tissues. In addition, cortisol was shown to reduce the expression of cytokines such as CCL5, IL-8 and IL-6, resulting in the inhibition of the migration of mononuclear cells, including CD3- and CD8-positive T cells, without changing their population. All of these results indicated that cortisol exerted inhibitory effects on the antitumour immune response via the inhibition of T-cell migration into NSCLC tumours. This is also the first report to clarify the significance of intratumoural GC synthesis in NSCLC, as this predicts the efficacy of IC blockade therapy and the clinical outcome of patients. The tissue concentration of cortisol synthesised intratumourally through the de novo pathway has not been reported previously. However, it has been reported that the intratumoural concentration of sex steroids in tumours harbouring high expression levels of synthesising enzymes was significantly higher than that in normal tissues. 20,21 In addition, our in vitro study (Fig. 2b) confirmed a significant 11βHSD1-dependent increase in cortisol production in 11βHSD1-expressing NSCLC cells. Therefore, tumours demonstrating relatively abundant expression of 11βHSD1 can reasonably be assumed to produce glucocorticoid, resulting in a high intratumoural cortisol concentration. However, these findings await further investigation, such as direct measurement of intratumoural levels of cortisol in NSCLC. The level of 11βHSD1 was inversely correlated with the amount of infiltrating immune cells in the tumour, as shown in Fig. 1g-i. These results suggested that the biologically active GC cortisol, synthesised intratumourally by 11βHSD1 in tumour cells, could inhibit the immune response in areas adjacent to the tumour cells via GR activation, in an autocrine fashion. We considered that this inhibitory effect on the infiltration of CD8-positive T cells was reduced in the presence of 11βHSD2, which is involved in the inactivation of intratumourally synthesised cortisol. This hypothesis was supported by the fact that the correlation between 11βHSD1 immunoreactivity in tumour cells and intratumoural CD8-positive T cells was more marked in 11βHSD2-negative cases, among total cases as well as adenocarcinoma cases (Fig. 1g, h). GR immunoreactivity in tumour cells was not significantly associated with the amount of infiltrating immune cells in the tumour, although the correlation between GR status and the intratumoural immune microenvironment has not been examined in any human malignancy. Therefore, the presence of the ligand appears more important than GR expression for the detection of GR action on the intratumoural immune microenvironment. In addition, it is well known that the infiltration of immune cells, especially intratumoural CD8-positive T cells, is an important predictor of an effective response to IC blockade therapy. 3,4
Accordingly, 11βHSD1 is considered a new indicator of poor response to IC blockade therapy, and the additional use of an 11βHSD1 inhibitor may increase the efficacy of IC blockade therapy in NSCLC. This hypothesis was supported by the result of our present study that a significant inverse correlation was detected between 11βHSD1 immunoreactivity in tumour cells and the efficacy of IC blockade therapy (Table 4). However, it is also true that we examined only 18 cases. Therefore, the predictive value of 11βHSD1 for the efficacy of IC blockade therapy has not been definitively established, and it awaits further investigation. Furthermore, some previous reports have stated that immune cell infiltration seemed to suppress tumour cells and improve long-term survival in several tumour types, including NSCLC, small-cell lung cancer, breast cancer and renal-cell carcinoma. [22][23][24][25][26] Based on these reports and our immunohistochemical analysis, it was assumed that the intratumoural level of 11βHSD1 could predict a poor prognosis in NSCLC. Of particular interest, the results of our prognostic analysis using the KM plotter were also consistent with these predictive findings in lung adenocarcinoma. We performed in vitro studies in order to validate the immunohistochemical results and to further clarify the mechanism of the effects of cortisol on the intratumoural immune microenvironment. The results revealed that active GC suppressed the migration of immune cells via the inhibition of the expression of cytokines such as CCL5, IL-8 and IL-6, as shown in Fig. 3. CCL5 is a well-known chemotactic cytokine for T cells, monocytes and other cells, and has been reported to attract CD3- and CD8-positive T cells into various tumours, including NSCLC, resulting in good outcomes. 27,28 Therefore, the results of our present study did support our hypothesis based on the immunohistochemical analyses of clinical materials, i.e., an association of 11βHSD1 in tumour cells with the inhibition of the recruitment of CD3- and CD8-positive T cells. The well-known chemokine IL-8 induces the migration of neutrophils and lymphocytes. IL-8 acts directly or indirectly on immune cells through the regulation of other cytokines such as IL-2 and IL-10. 29,30 Previous studies have reported an association between IL-8 and T-cell induction in several types of human malignancies, including malignant melanoma, oesophageal adenocarcinoma and ovarian cancer. [31][32][33] Several reports describe the effect of IL-6 on intratumoural T-cell induction. 34,35 However, IL-6 is a multifunctional cytokine with both immunosuppressive and immune-promoting effects, 36,37 and further investigation is required to clarify its actions in this area. Although our study focused on the effects of GC on GR in tumour cells, GC has been shown to act directly on the GR of lymphocytes, inhibiting their proliferation and T-cell receptor signalling and inducing apoptosis. 38,39 Therefore, we must consider the possibility that intratumourally produced GC reduced T-cell infiltration via the induction of lymphocyte apoptosis. Further investigation is required to elucidate the complex mechanisms of the inhibitory effect of GC on immune responses.
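For the in vitro comparisons described in the Methods (ANOVA followed by Tukey's test), a minimal sketch of the workflow is given below. The replicate values, group sizes and labels are hypothetical, invented purely to show the procedure, and are not data from this study.

```python
# Hedged sketch of one-way ANOVA plus Tukey post-hoc comparison.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical migrated-cell counts per condition (n = 3 each):
control = np.array([152.0, 160.0, 148.0])
hc_100nM = np.array([118.0, 110.0, 121.0])
hc_washout = np.array([145.0, 151.0, 139.0])   # hypothetical third group

f, p = stats.f_oneway(control, hc_100nM, hc_washout)
print(f"one-way ANOVA: F = {f:.2f}, p = {p:.4f}")

values = np.concatenate([control, hc_100nM, hc_washout])
groups = ["control"] * 3 + ["HC"] * 3 + ["washout"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```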
11βHSD1 11β hydroxysteroid dehydrogenase 1, NSCLC non-small-cell lung carcinoma, H score histological score (modified), PR partial remission, PD progressive disease. *p value < 0.05, significant. In conclusion, we demonstrated for the first time the significant negative impact of cortisol, including that synthesised intratumourally through 11βHSD1, on the intratumoural immune microenvironment in NSCLC. Further studies, such as tumour-killing assays or experiments using animal models of spontaneous lung cancer, are required for clarification, but the results of our present study could provide new insights into therapeutic strategies, and these results may also improve the predictive accuracy of outcomes in IC blockade therapy.
5,275.2
2020-04-27T00:00:00.000
[ "Biology", "Medicine" ]
On Solitary Wave Solutions for the Camassa-Holm and the Rosenau-RLW-Kawahara Equations with the Dual-Power Law Nonlinearities The nonlinear wave equation is of significant concern in describing wave behavior and structures. Various mathematical models related to wave phenomena have been introduced and extensively studied owing to the complexity of wave behaviors. In the present work, a mathematical model for obtaining solutions of nonlinear waves by coupling the classical Camassa-Holm equation and the Rosenau-RLW-Kawahara equation with a dual term of nonlinearities is proposed. The solution properties are analytically derived. The new model still satisfies the fundamental energy conservation property of the original models. We then apply the energy method to prove the well-posedness of the model under the solitary wave hypothesis. Some categories of exact solitary wave solutions of the model are described using the Ansatz method. In addition, we find that the dual term of nonlinearity is essential for obtaining the class of analytic solutions. Besides, we provide some graphical representations to illustrate the behavior of the traveling wave solutions. Introduction In the study of nonlinear wave phenomena, nonlinear partial differential equations are among the great mathematical models for investigating such problems. A variety of mathematical theory for wave equations has been developed theoretically and numerically, arising in empirical applications to ion-acoustic and magnetohydrodynamic waves in plasma, longitudinal dispersive waves in elastic rods, pressure waves in liquid-gas bubble mixtures, and rotating flow down a tube. For instance, various phenomena of shallow-water waves are governed by nonlinear partial differential equations such as the Korteweg-de Vries (KdV) equation [1][2][3][4][5][6][7], the Benjamin-Bona-Mahony (BBM) equation [8][9][10][11], the Symmetric Regularized Long Wave (SRLW) equation [12][13][14][15], the Kawahara equation [16][17][18][19], and the Rosenau equation [20][21][22][23]. For a further understanding of the nonlinear behaviors of shallow-water waves, the generalized Rosenau-RLW equation was introduced in the following form, where p ≥ 1 and β are constants. Equation (1) is an extension of the Rosenau equation obtained by adding a viscous term −u_xxt and replacing the nonlinear term with a general power of nonlinearity u^p u_x. If p = 1 and β = 1, then equation (1) is called the usual Rosenau-RLW equation. When p = 2, equation (1) is called the modified Rosenau-RLW equation. For numerical studies of the Rosenau-RLW equation, we refer to [24][25][26][27]. Later, many models related to the Rosenau and Rosenau-RLW equations were studied, and they have become an essential topic in the study of shallow-water wave behavior. In [28], the solitons and periodic solutions for the Rosenau-KdV and Rosenau-Kawahara equations were obtained. There has been growing interest in the computation of nonlinear wave equations. In [32], He and Pan initially studied a second-order three-level linearly implicit difference scheme which is energy-conserving and unconditionally stable. In [33], two conservative high-order accurate finite difference schemes for the periodic initial value problem of the generalized Rosenau-Kawahara-RLW equation were introduced and extensively studied. For more related nonlinear wave equations, readers can refer to [34][35][36][37][38][39][40][41][42][43].
As a further consideration of unidirectional shallow-water waves, one relevant equation is the Camassa-Holm (CH) equation, which can be written as follows, where κ is a constant. The equation was derived by Camassa and Holm [44] in 1993 and has a solitary peaked solution whose first derivative is discontinuous. Regarding the significance of κ, it was shown in [45] that for all κ > 0 there are smooth solitary wave solutions, and for κ = 0 there is a peaked soliton solution (peakon). A classification of weak traveling wave solutions of the Camassa-Holm equation was given in [46]. Furthermore, Kalisch and Lenells investigated the kinds of traveling wave solutions: smooth traveling waves, cusped traveling waves, and composite traveling waves [47]. The orbital stability of the peakons and of the solitons of the smooth solitary wave of the CH equation was shown in [46,48,49]. In 2010, Lai [50] established the existence and uniqueness of a local solution of the CH equation in the Sobolev space H^s(ℝ), and the well-posedness was established by Li and Olver [49]. Very recently, Nanta et al. [51] carried out a numerical study of the generalized Camassa-Holm equation involving dual-power law nonlinearities. Other studies of CH-related equations are also reported in various publications [52][53][54][55][56][57]. In this paper, our purpose is to investigate the coupling of the original CH equation and the Rosenau-RLW-Kawahara equation with the dual-power law nonlinearity, equation (4), with the initial condition (5), where u_0(x) is a known smooth function, κ, η ∈ ℝ, and μ > 0. The function f(u) = Au + Bu^m represents the dispersive nonlinear terms in both low- and high-order nonlinearity, where A, B ∈ ℝ and m ∈ ℕ indicates the power law nonlinearity. Moreover, the solitary wave solution and its derivatives have the following asymptotic values: u → 0 as x → ±∞ and, for n ≥ 1, the corresponding derivatives also vanish at infinity. To study the nature of solutions of equation (4), researchers have attempted to find exact solutions of Rosenau-type equations. Many methods have been introduced and developed to explore analytical solutions of nonlinear partial differential equations. Using the sech and trigonometric function methods, Esfahani [58] and Esfahani and Pourgholi [59] studied solitary wave solutions of the generalized Rosenau-KdV and Rosenau-RLW equations, respectively. Solitons and shock waves were discussed by Razborova et al. [60] by applying a semi-inverse variational method. In [29], Wongsaijai and Poochinapan used the sine-cosine method to find exact solutions of the Rosenau-RLW-KdV equation. He and Pan [32] also used the sine-cosine method to obtain the solitary solution of the generalized Rosenau-Kawahara-RLW equation, and the solution of the Rosenau-Kawahara-RLW equation with a generalized Novikov-type perturbation was derived by He [38]. The solution of a (2 + 1)-dimensional nonlinear wave equation using the modified exponential function method and an Ansatz function technique with symbolic computation was proposed in [61]. In [62], solitary wave solutions of the Ablowitz-Kaup-Newell-Segur water wave equation were obtained using the simple equation method and the modified simple equation method. The generalized extended tanh method and the F-expansion method were used to derive exact solutions of the Kadomtsev-Petviashvili and modified Kadomtsev-Petviashvili dynamical equations [63]. In addition, readers can refer to [64,65] for more methods of finding analytic wave solutions. The paper is organized as follows.
In Section 2, the fundamental energy-preserving property of the initial boundary value problem is proved. By applying the energy method, the well-posedness of the new model is obtained in the solution space H_0^2(Ω). In addition, the traveling wave solutions of the equation are derived by the Ansatz method, which yields solitary solutions and periodic solutions. Finally, concluding remarks are reported in the last section. Solution Properties We first state that the solution of equations (4)-(6) satisfies the following energy conservation property. Theorem 1. If the solution u of equations (4)-(6) and its derivatives ∂_x u, ∂_x^2 u go to zero as |x| → ∞, then equations (4)-(6) satisfy the stated global conservation law for all t ∈ [0, T]. Using integration by parts and the assumption that u and its derivative ∂_x u vanish at infinity, E(t) is a constant function, which yields E(t) = E(0) for all t ∈ [0, T], as desired. ☐ By assumption (6), problem (4) can be set up on a compact subset of ℝ, namely Ω = [x_L, x_R]. Thereby, we consider the initial-boundary value problem (4) with the initial condition (5) and the boundary conditions (11). For a nonnegative integer k, let H^k(Ω) denote the usual Sobolev space of real-valued functions defined on the interval Ω, and define the corresponding solution space H_0^2(Ω). The solutions of equations (4) and (5) with the boundary condition (11) satisfy the following energy conservation property. Theorem 2. Suppose u_0 ∈ H_0^2(Ω); then, the solution of equations (4), (5) and (11) satisfies the stated identity for all t ∈ [0, T]. It should be pointed out that the invariant function E(t) expresses the energy conservation for equations (4) and (5). Next, we provide the well-posedness of problems (4) and (5) with the boundary condition (11) on the solution space H_0^2(Ω). Before providing the well-posedness, we first state the existence, which can be proved by the standard energy method. By combining the local existence and uniqueness with Theorem 2, we obtain the global existence; therefore, we omit the proof. Proof. First, let u_1 and u_2 be two solutions of (4) and (5) with the boundary condition (11) satisfying the initial conditions u_{0,1} and u_{0,2}, respectively. Let ε = u_1 − u_2; then, by substitution, ε satisfies the corresponding equation with its initial and boundary conditions, where t ∈ [0, T] and x ∈ [x_L, x_R]. By the standard energy method, we introduce the energy function E*(t). By arguments similar to those in the proof of Theorem 1, the first nonlinear term can be estimated using Theorem 1 and the Cauchy-Schwarz inequality. For the second term, the term M_1 is bounded by Theorem 1 and the Cauchy-Schwarz inequality, and, by simple calculations, the term M_2 can be estimated using Theorem 1, the Cauchy-Schwarz inequality and Sobolev's inequality. Substituting equations (19)-(22) into equation (18) gives an estimate which yields E*(t) ≤ e^{CT} E*(0) for all t ∈ [0, T]. Obviously, uniqueness is obtained when the initial conditions for u_1 and u_2 are the same. Moreover, if ε(x, 0) < δ, ε_x(x, 0) < δ, and ε_xx(x, 0) < δ, then the corresponding bound holds for all t ∈ [0, T]; that is, the solution depends continuously on the initial condition. Since the existence and uniqueness are obtained by Lemma 3, equations (4) and (5) with the boundary condition (11) are therefore well-posed, as required. ☐
Solitary Wave Solutions Next, we focus on problems (4) and (5). By introducing ξ = x − ct, we see that equation (4) reduces to an ordinary differential equation in ξ, where f(u) = Au + Bu^m. The solitary wave Ansatz method admits the assumed sech-type solution form. Setting the coefficients of each term sech^j(μξ) to zero, we obtain system (30). Solving system (30), we obtain the set of parameters. For sA < 0, we obtain the solitary wave solutions (31) of equation (4); additionally, the periodic wave solutions (32) of equation (4) can be obtained when sA > 0. Concluding Remarks In this paper, we successfully studied the nonlinear wave equation obtained by coupling the classical Camassa-Holm equation and the Rosenau-RLW-Kawahara equation in the case of asymptotic boundary conditions. Based on the boundary conditions, we showed that the equation possesses conserved energy, which was used to derive the well-posedness in H_0^2(Ω). Moreover, to seek the analytic solution in H_0^2(Ω), we applied the Ansatz method to derive the class of solitary wave solutions by balancing the linear and nonlinear terms. One can see that the dual term of nonlinearity f(u) = Au + Bu^m is essential for deriving the class of analytic solutions. In view of Theorem 4, the order of the highest-order derivative appearing in equation (4) is five, but there are six boundary conditions as defined in equation (11), which suggests that the problem on a bounded interval is overdetermined. It should be pointed out that the boundary condition (11) is logical to study under the solitary wave conditions, that is, u and its derivatives approach zero as |x| → ∞ (see equation (6)). However, there are many qualitative differences in the behavior of solutions depending on the number of boundary conditions used. Therefore, this question should be of interest in the future. Data Availability No data were available in the manuscript. Conflicts of Interest No conflict of interest exists. We wish to confirm that there are no known conflicts of interest associated with this publication, and there has been no significant financial support for this work that could have influenced its outcome.
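To complement the Ansatz discussion above, the sketch below evaluates a sech-type profile of the form used in such methods and checks its decay at infinity. Because the closed-form parameter sets obtained from system (30) are not reproduced here, the amplitude, width and exponent are illustrative choices, not the paper's solutions.

```python
# Hedged illustration of a sech-type solitary wave Ansatz u(xi) = a * sech(mu*xi)^p,
# written as cosh(mu*xi)^(-p) so it lambdifies cleanly to NumPy.
import numpy as np
import sympy as sp

xi, a, mu = sp.symbols("xi a mu", positive=True)
m = 3                                          # example dual-power exponent
p = sp.Rational(2, m - 1)                      # typical balancing exponent
u = a * sp.cosh(mu * xi) ** (-p)

# The profile and its derivative vanish as |xi| -> oo, matching the
# solitary-wave conditions (6):
print(sp.limit(u, xi, sp.oo))                  # -> 0
print(sp.limit(sp.diff(u, xi), xi, sp.oo))     # -> 0

# Numerical values of the traveling profile for illustrative a = mu = 1:
f = sp.lambdify(xi, u.subs({a: 1, mu: 1}), "numpy")
x = np.linspace(-10, 10, 5)
print(np.round(f(x), 6))
```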
2,853.2
2021-07-19T00:00:00.000
[ "Mathematics", "Physics" ]
Hyperglycemia and hyperinsulinemia induced hepatocellular autophagy in male mice The aim of the present study is to investigate the role of hyperglycemia and hyperinsulinemia in autophagy induction in the liver of male mice. Autophagy is a catabolic cellular process that recycles aged or damaged cellular organelles and inclusions under certain circumstances. Hyperglycemia was induced by a single IP injection of alloxan (180 mg/kg), and hyperinsulinemia was induced by a high-fat diet together with glucose feeding for a short period (2 weeks) or a long period (3 months). Hyperglycemia and hyperinsulinemia were assessed by measuring the blood glucose level with a glucometer and the insulin level with a specific ELISA kit, respectively. Autophagy induction was investigated morphologically by electron microscopy and biochemically by immunodetection of the conversion of microtubule-associated protein light chain 3 (LC3) from the LC3I to the LC3II form, and by immunodetection of the phosphorylated and non-phosphorylated forms of the mammalian target of rapamycin (mTOR). Our results revealed that hyperglycemia and hyperinsulinemia independently induced hepatocellular autophagy, as indicated by the accumulation of autophagosomes and autolysosomes on EM examination and by the increase in the level of LC3II and decrease in the phosphorylated form of mTOR on western blot analysis. This study throws light on hepatocyte autophagy as a cellular mechanism induced under diabetic conditions, which may contribute to a better understanding of nutrient metabolic disorders.
It may also occur in the face of nutrient deprivation, growth factor withdrawal or other stressors (Lum et al., 2005; Heymann, 2006; Keith and John, 2008). Three major forms of autophagy have been described in mammalian cells: macroautophagy, microautophagy and chaperone-mediated autophagy (Kim and Klionsky, 2000). In mammals, the regulation of autophagy appears to be highly complicated. In the first step of autophagosome formation, cytoplasmic constituents, including organelles, are sequestered by a unique membrane called the phagophore, which is a very flat organelle resembling a Golgi cisterna. Complete sequestration by the elongating phagophore results in the formation of the autophagosome, which is typically a double-membrane organelle. This step is simple sequestration, and no degradation occurs. The site from which autophagosomes are generated is called the "pre-autophagosomal structure (PAS)" (Kim et al., 2001; Suzuki et al., 2001; Suzuki and Ohsumi, 2007). In the last step, the autophagosome fuses with a lysosome to form the autophagolysosome, or autolysosome, where the included material is degraded by lysosomal enzymes (Mizushima, 2007). Autophagy is initiated by many diverse signals including amino acids, glucose and growth factors (Jewell and Guan, 2013). It is now believed that the endocrine system, particularly insulin, manages autophagy regulation in vivo; for example, it was found that liver autophagy is suppressed by insulin and enhanced by glucagon (Mortimore and Pösö, 1987). Diabetes mellitus (DM) is a group of metabolic diseases characterized by hyperglycemia resulting from defects of insulin secretion and/or increased cellular resistance to insulin. Chronic hyperglycemia and the other metabolic disturbances of DM lead to long-term tissue and organ damage as well as dysfunction involving many organs and systems. Type 1 diabetes used to be called juvenile diabetes or insulin-dependent diabetes mellitus. It is an autoimmune disease in which the immune system mistakenly destroys the insulin-producing β-cells of the pancreas. Non-insulin-dependent diabetes mellitus, also referred to as type II diabetes, is the most common of all metabolic disorders. Type II diabetes currently affects about 6-7% of the US population, with a cumulative risk of 17% by age 80 (Warram et al., 1995). The association between liver disease and DM is well known; DM itself may be a cause of liver disease via non-alcoholic fatty liver disease (NAFLD), non-alcoholic steatohepatitis (NASH), cirrhosis and, ultimately, hepatocellular carcinoma. It was found that post-transplantation DM is a major cause of morbidity and mortality in subjects following liver transplantation (Simona et al., 2007). Autophagy is important for proper β-cell function and viability. Originally, autophagy was reported to be activated in β-cells upon stress induction as a protective mechanism (Las and Shirihai, 2010). Autophagic cell death contributes to the loss of pancreatic β-cell mass in diabetes. Although β-cell apoptosis plays a major role in reducing β-cell mass in diabetes, autophagic cell death can also contribute to this loss (Ze-fang et al., 2011). Here we investigated the induction of autophagy in the liver in two different cases of metabolic disorders: in alloxan-induced type I diabetes, characterized by hyperglycemia and hypoinsulinemia, and in high-fat diet feeding, characterized by normoglycemia and hyperinsulinemia. Materials Alloxan Animals and experimental design Thirty-five adult male Swiss albino mice weighing 25-30 g
were used in the present work. They were purchased from and maintained in the Assiut University Joint Animal Breeding Unit. A suitable temperature of approximately 23 ± 2 °C and a 12-hour light/dark cycle were also maintained. All animals were given free access to standard chow and tap water. All experimental procedures were conducted in strict compliance with the National Institutes of Health guide for the Care and Use of Laboratory Animals. Mice were categorized into four groups: a control group (cnt) fed normal rodent chow (8% energy from fat); an allox group injected intraperitoneally with a freshly prepared single dose of alloxan monohydrate (180 mg/kg) to become diabetic; a short-period high fat diet feeding group (shfd) fed high fat rodent chow (46% energy from fat) together with oral administration of glucose (0.5 ml of 25% glucose every 6 hours) for 2 weeks; and a long-period high fat diet group (lhfd) fed as the shfd group but for 3 months.

Histological and histochemical preparations
For histological preparation of the pancreas and histochemical examination of general proteins, pieces of the organs were fixed immediately after sacrifice in 10% neutral buffered formalin (pH 7.2), dehydrated in an ascending series of alcohols, cleared in cedar wood oil and embedded in paraffin wax. Paraffin sections of 5 micrometers in thickness were prepared and then stained routinely with Harris haematoxylin and eosin stain. For histochemical examination of general proteins, liver sections were stained with bromophenol blue as described (Mazia et al., 1953). The colour intensity was estimated with Image J software and expressed in arbitrary units. Morphometric analysis of the area of the islets of Langerhans was done using Image J software. For each section, 5 fields were examined; at least 10 different sections per treatment were analyzed.

Electron microscopy
For electron microscopy, small slices of liver were fixed in 2.5% glutaraldehyde in cacodylate buffer. The specimens were washed in cacodylate buffer (0.1 M, pH 7.2) for 1-3 hours and then post-fixed in 1% osmium tetroxide for 2 hours. The specimens were placed in propylene oxide for 1 hour, then in pure Epon 812, and incubated in a polymerization incubator (one day at 37 °C, a second day at 45 °C and then three days at 60 °C). Ultrathin sections (50 nm) were mounted on copper grids, stained with uranyl acetate and lead citrate and examined with a JEOL TEM in the electron microscopy unit, Assiut University.

Fasting blood glucose and insulin level measurements
Mice were fasted for approximately 8 hours and then glucose levels were measured with a hand-held glucose test monitor (Lifescan, Johnson and Johnson) from whole tail vein blood and expressed as mg/dl. Serum insulin was quantified using a mouse insulin ELISA kit (Crystal Chem, USA) according to the manufacturer's protocol.
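Where morphometry such as the islet-area measurement described above is performed in Image J, the per-group summary statistics can be computed directly from the exported measurement tables. A minimal Python sketch, assuming hypothetical per-group CSV exports from Image J's Analyze > Measure command with an "Area" column (file and column names are illustrative, not from the study):

import pandas as pd

# Hypothetical CSV exports, one file of Image J "Measure" results per group
files = {"cnt": "islets_cnt.csv", "allox": "islets_allox.csv",
         "shfd": "islets_shfd.csv", "lhfd": "islets_lhfd.csv"}

for group, path in files.items():
    areas = pd.read_csv(path)["Area"]  # islet areas in square micrometers
    print(f"{group}: mean = {areas.mean():.1f} ± {areas.std():.1f} µm², n = {len(areas)}")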
Western blot analysis
Liver tissues were lysed in 500 µl of RIPA lysis buffer supplemented with 1 mM phenylmethylsulfonyl fluoride (PMSF) and protease inhibitor cocktail and homogenized slowly with a hand-held homogenizer at 4 °C. Tissue debris was removed by centrifugation at 10,000 × g for 5 min at 4 °C. The supernatant was collected and protein concentration was determined. Aliquots containing 40 µg of protein were subjected to 12% SDS-PAGE and then transferred to nitrocellulose membranes. Blocking of active sites was carried out with 5% skim milk in TBS with 0.05% Tween 20, and membranes were incubated with primary antibodies (overnight, 4 °C) and HRP-conjugated secondary antibodies (1 h, room temperature) in blocking solution. Target proteins were visualized with a chemiluminescent substrate kit. An anti-β-actin goat polyclonal antibody was used to confirm equal loading. The optical density of each band was estimated using Image J software and normalized to the corresponding β-actin band.

Statistical analysis
Data were presented as mean ± SD. Statistical analyses were performed using ANOVA. A P value less than 0.05 was considered significant.

Fasting blood glucose and insulin levels
The fasting blood glucose levels (fbgl) significantly increased in alloxan-treated animals compared with control (Fig. 1 A). The fbgl slightly decreased in the shfd and lhfd groups compared with control; these decreases were statistically non-significant (Fig. 1 A). Because the shfd and lhfd groups did not show any significant increase in fbgl, they may possess a mechanism for reducing blood glucose. Accordingly, we measured serum insulin levels in the different experimental groups (Fig. 1 B). Serum insulin was reduced in the allox group compared with the control group. High insulin levels were detected in the high fat diet feeding groups, with the highest level recorded in the shfd group. We then examined the islets of Langerhans for any structural change that might reflect the change in insulin level. The islet area significantly decreased in the alloxan-treated group compared with control (Fig. 1 C and D); the mean values were 79 ± 3.2 µm² and 140.1 ± 9.5 µm² for the allox and cnt groups, respectively. Conversely, the islet area significantly increased in the shfd and lhfd groups compared with control; the mean values were 3592 ± 151.3 µm² and 392.5 ± 35.5 µm², respectively. It is clear that a high fat diet for a short period enhanced the enlargement of islets, and the islet area then decreased with continued high fat feeding over a long period. This may indicate compensatory hyperinsulinemia upon high fat and glucose feeding.

Induction of autophagy by hyperglycemia or hyperinsulinemia
The conversion of the cytosolic LC3-I form to the lipidated LC3-II form significantly increased in the allox, shfd and lhfd groups, as indicated by the LC3-II/LC3-I ratio (Fig. 2 A and B). This result indicates an elevated intensity of autophagy under hyperglycemia and under short or long periods of high fat diet feeding. To confirm autophagy induction in hyperglycemia or hyperinsulinemia, another autophagic marker was used: mTOR and its phosphorylated form pmTOR. Immunodetection of mTOR and pmTOR revealed that the level of pmTOR decreased in the allox, shfd and lhfd groups compared with control (Fig. 2 A). The pmTOR/mTOR ratio showed low levels in the above-mentioned groups compared with control (Fig. 2 C).
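The densitometric comparisons above reduce to a per-group ratio followed by the one-way ANOVA named in the statistical analysis. A minimal Python sketch of that step; all band-density values below are hypothetical illustrations, not the measured data:

import numpy as np
from scipy import stats

# Hypothetical LC3-II/LC3-I band-density ratios (arbitrary units), one value per animal,
# already normalized to the corresponding β-actin band
lc3_ratio = {
    "cnt":   np.array([0.31, 0.28, 0.35]),
    "allox": np.array([0.92, 1.05, 0.88]),
    "shfd":  np.array([1.10, 1.22, 0.98]),
    "lhfd":  np.array([0.85, 0.79, 0.91]),
}

f, p = stats.f_oneway(*lc3_ratio.values())  # one-way ANOVA across the four groups
print(f"F = {f:.2f}, p = {p:.4f}")          # p < 0.05 is treated as significant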
Autophagy morphology by EM
Electron microscopic examination of normal hepatocytes revealed a normal appearance of organelles, including a rounded nucleus with a normal distribution of euchromatin. In addition, the hepatocytes contained round and elongated mitochondria intermingled with cisternae of rough endoplasmic reticulum. Lysosomes and abundant glycogen were also observed (Fig. 3, upper left panel). Ultrastructural examination of the allox, shfd and lhfd groups showed evidence of the autophagy process. Numerous autophagosomes of varying sizes, bounded by double membranes, could be observed in the majority of the examined hepatocytes of these groups. Some autophagosomes contained glycogen granules and an assortment of organelles of variable electron density (mitochondria, endoplasmic reticulum and ribosomes) at various stages of degradation (Fig. 3, upper right panel and lower panels). In addition, the cisternae of rough endoplasmic reticulum, especially around the nucleus, were separated by numerous areas of rarefied cytoplasm. In the shfd group, large numbers of rarefied areas in the cytoplasm contained electron-dense myelinated figures, which are end products of the autophagy process (Fig. 3, lower left panel). These myelinated figures appeared abundantly in shfd and rarely in lhfd, indicating the high intensity of the autophagy process in these groups.

General protein depletion in hyperglycemia- and hyperinsulinemia-induced autophagy
In control liver tissue, the hepatocytes showed an intensive reaction for total proteins, as indicated by a dense blue colour (Fig. 4 A). In the allox, shfd and lhfd groups, a marked and significant depletion of protein content was observed in hepatocytes, as indicated by the optical density measurements (Fig. 4 B). In the shfd group, the hepatic protein content was the most reduced of all groups (Fig. 4 B); this may reflect the intensity of autophagy in the shfd group compared with the allox and lhfd groups.
DISCUSSION
In the current study, autophagy was initiated in hyperglycemia induced by alloxan and in normoglycemic, hyperinsulinemic mice fed a high fat and glucose diet for short and long periods. Autophagy is not a one-way process but a complex process initiated under many cellular conditions. In the following, we try to shed light on the mechanisms responsible for autophagy initiation under the effects of hyperglycemia and hyperinsulinemia. The fasting blood glucose levels increased in the alloxan-treated group and decreased non-significantly in the shfd and lhfd groups compared with the cnt group. Alloxan, a β-cytotoxin, is known to induce chemical diabetes in a wide variety of animal species by damaging the insulin-secreting cells, resulting in increased plasma glucose levels and a fall in liver glycogen (Rajathi and Daisy, 2011; Adeyi et al., 2012). Recently, Ankur and Shahjad (2012) reported that DNA fragmentation takes place in β-cells exposed to alloxan, which stimulates poly ADP-ribosylation, a process participating in DNA repair. The non-significant change in fasting blood glucose levels in the shfd and lhfd groups versus the control group may result from a compensatory increase in the islet of Langerhans area, mainly β-cells, which increases insulin secretion and returns the blood glucose to normal levels. In the present study, measurements of the area of the islets of Langerhans in pancreatic tissue showed that the islet area decreased in the allox group compared with the cnt group, whereas the islet area was greatly increased in the shfd and lhfd groups compared with the cnt group. Nermeen et al. (2010) observed an apparent reduction in the size and number of islets in the pancreas of diabetic mice treated with alloxan. Islet hyperplasia was observed in mice infused continuously with glucose (Kinash and Haist, 1954) or fed a high carbohydrate diet (Barberà et al., 2003), and in male Wistar rats fed a high carbohydrate, high fat diet (Panchal et al., 2011). James et al. (2001) showed that neonatal rats fed a high carbohydrate formula by gastrostomy were hyperinsulinemic but normoglycemic. They noted that the rapid increase in islet cell mass that occurs in late fetal and neonatal life in the rat may explain why pancreatic morphology is so sensitive to nutritional insult at this time. Li et al. (2006) explained that expansion of β-cell mass in response to insulin resistance is found in many animal models, and obese humans have an increased islet mass compared with lean individuals. The signal for expansion of islet mass is not clear but likely involves a response to increased glucose flux and may depend on intact insulin signaling pathways within the β-cell. In the allox, shfd and lhfd groups, marked conversion of cytosolic LC3-I into the autophagosome-specific isoform LC3-II was observed, which indicates the presence of the autophagy process in the liver. Autophagy induction by hyperglycemia was reported in streptozotocin-induced diabetes in rats (Satoru et al., 2012). The induction of autophagy in hyperglycemia may be related to the increased plasma glucagon level and/or the decreased insulin, which is corrected by insulin therapy (Amherdt et al., 1974).
Recently, another explanation for autophagy induction in hyperglycemia was introduced by Claudio et al. (2011). They reported that there are diverging upstream proapoptotic signals in both types of diabetes, as a result of mitochondrial dysfunction and production of reactive oxygen species (ROS). Elevation of ROS is essential for autophagy to proceed, because their presence may control the activity of Atg4, a gene that is necessary for autophagosome formation. In the current study, hyperinsulinemia also induced autophagy. However, hyperinsulinemia is accompanied by normoglycemia, so it is clear that autophagy is initiated by a different mechanism in this case. The high fat and glucose feeding may cause temporary hyperglycemia, which in turn induces autophagy regardless of the hyperinsulinemia. Our suggestion is reinforced by the work of Nirmala et al. (2011), who observed that the upregulation of autophagy is reversed when diabetic animals are treated with insulin. Brinda et al. (2010) clarified that several signaling pathways seem to regulate autophagy in mammalian cells. Similar to yeast, the classical pathway involves the mammalian target of rapamycin (mTOR), the mammalian ortholog of the yeast protein kinase TOR, which negatively regulates autophagy. In this study, immunoblot detection of mTOR and pmTOR in the liver of the allox, shfd and lhfd groups showed low levels of pmTOR in comparison to mTOR. The pmTOR/mTOR ratio decreased compared with that in the control group, which indicates the presence of the autophagy process. This result agrees with the explanation of Dos et al. (2005), who showed that the mTOR pathway is a key regulator of cell growth and proliferation, and increasing evidence suggests that its deregulation is associated with human diseases, including cancer and diabetes. Nakatsu et al. (2010) noted that mTOR is a multifunctional serine/threonine kinase that regulates cell growth and survival. When nutrients are abundant, mTOR is phosphorylated and promotes protein synthesis. On the other hand, under energy depletion, autophagy is induced by dephosphorylation of mTOR. Proud (2006) reported that insulin stimulates protein synthesis and cell growth by activation of protein kinase B and mTOR. General protein staining in the current study revealed that hyperglycemia is accompanied by depletion of protein content in liver tissue; the shfd and lhfd groups were likewise accompanied by depletion of protein content. It is therefore suggested that insulin is not the main agent controlling protein synthesis, as previously reported (Proud, 2006), and that autophagy is the process most responsible for the protein depletion in the allox, shfd and lhfd groups of the current study. Here we introduced evidence for the induction of autophagy in two different cases of metabolic disorders, hyperglycemia and hyperinsulinemia. Our findings may shed light on the mechanism and importance of autophagy in diabetes and its complications.

Fig. 2: Immunodetection of LC3 and mTOR. LC3-I and LC3-II, and mTOR and pmTOR, were detected by western blot in liver tissues of the different treatments; representative figures out of at least three independent experiments are shown (A). Band optical densities were estimated with Image J software, normalized to the corresponding β-actin band and expressed as arbitrary units (B and C). Data represent the mean ± SD; * P < 0.05 compared with control.
Fig. 1: Fasting blood glucose and insulin levels in different treatments. Fasting blood glucose level (fbgl) and serum insulin were estimated as described in the Methods section (A and B). Measurement of the area of the islets of Langerhans (C). Representative photomicrographs of pancreas sections of the different experimental groups stained with H & E, showing islets of Langerhans (D); bar equals 50 µm. Data represent the mean ± SD; * P < 0.05 compared with control.

Fig. 3: Electron micrographs of hepatocytes of the different experimental groups. Ultrathin sections (50 nm) were prepared for TEM examination as described in the Methods section. Electron micrographs (×6700) showing autophagosomes (arrows), nucleus (N), mitochondria (M), rough endoplasmic reticulum (RER), lysosomes (L) and glycogen granules (G). In the shfd and lhfd groups there are many electron-dense myelinated figures (arrowheads) representing residual bodies of late lysosomes, indicating an extensive autophagy process in rarefied areas of the cytoplasm (asterisk).

Fig. 4: General protein content of the liver in different treatments. Representative photomicrographs of liver sections of the different experimental groups stained with bromophenol blue (A); bar equals 50 µm. Colour optical density was estimated with Image J software and expressed in arbitrary units (B). Data represent the mean ± SD; * P < 0.05 compared with control.
4,484.2
2015-06-01T00:00:00.000
[ "Medicine", "Biology" ]
Not the sum of their parts: understanding multi-donor interactions in symmetric and asymmetric TADF emitters †

A pair of thermally activated delayed fluorescence (TADF) emitters with symmetric and asymmetric D–A–D structure are investigated. Despite displaying near-identical photoluminescence spectra and quantum yields, the symmetric material possesses significantly better delayed fluorescence characteristics and OLED performance. Building on a previous study of analogous D–A materials we are able to explain these differences in terms of different strengths of electronic interactions between the two donor units. This interaction lowers the energy of the TADF-active triplet state in the asymmetric molecule, increasing its singlet–triplet energy gap and leading to worse performance. This result therefore demonstrates a new strategy to selectively control the triplet states of TADF molecules, in contrast to established control of singlet states using the host environment. These results also show that multi-donor TADF emitters cannot be understood simply as the sum of their isolated parts; these parts have different electronic interactions depending on their relative positions, even when there is no scope for steric interaction.

Introduction
Due to tremendous research efforts in recent years, purely organic thermally activated delayed fluorescence (TADF) materials have proven their potential for optoelectronic applications. 1 Not only have TADF materials found utilization in highly efficient organic light emitting diodes (OLEDs), 2-4 but their emissive and triplet-management properties have also enabled cross-disciplinary applications in fluorescence sensing and imaging, 5 optical temperature sensing, 6 and catalysis. 6 The widely acknowledged success of TADF emitters is primarily due to their near optimal quantum efficiency in electroluminescent devices; 100% values can be achieved 7 in comparison to the 25% limit for conventional fluorescence emitters. Additional benefits include their largely reduced cost, lowered toxicity, and potential ability to achieve deep blue emission - each of which is an intractable challenge for pre-existing rare- or heavy-metal containing organometallic phosphorescent emitters. 8 These merits have brought TADF emitters to the forefront of materials science, and intense research directed towards deeper understanding of the underlying mechanism and development of novel compounds continues presently. The TADF mechanism is based on a second-order spin-vibronic coupling between a charge transfer triplet state (³CT) and a local excited triplet (³LE) that mediates the up-conversion reverse intersystem crossing (rISC) of the coupled ³LE/³CT triplet(s) to the emissive charge transfer singlet (¹CT) state. 9,10 In turn, achieving fast rISC directly depends on the minimization of the singlet-triplet energy gap (ΔE_ST) - an essential, but not sufficient, condition for the observation of TADF. Much work has been carried out to discover chemical motifs that minimize the ΔE_ST gap, and correspondingly maximize rISC. [11][12][13][14][15][16] As a result of this multidisciplinary work, generic design rules for successful TADF emitters have emerged. 17 Primarily, bridging of sterically hindered electron donor (D) and acceptor (A) groups in a twisted D-A architecture commonly results in weakly overlapping highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO).
Consequently, such materials frequently possess charge transfer (CT) states with low electron exchange energy. [18][19][20] In recent years, through-space (exciplex-like) D-A interactions and non-overlapping single-molecule multiresonant electronic structures have also been shown to deliver unexpected and outstanding TADF performance. [21][22][23][24][25][26] The choice of specific D and A chromophores, and any structural modulation of the dihedral angle between them, is often the foremost tool for tuning the CT character and emission energy and for reducing ΔE_ST towards engineering efficient rISC and TADF. However, additional undesirable effects such as red-shifting and broadening of the emission, as well as severe reduction of the oscillator strength, 27 can also occur somewhat unpredictably. In attempts to realize narrow TADF emission of a target colour and high efficiency, several investigations into more subtle aspects of molecular design have been reported. Recently, binding the D and/or A chromophores through various linking topologies (ortho-, meta-, para-) was shown to be an effective strategy for fine-tuning the energy levels and couplings of the moieties. 13,26,[28][29][30][31][32][33] For instance, in our previous work we attributed the differences in the performance of isomeric TADF emitters to various resonance and inductive effects around the acceptor unit's aromatic π-system. 29 This work showed that control over the dihedral angle alone is insufficient for fast rISC and efficient TADF, and that chemically identical donors can induce different TADF properties purely based on the position in which they are installed. Despite different conjugation strengths at the meta- and para-positions (expected to lead to varying extents of molecular planarization), the dimethylacridine (DMAC) donor was also found to have the same dihedral angle at either position. This surprising result was explained in terms of the DMAC donor self-regulating its steric environment with the C-H bonds at the 1- and 8-positions. This final conclusion stands in contrast to other more compact donors such as carbazole, 34,35 with dihedral angles that are more susceptible to external influences. This property also makes DMAC an ideal donor for comparing more subtle aspects of molecular design, with the influence of dihedral angle variation largely controlled. Another popular strategy in TADF material design involves introduction of additional donors, resulting in D-A-D or D-A-D′ molecular architectures. A plethora of multicolour TADF emitters have been developed using this approach. [36][37][38][39][40][41][42][43][44] Even a number of white emitters with D-A-D′ structure have been reported, [45][46][47] although it remains unclear how to correlate the properties of the D-A-D′ materials with those of the individual D-A and D′-A analogues. 48 An early advance in the D-A-D approach was made by Adachi et al., who introduced multiple donors with mutual steric interactions. For instance, 4CzIPN is a high-performance green TADF emitter based on the multi-donor approach which has received sustained research attention. [49][50][51] While originally it was believed that introduction of multiple donors ensured fixed dihedral angles between the D and A, the cumulative electronic effect of the donors was more recently attributed to the sum of the donating fragments 52 in a 'bottom up' investigation.
Similar recent reports on multi-carbazole systems have also attempted to explain findings in terms of the influences of individual donor units 53,54 on the larger electronic system. This multi-donor approach has also inspired a number of subsequent works. [55][56][57] For instance, Oh et al. focused on the acceptor substitution pattern in a series of isomeric multi-donor TADF emitters, comprising carbazole and 2,4-diphenyl-1,3,5-triazine as the donor and acceptor, respectively. 56,58 Their thorough theoretical and experimental approach allowed them to unravel the complexity of the steric interactions between the donors. From the photophysical analysis, the authors concluded that 2-/3- and 2-/6-substitutions of the donors feature decreased energy gaps and shortened delayed fluorescence lifetimes by means of large dihedral angles of the donors. Such a dihedral effect allowed a degree of control over the energy gap and the rISC rate, resulting in OLEDs with correlated efficiencies and roll-offs. While significant attention in this work was dedicated to the investigation of steric effects (which dominate dihedral angles for carbazole donors 34,35), many questions regarding the electronic communication in multi-donor TADF emitters remained unanswered. Building on the previous findings of our group 29,56 and aiming to better understand the connection between analogous D-A and D-A-D molecules, we investigate two isomeric D-A-D TADF emitters comprised of a benzonitrile acceptor and acridine donors attached at the 2,5- or 2,6-positions of the acceptor. Comparison to previously reported D-A materials (facilitated by the self-regulating dihedral angle of DMAC 59) allows us to compare these systems with minimal additional complexity introduced by the second D unit. Using a combination of experimental and theoretical methods, we demonstrate that electronic interaction between the donating moieties - modulated by the relative position of each - alters the ³LE energy and thus also ΔE_ST and TADF performance. We therefore demonstrate a viable strategy for selectively controlling the LE triplet energy in multi-donor TADF emitters, without altering the CT singlet energy. This provides a counterpart to the commonly employed host-tuning strategy that minimises ΔE_ST by external action on the polarity-sensitive CT singlet state. 3,60 Furthermore, this work establishes that multi-donor TADF emitters cannot be understood simply as the sum of the donating fragments, or as perturbations of analogous D-A materials. Instead, emergent inter-donor interactions must be taken into account, which immediately disqualify such bottom-up approaches. Additional thermal, electrochemical, and crystallographic properties of the two molecules are also included in the ESI, † demonstrating their near-identical physical properties - including equal electron affinity and ionisation potentials.

Photophysical properties
In anticipation of their applications in OLEDs and guided by optimised doping concentrations reported for similar DMAC-containing TADF materials, 13,61 the optical properties of (o,m)ACA and (o,o)ACA were investigated primarily in 25% v/v co-doped evaporated films using bis[2-(diphenylphosphino)phenyl]ether oxide (DPEPO) as host. Fig. 1a shows the UV-vis absorption, photoluminescence (PL), and time-resolved low temperature phosphorescence (PH) spectra of the films. Also shown are comparisons of the PL (Fig. 1b) and PH spectra
(Fig. 1c) with those of oDA and mDA (10% w/w drop-cast films in DPEPO), the single-donor D-A analogues. The similarities in optical properties between (o,m)ACA and (o,o)ACA are striking, with UV-vis and PL spectra nearly identical (Fig. 1a). This trend is also preserved in a range of different solvents (Fig. S9 and S10, ESI†). oDA and mDA also have very similar singlet energies to each other in DPEPO (taken from the PL onset wavelength), although with mDA marginally higher in energy than oDA and with a broader PL band. This trend is consistent with what was previously reported for these D-A materials in the polymer host zeonex, and arises from differences in electron-hole separation in the CT excited state 62 as well as differences in acceptor strength at different locations around the central benzonitrile ring. 29 Key photophysical properties are presented in Table 1, with the similarities in singlet energy and PLQY strongly indicating that both (o,m)ACA and (o,o)ACA emit through the same CT state, formed by the donor unit ortho- to the acceptor unit. Conversely, the meta-donor unit in (o,m)ACA is expected to form a higher-energy CT state (as it does in mDA compared to oDA), and thus must have limited influence on the singlet state properties in (o,m)ACA - evidenced by its identical PL spectrum to (o,o)ACA, which does not possess this structural feature. Both (o,m)ACA and (o,o)ACA have significantly lower singlet state energies than the D-A materials, despite both lacking the tBu donor substituents that make the donor more strongly electron donating. 61 We note that due to this structural difference the energies of the D-A and D-A-D materials should not be compared directly - only the trends within each pair. Although the absence of the tBu groups would typically lead to weaker CT strength and blue-shifted emission, the opposite observation here hints at cooperative effects between the two donors yielding a stronger overall CT state than each D can generate alone. Similar effects are likely responsible for the different emission colours of 4CzIPN/2CzIPN and 4CzPN/2CzPN (both pairs green/blue 49,52 with additional/fewer Cz units), and other multi-carbazole systems. 56 However, this comparison is complicated by the potential for steric interactions between neighbouring carbazoles. 34 Such steric interactions can be disregarded for the well-spaced and intrinsically perpendicular DMAC donors in the present materials though, giving clearer insight into the purely electronic effects associated with different substituent positions. In terms of triplet energies the materials show more noticeable differences. The triplet energy of mDA (2.95 eV, from the PH onset wavelength) is higher than that of oDA (2.91 eV), which was previously explained in terms of different D-A coupling strengths and conjugation at different positions relative to the A, arising from the effects of electronic resonance structures. The triplet energy of (o,m)ACA (2.81 eV) is, surprisingly, significantly lower than that of (o,o)ACA (2.86 eV). In both materials the common oDA sub-unit appears to control the lowest energy triplet state, but it is not readily apparent why (o,m)ACA has a significantly lower triplet energy (by 50 meV) than (o,o)ACA. Indeed, the only structural difference is the presence of the mDA sub-unit in (o,m)ACA, which has a higher intrinsic triplet energy. Combined with near-identical singlet energies, this lower triplet energy therefore also leads (o,m)ACA to have a significantly larger ΔE_ST gap.
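The energies and the ΔE_ST gap quoted here follow directly from the spectral onsets via E [eV] = hc/λ ≈ 1239.84/λ [nm]. A minimal Python sketch of the conversion; the onset wavelengths below are hypothetical illustrations, not the measured values:

def onset_energy_ev(wavelength_nm: float) -> float:
    # photon energy in eV from an onset wavelength in nm
    return 1239.84 / wavelength_nm

s1 = onset_energy_ev(431.0)  # hypothetical PL onset -> singlet energy
t1 = onset_energy_ev(441.0)  # hypothetical PH onset -> triplet energy
print(f"S1 = {s1:.2f} eV, T1 = {t1:.2f} eV, dE_ST = {s1 - t1:.3f} eV")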
Although it is not immediately clear how this intrinsically higher triplet energy structural subunit could lead (o,m)ACA to have a lower overall triplet energy, the consequences of this difference are immediately evident in subsequent measurements. The emission spectra remain constant throughout the prompt fluorescence time regime (PF, 1-100 ns), later undergoing a slight spectral redshift through the delayed fluorescence (DF, 100 ns onward). This behaviour is typical of TADF materials with C-N linkages, arising from structural relaxation and/or dispersion of rISC rates associated with a distribution of molecular geometries and CT energies. Overall the decay kinetics of (o,o)ACA and (o,m)ACA are much more alike than those of mDA and oDA. In particular the similar PF decay lifetimes (Table 1) strongly suggest that the emission emerges from the same CT state in both materials (i.e. that formed between the acceptor and an ortho-donor). In contrast the PF lifetimes are considerably different in mDA and oDA, reflecting the different CT states which give rise to these emission regimes in those two materials. In the DF regime the delayed emission is significantly stronger and more rapid for oDA than for mDA, previously explained by the different donor-acceptor electronic couplings at the different positions around the benzonitrile ring. While the DF emission is both weaker and slower in (o,m)ACA than in (o,o)ACA, the differences between the D-A-D materials are much less pronounced than for the D-A materials. Nonetheless, this inferior DF performance in (o,m)ACA presumably results from smaller relative rates of ISC (controlling DF intensity) and smaller rates of rISC (controlling DF decay rate) extracted by kinetic fitting of the decays. 63 In (o,m)ACA and (o,o)ACA the established differences in ΔE_ST are able to explain these differences, as both processes rely on near-isoenergetic electronic states to make the otherwise spin-forbidden rISC process proceed at appreciable rates.

OLED performance
The electroluminescence performances of (o,o)ACA and (o,m)ACA were investigated in a previously optimised 13,61,64-66 device architecture consisting of ITO|NPB (40 nm)|TSBPA (10 nm)|emitter:DPEPO x vol% (30 nm)|DPEPO (10 nm)|TPBi (40 nm)|LiF (1 nm)|Al (100 nm). The concentration of emitter in the emissive layer was optimised at 20% for (o,o)ACA, with this concentration then also used for (o,m)ACA. The key electroluminescence properties of the devices are presented in Fig. 3 and in Table 2. This device architecture relies on DPEPO for electron transport through the emissive layer, with the DMAC-containing emitter providing hole transport. Consequently, (o,o)ACA devices using hole-transporting mCP as the emissive layer host (with no material capable of providing electron transport) displayed slightly blue-shifted emission spectra but much lower efficiencies (typically <10% EQE_max, Fig. S8, ESI†). This is despite the triplet energy of mCP (~2.97 eV). 67 The current-voltage curves also indicate near-identical charge transport properties as well. We note that the emission colour is not as deep-blue as similar materials reported by Noda et al., 68 confirming that the analogous diphenylacridine D unit is a weaker electron donor than DMAC. The maximum external quantum efficiencies (EQE_max) of the two emitters are both in line with their similar PLQYs and different rISC rates, which govern OLED performance in the low-driving regime where rISC competes favourably with other quenching mechanisms.
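The kinetic fitting referenced above typically amounts to fitting the PF and DF regimes with two exponential components and converting the fitted amplitudes and lifetimes into rates. A minimal Python sketch using synthetic decay data; the closing k_rISC expression assumes a near-unity ISC yield, a common approximation rather than the authors' exact procedure:

import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a_pf, tau_pf, a_df, tau_df):
    # prompt (PF) and delayed (DF) fluorescence as two exponential components
    return a_pf * np.exp(-t / tau_pf) + a_df * np.exp(-t / tau_df)

rng = np.random.default_rng(0)
t = np.logspace(0, 4, 200)                      # time axis in ns
counts = biexp(t, 1.0, 20.0, 0.01, 2000.0) * rng.normal(1.0, 0.02, t.size)

(a_pf, tau_pf, a_df, tau_df), _ = curve_fit(biexp, t, counts, p0=(1, 10, 0.01, 1000))
phi_ratio = (a_df * tau_df) / (a_pf * tau_pf)   # DF/PF intensity ratio from the fit
k_risc = (1.0 / tau_df) * (1.0 + phi_ratio)     # approximation assuming near-unity ISC yield
print(f"tau_PF = {tau_pf:.1f} ns, tau_DF = {tau_df:.0f} ns, k_rISC ~ {k_risc:.2e} ns^-1")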
At higher driving voltages the performance of the (o,m)ACA device suffers considerably, as the same quenching processes that rISC competes with at low driving conditions scale strongly with current and exciton density. Normalised EQE curves are presented in Fig. S8 (ESI†) to facilitate comparison of this efficiency roll-off behaviour. Accordingly, the maximum brightness that the (o,m)ACA device can achieve is also lower. All of this behaviour is consistent with its lower rate of rISC, leaving it unable to harvest triplet states fast enough to avoid multi-exciton or charge-exciton quenching and degradation processes at larger driving currents. While the (o,o)ACA device also suffers quenching at higher driving, it is able to resist these processes more effectively due to its faster rISC rate. The device performance is therefore entirely in line with expectations built upon the preceding optical results. We note that alongside the following physical insights arising from comparison of (o,m)ACA and (o,o)ACA, these results also establish both materials as objectively high-performance cyan TADF emitters. Such emitters with good TADF but less-than-ideal emission colour are currently enjoying expanded utility as sensitisers for deep-blue hyperfluorescence OLEDs. 53,65,[69][70][71][72][73]

Discussion and DFT calculations
From a materials design perspective, the interesting question is why the triplet energies are so different, despite the similar chemical subunits and identical singlet energies of the two materials. This question is especially difficult to answer as it goes against the trend established in the simpler oDA and mDA materials. This outcome means that the underlying cause must somehow be an emergent property arising from the presence of both donors and their resulting interactions - interactions absent in the D-A materials. If this were not the case (i.e. if these materials behaved simply as the sum of their D-A analogues/fragments), we would expect the triplet energies either to be identical, or to follow the same trend as seen for the D-A materials, with the mDA fragment leading to a higher overall triplet energy in (o,m)ACA - in conflict with observation. To better understand this behaviour we turn to DFT calculations. Calculations were performed with the Gaussian 09 package 74 using isolated molecules optimized at the rBMK/6-31G(d) level in the gas phase. The spectroscopic properties of the molecules and their excited states were calculated by means of time-dependent DFT (TD-DFT), 75 also employing the 6-31G(d) basis set. The BMK functional was chosen as it has been shown to be adequately reliable for the description of the low energy excited states in D-A CT compounds (including oDA and mDA), both by us 29,61,[76][77][78] and by others, including in benchmarking studies. [79][80][81] Fig. 4 shows the NTOs and energies calculated for the relevant triplet and singlet states in (o,m)ACA and (o,o)ACA. Inspecting the singlet NTOs in (o,m)ACA, we first note that the CT singlet associated with the ortho-donor (S1) is lower in energy than that associated with the meta-donor CT state (S2). This is in agreement with expectations and the trends established for oDA and mDA, while the similarities between the donor/acceptor-centred NTOs here and the donor/acceptor-centred HOMO/LUMO distributions previously reported for oDA and mDA confirm that these are predominantly CT transitions. In (o,o)ACA the S1 and S2 states are much closer in energy, and each involves both of the ortho-donor units.
We suggest that these represent symmetric (S1) and antisymmetric (S2) combinations of otherwise degenerate CT states associated with either the left or right donor individually, and that the involvement of both donor units may contribute to the slightly faster k_f and higher PLQY observed for (o,o)ACA. This is analogous to the formation of symmetric (bonding) and antisymmetric (antibonding) molecular orbitals from combinations of degenerate atomic orbitals (Fig. 5a). For the first two triplet states, of CT nature, similar trends are observed. The first calculated triplet state of LE nature is T3, centred on the A unit in both materials and with nearly identical NTOs. This LE triplet state is the one relevant to vibronic coupling and rISC, and corresponds to the same triplet state identified by the phosphorescence measurements in the previous sections (labelled in that section as T1, with CT triplet states frequently non-emissive). To discount the alternative assignment (i.e., PH from CT states), we note that although the PH spectra are not structured, this alone is not enough to assign CT character to the PH state. Ultimately, in the discussions below we present a mechanism that can cause the LE triplet states to be significantly different in energy while leaving the CT singlet states unaffected - as is observed experimentally. In contrast, we are not aware of any mechanism that could explain different CT triplet states in (o,m)ACA and (o,o)ACA while leaving the CT singlet states unaffected. Interestingly, the calculated T3 energies of (o,m)ACA and (o,o)ACA are in the opposite order to that found experimentally, with about the same difference in triplet energies in both cases (~50 meV). In the following discussion we propose a mechanism that explains the observed triplet energy ordering and why this is not reflected in calculations. We note that the reason for the experimental (o,m)ACA triplet energy being lower than that of (o,o)ACA cannot be the combination of individual couplings of the A to the two D units. If this were the case we would expect the two materials either to have identical lowest triplet energies (from coupling between the A and the ortho-D in each material), or for (o,m)ACA to have a higher triplet energy than (o,o)ACA (due to coupling between A and meta-D, which is intrinsically higher in energy, as in mDA). Any such state-mixing between LE and CT states is also unlikely to be a contributing factor, due to the forbidden nature of mixing these states with different orbital symmetries. 82,83 Instead we propose that the LE T3 states in both (o,m)ACA and (o,o)ACA interact with higher-lying LE states delocalised across both donor units (D-D states). A representative state diagram is presented in Fig. 5b, showing how these unoccupied electronic states would form. A similar explanation was recently employed to explain the performances of a series of differently connected multi-carbazole TADF materials, although that study invoked the active participation of delocalised multi-D or multi-A states in the formation of CT states. The conclusions of that work are also complicated by the potential for additional steric interactions. 53 These factors are avoided here by the use of well-spaced donors, and the ability of DMAC to manage its own steric environment. 29 The proposed D-D states are (to first approximation) formed by linear combinations of the individual donor LE states
(Fig. 5b), and so one of these D-D states (the symmetric combination) is expected to be the lowest-energy LE singlet state in each molecular system. This expectation is supported by the absorption spectra of the two materials, discussed in more detail below and presented in Fig. S9 (ESI†), which show the first major absorption band at a wavelength consistent with DMAC. 84 Accurately accounting for such interactions with unoccupied states would instead require more advanced multireference or complete active space ab initio methods, which are impractical for molecules of this size. Applying molecular orbital theory to the symmetric and asymmetric D-A-D systems, we can infer several properties of the D-D states and how they would differ. Due to the stronger conjugation across the linker unit for (o,m)ACA (pD-D state) than for (o,o)ACA (mD-D state), we would expect the pD-D state to be lower in energy and to have larger electron density on the central bridge region (Fig. 5b). This would subsequently lead to a larger orbital overlap between the pD-D state and the ³LE state associated with the A unit (³LE_A) in (o,m)ACA, as compared to mD-D in (o,o)ACA. The resulting state mixing between the D-D and ³LE_A states lowers the observed phosphorescence energies in both materials compared to calculations, which cannot account for interactions with unoccupied orbitals. Due to the increased orbital overlap, the state mixing with ³LE_A is more extensive for pD-D than for mD-D, leading to a yet lower triplet energy in (o,m)ACA and the observed ordering of experimental phosphorescence energies (Fig. 5c). While other higher-energy LE states would also be influenced by interactions with the D-D states, none of these higher LE states are measured or expected to influence the TADF properties. Due to differences in the shapes of their excited state wavefunctions leading to a zero overlap integral, the CT states are not expected to interact with the D-D states, and so are totally unaffected both in calculations and experiment (identical PL spectra and S1 energies). Supporting these expected properties of the D-D states, similar trends in excited state energy in other para- or meta-linked bichromophores are commonly reported, 67,85,86 for example the higher triplet energy of mCBP (2.8 eV) compared to para-linked CBP (2.6 eV). Donor interactions of a similar nature over para-linkages may also be responsible for lower energy emission in recently reported multi-resonance materials using para-bichromophore designs, 26 or when decorated with additional carbazoles. 33 This explanation is also entirely consistent with the effects of donor position on excited-state energies previously reported for oDA and mDA 29 and by others in analogous systems. 32 The lower triplet energy in (o,m)ACA is therefore identified as an emergent property of the pair of donors. This lowering of triplet energy is irrelevant to the analogous oDA or mDA materials, and is impossible to predict by considering these fragments in isolation. These results therefore demonstrate that a 'bottom-up' approach to understanding TADF materials - recently espoused for 4CzIPN 52 - is simply untenable, as it cannot account for these kinds of emergent higher-order effects. We also suggest that much of the complex photophysics of 4CzIPN is more likely attributable to the presence of persistent dimer species. 34,50,51 Although based on well-established principles of molecular orbital theory, much of the previous explanation is speculative.
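The proposed mechanism can be made concrete with a toy two-state model: mixing ³LE_A with a higher-lying D-D state through a coupling V stabilises the lower state by roughly V²/(E_DD − E_LE), so a stronger pD-D coupling yields a lower mixed triplet. A minimal numpy sketch, where every energy and coupling value is a hypothetical illustration:

import numpy as np

def mixed_energies(e_le: float, e_dd: float, v: float):
    # 2x2 model Hamiltonian for the LE(A) triplet and a D-D state coupled by v (all in eV)
    h = np.array([[e_le, v], [v, e_dd]])
    return np.linalg.eigvalsh(h)  # lower eigenvalue = stabilised LE-like state

# Hypothetical inputs: LE(A) triplet at 2.95 eV, D-D state at 3.60 eV
for label, v in [("weak mD-D coupling, as proposed for (o,o)ACA", 0.10),
                 ("strong pD-D coupling, as proposed for (o,m)ACA", 0.25)]:
    low, high = mixed_energies(2.95, 3.60, v)
    print(f"{label}: mixed LE triplet at {low:.3f} eV")

With these illustrative numbers the stronger coupling lowers the LE-like triplet by a few tens of meV more, reproducing the direction (though not necessarily the magnitude) of the experimental 50 meV difference.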
Nonetheless, some evidence for the existence of the proposed D-D states can be found in the experimental absorption spectra (Fig. S9, ESI†). In DPEPO films and a range of solvents we consistently observe a redshift of the main absorbance band (peak at ~275 nm, attributed to DMAC) in (o,m)ACA compared to (o,o)ACA. We suggest that this redshift is due to the presence of a weak underlying band associated with excitation of the pD-D singlet state. In (o,o)ACA the mD-D state is expected to exist at higher energies, and therefore remains subsumed by the main donor DMAC absorption band. These D-D states then go on to influence the relevant LE triplet energy in each material. Furthermore, the absorbance spectra also show the same weak direct CT absorption bands in both (o,o)ACA and (o,m)ACA at ~375 nm. In each material this band corresponds to two closely spaced (unresolved) CT state absorptions, consistent with the DFT calculations and prior understanding of the oDA and mDA materials. In both cases this indicates that formation of the CT state involves only a single donor and is unimpacted by the presence of the other (consistent with both materials sharing the same PL spectrum). Because (o,m)ACA and (o,o)ACA introduce minimal additional complexity compared to oDA or mDA, there are few explanations aside from D-D interactions that can potentially explain the trends seen here. While an intuitively satisfying example of basic physical chemistry principles in action, these results also demonstrate a new method of control in TADF materials. In contrast to external host-tuning of CT singlet states to minimise ΔE_ST, 3 multi-donor interactions may in future be used as a tool to selectively tune triplet states. These results also firmly demonstrate that 'bottom-up' approaches to understanding TADF materials are overly simplistic, and that understanding the properties of D-A-D materials purely in terms of their D-A subunits may not be a generally achievable goal.

Conclusion
Two D-A-D TADF materials were compared with analogous D-A compounds. Despite displaying near-identical singlet energies and PLQYs, the triplet energies - and subsequent TADF performances - were markedly different and showed opposite trends to the D-A materials. We suggest that molecular orbital interactions with higher energy multi-donor LE states are responsible for these unexpected changes in triplet energy, with the interaction strength modulated by the linkage patterns of the two donor subunits. The identification of these emergent multi-donor effects - not complicated here by any additional impacts of steric environment changes - demonstrates that bottom-up approaches to understanding TADF behaviour are unlikely to succeed. This includes the previously coveted ability of extrapolating D-A-D properties from those of smaller D-A fragments. These multi-donor effects nonetheless demonstrate a new approach for selectively tuning molecular triplet states, which may work in tandem with more well-established host-tuning of excited singlet states.

Conflicts of interest
There are no conflicts to declare.
6,529.6
2021-11-18T00:00:00.000
[ "Materials Science" ]
The Influence of Dividend Payments on Share Price in Manufacturing Firms Quoted on the Nigerian Stock Exchange

This paper examined the influence of dividend payments on the share price of quoted manufacturing companies in Nigeria, employing panel data with 125 observations spanning 2014-2018. A purposeful sampling technique was used to select the twenty-five manufacturing companies investigated from the Nigerian stock market. A linear regression model was specified and further broken down into bivariate regression models, and the method of least squares regression was adopted for data analysis. The outcome of the panel regression indicated that dividend per share has a positive influence on the share price of high and low geared manufacturing firms; earnings per share positively influences the share price of both dividend and non-dividend paying manufacturing companies; dividend yield shows an adverse effect on the share price of new and old manufacturing companies; and credit risk was found to positively impact the share price of big manufacturing companies but to adversely affect the share price of small manufacturing companies in Nigeria. In view of the outcomes of the analysis, the study therefore recommended that a conducive and favorable business environment should be created by the government for both old and new manufacturing companies in Nigeria to thrive. Also, credit risk should be effectively and efficiently managed, by small manufacturing companies in particular, in order to eliminate its adverse influence on their share price.

I. Introduction
The basic concept of dividend payout, as well as its policy, has remained one of the major issues generating controversy in corporate finance. Since the inception of joint stock firms, the payment of dividends by firms has been an interesting issue in the financial literature. Over the years, financial economists have engaged in the modeling and evaluation of corporate dividend payouts and returns, as these influence the stock price of firms in Nigeria (AlQudah et al., 2015). In simple form, a dividend is the apportionment of returns or earnings in real assets among the company's shareholders in proportion to their share ownership. The payment of dividends is quite sacrosanct for the effective and efficient management of a business operation to ensure its survival, and it has been viewed as one of the most vital tools for assessing the existence and performance of corporate organizations. Dividends play a great role in restoring shareholder confidence and are extremely significant as a result of their potential negative influence on share values. A stable dividend policy is anticipated to bring about a higher share price due to the high level of investors' confidence regarding the potential of the firm to make higher profit in the future. However, the firm still needs to be assessed in a wider scope. Basically, at the end of every financial period, firms assess their financial performance by determining whether earnings have been actualized or not. The payment of dividends continues to be viewed as one of the most significant financial policies, not only from the firms' outlook but also from that of the employees, the consumers, shareholders, regulatory bodies and the government (Jakata and Nyamugure, 2015). A dividend is generally specified as a percentage of the nominal value of the firm's ordinary share capital or as a fixed amount per share.
In all, when the market is greatly influenced either positively or negatively, this may exert a similar influence on dividend payments and policies. AlQudah et al. (2015) posited that the rationale behind the dividend payout of firms is the need for cash and/or the desire to avoid agency costs and minimize the insecurity of investors. Sharma (2011) indicated that earnings and dividend per share are key components that provide vital information regarding the value of the share price in the market. Previous studies conducted in Nigeria on dividend payout appear to have revealed that the dividend payment trend has been inconsistent in various sectors of the economy. Jakata and Nyamugure (2015) concluded that the association of dividend payment with equity share prices has produced conflicting results depending on the sector in which the study is conducted. However, considering the manufacturing industry, the payment of dividends has been inconsistent. Thus, the challenges identified prompted the researcher to carry out this study, with the main purpose of ascertaining the influence of dividend payments on the share prices of quoted manufacturing firms.

II. Literature Review
Under the bird-in-hand theory, Lintner (1956) and Gordon (1959) formally contended that investors need to realize their wealth for the purpose of consumption and hence prefer cash dividends to capital gains. However, this was theoretically opposed by Miller and Modigliani (1961), whose seminal paper argued that dividends and capital gains are substitutes for each other. Moreover, 'home-made dividends' could be produced by investors by selling stock if that is what they have decided to do. This theory is majorly adopted by firms to justify the essence of having a well-established dividend policy in operation. Traditionally, the Bird in Hand Theory posits that the share prices of firms can be influenced via variation in their dividend policies. The theory further asserts that dividends are preferred by investors to capital gains because 'a bird in the hand is worth more than one in the bush'. That is to say, a dividend today is preferred to a capital gain that is not certain in the future (Gordon, 1963). Several empirical studies have been conducted at home and abroad on the association of dividend payment and policy with share price, as to whether there exists a positive or negative association between the variables. A few such studies are empirically reviewed below. A study was conducted in Nigeria by Augustine et al. (2019) investigating the association of the dividend payout ratio with the value of brewery and beverage companies quoted on the Nigerian Stock Exchange (NSE). The study also examined other factors that influence firm value; the variables (cash holding, profitability, company size, leverage and dividend policy ratio) were regarded as the factors influencing company value. OLS regression analysis was adopted to analyze the secondary data collected from the firms, spanning 2007-2016. It was established that profitability and leverage ratio have a significant and positive influence on the companies' value. This implies that only Firm Leverage and Profit after Tax are significant factors driving firm value in both brewery and beverage companies among listed companies in Nigeria.
Hence, the work suggested that policies which will optimize the leverage ratio of companies should be put in place, and that companies which wish to optimize their values should ensure that profit after tax is maximized. Adopting a panel least squares regression method, Alfred et al. (2019) appraised the influence of dividend policy on the stock prices of ten consumer goods companies listed on the Nigerian stock exchange. The secondary data were collected from the financial statements of the firms investigated, spanning 2011 to 2015, and the study established that dividend yield influences market share price insignificantly and adversely; earnings per share and dividend payout ratio influenced market share price significantly and positively; while net asset per share showed a non-significant positive impact on market share price. In view of the outcomes of the analysis, it was inferred that dividend policy has the potential to impact stock prices in the consumer goods sector, pointing out that the dividend irrelevancy theory does not hold in the Nigerian case. By investigating about two hundred and twenty-eight quoted companies on the Amman Stock Exchange, Muhannad et al. (2018) obtained data spanning 2010 to 2016 to ascertain the influence of dividend policy on the stock price fluctuation of the study sample. With the adoption of Pearson correlation analysis and panel GMM estimation to investigate the association between the observed variables, it was established that dividend payout and dividend yield have a significant adverse relationship with stock price fluctuation. This means that the more companies increase their dividend payout and dividend yield, the more stock price fluctuation is reduced, which invariably brings about a more stable stock price. Hence, the study recommended that a dividend policy that is favorable to both current and future investors should be formulated and maintained by the companies quoted on the Amman Stock Exchange. Between 2006 and 2015, a similar study was conducted in Nairobi on six insurance firms quoted on the Nairobi Securities Exchange, and the regression analysis adopted by Joseph and Symon (2017) revealed that earnings per share, dividend yield and inflation significantly and positively influenced share price value. Based on these outcomes, it was inferred that dividend policies should be thoroughly and accurately considered by insurance companies because of their potential to make the stock price either increase or decrease based on the dividends declared by the firms' management. Thus, management is highly required to be honest and responsive in declaring dividends. To further ascertain how the payment of dividends affects share value, Akram (2017) sourced data from 44 companies quoted on the Istanbul Stock Exchange spanning 2007-2015, and a fixed effects analysis was employed. The result of the analysis indicated that the payment of dividends has a significant and positive association with company value. The outcomes of this study upheld the agency cost theory and inferred that the dividend irrelevance hypothesis is not valid for the companies quoted on the ISE. Ahmed et al.
(2017) conducted a study on the association of dividend policy with the stock prices of firms in the banking sector. Data for 2005 to 2014 on the financial structure and the basic dividend policies of the firms investigated were sourced from the financial statements of five selected banks and from the websites of the State Bank of Pakistan and the Karachi Stock Exchange. The outcomes indicated that a sound dividend policy plays a significant role in attracting potential investors as well as making a substantial contribution towards enhancing the financial structure of companies. Furthermore, the findings revealed that dividend policies may have a significant and positive effect on stock prices if they are considered and executed after a thorough investigation of the financial structure and the dividend policies of various companies. III. Methodology Secondary data were employed in this study, collected from the financial statements and average yearly share prices of the chosen manufacturing companies quoted on the Nigerian Stock Exchange for the period 2004-2018. The study covers manufacturing companies that specialize in consumer goods, industrial goods, technological development, oil and gas, health care and basic materials. The Panel Ordinary Least Squares technique was adopted for analyzing the panel data in order to assess the influence of, and the association between, the variables observed in this research. On the basis of the literature, the model relates share price (SP) to dividend per share (DPS), earnings per share (EPS), dividend yield (DY) and credit risk (CR), and takes the form SP_it = β0 + β1·DPS_it + β2·EPS_it + β3·DY_it + β4·CR_it + ε_it, where i indexes firms, t indexes years and ε is the error term. Regression Analysis for High Geared Manufacturing Firms: investigate the influence of dividend per share on the share price of high geared manufacturing firms in Nigeria. Restatement of Objective Two: understand the impact of earnings per share on the share price of listed non-dividend paying manufacturing companies in Nigeria. Analysis for Old Manufacturing Firms: the firms presented in the data presentation were regrouped into old and new manufacturing firms based on their years of operation as manufacturing firms in Nigeria. Restatement of Objective Three: find out the influence of dividend yield on the share price of quoted old manufacturing firms in Nigeria. Restatement of Objective Three: find out how dividend yield influences the share price of quoted new manufacturing firms in Nigeria. Restatement of Objective Four: assess how credit risk influences the share price of quoted big and small manufacturing firms in Nigeria. Restatement of Objective Four: assess how credit risk affects the share price of quoted small manufacturing firms in Nigeria. IV. Discussion of Findings This study appraises the influence of dividend payment on the share prices of quoted manufacturing firms in Nigeria. As shown in Table 1, dividend per share was revealed to positively influence the share price of high geared manufacturing firms in Nigeria, implying that a 1% rise in DPS will bring about a rise in SP. Similarly, Table 2 revealed that dividend per share positively influences the share price of low geared manufacturing firms in Nigeria, implying that a 1% rise in DPS brings about a rise in SP; this is consistent with an earlier study (2018), although that study did not indicate whether the firms were highly geared or not. In Table 3, EPS was revealed to positively influence the SP of dividend paying manufacturing firms in Nigeria, signifying that for every 1% rise in EPS there will be a similar increase in SP. This outcome corroborates the result of the study conducted by Iqbal et al.
(2015), inferring that EPS significantly determines share prices and the availability of future funds for dividend payment and reinvestment. Regarding the non-dividend paying manufacturing firms, Table 4 revealed that EPS positively influences SP, signifying that for every 1% rise in EPS there will be a similar increase in SP. This outcome is consistent with the study of Ishfaq (2018). It therefore implies that EPS has the potential of raising future capital for reinvestment in non-dividend paying firms. As regards Table 5, DY was revealed to have an adverse effect on the SP of old manufacturing firms in Nigeria, implying that when DY increases by 1%, SP will decrease by 1%. This outcome does not corroborate the result of Freshia and Pauline (2016), in which a positive association of DY with SP was found. Considering new manufacturing firms, Table 6 established that DY adversely influences SP, implying that when DY increases by 1%, SP will decrease by 1%; again, this does not corroborate the positive DY-SP association reported by Freshia and Pauline (2016). Table 7 reveals that CR positively influences the SP of big manufacturing firms in Nigeria, implying that when CR increases by 1%, SP will increase by 1%. The literature does not empirically emphasize the effect of credit risk on the share prices of manufacturing firms in Nigeria. However, Table 8 revealed that CR negatively influences the SP of small manufacturing firms in Nigeria, implying that when CR increases by 1%, SP will decrease by 1%. The variables used in the study are dividend per share, earnings per share, dividend yield and credit risk. While all these variables are important, some receive greater attention in investment decisions. In finance, dividend per share has been given the greatest attention because of its cash flow nature. Dividend announcements alone have caused great movements in share prices in capital markets, and finance analysts have over time used dividend announcements, in the form of insider information, to manipulate share prices. The arguments for dividend relevance surpass those for dividend irrelevance, and this debate has given rise to dividend theory. Most academic studies on dividend payment include dividend per share as a relevant independent variable. V. Conclusion and Recommendations Based on the findings from the analysis, it can be observed that all the variables adopted in the model are significant, which led the researcher to conclude that the findings achieved the research objectives. This study makes a significant contribution to the body of knowledge by empirically revealing how dividend payment influences the share prices of manufacturing companies in Nigeria. In the study's analysis, the coefficient of determination is above the average level, signifying that the independent variables capture a substantial part of the variation in the dependent variable. On this note, we can conclude that the highly geared manufacturing companies have a better-fitting model than the low geared companies, while dividend paying manufacturing companies have a better-fitting model than non-dividend paying manufacturing companies. The new and old manufacturing companies have only a fair model, implying that, in totality, the manufacturing companies in Nigeria are not operating at the optimal level.
Big and small manufacturing companies are progressing, although the model for big manufacturing companies outweighs that of the small ones, as revealed by the coefficient of determination (R-squared). In line with the findings of this work, it is recommended that credit risk be effectively and efficiently managed, by small manufacturing companies in particular, in order to eliminate its adverse influence on their share price. Also, manufacturing companies that are not paying dividends despite adequate earnings per share should pay dividends in order to make the sector attractive to investors. The government should provide an enabling environment for manufacturing companies to thrive and survive in Nigeria. Finally, well-reputed companies should be eager to pay out smooth dividends rather than investing more in growth opportunities.
4,034.8
2021-04-27T00:00:00.000
[ "Business", "Economics" ]
Secure Localization in Wireless Sensor Networks with Mobile Beacons We present a scheme, called SLMB, for secure sensor localization in WSNs, in which we propose to use a mobile beacon node with the goal of reducing the overall energy consumption in sensor nodes during sensor localization. In the SLMB scheme, a mobile beacon node traverses the network, collects information from unknown sensor nodes, determines its position relationship with these nodes, and sends the information to the base station, where the analysis and location calculation are carried out to relieve unknown sensor nodes from energy-consuming computation. The proposed SLMB scheme is also designed to resist wormhole attacks during localization, and a mathematical model is developed to design a path for the mobile beacon node to traverse in order to cover the entire sensor network. To evaluate our scheme, we have performed simulations demonstrating that the SLMB scheme can improve the success rate and the accuracy of sensor localization compared to other sensor localization schemes in hostile environments. Our simulation results also show that the SLMB scheme consumes much less energy than traditional distributed sensor localization schemes, which is an important metric in measuring the effectiveness and usefulness of any scheme targeted for applications in WSNs. Introduction Sensor localization in wireless sensor networks (WSNs) is a fundamental technical issue, for it is critical for monitoring applications and for most location-based routing protocols and services. Therefore, in recent years, sensor localization has generated a great deal of interest, and researchers have considered various technical issues such as efficiency [1], accuracy [2], and security [3] during sensor localization. Methods for the localization of wireless sensor nodes are generally classified into two types: range-based localization and range-free localization. The first type includes schemes in which the positions of the unknown sensor nodes are calculated using measurements of distances and angles between sensor nodes [4]. The second type includes schemes in which the positions of the unknown sensor nodes are estimated using connectivity information as well as multihop routing information between sensor nodes [5]. In real applications, however, there may be other types of localization methods owing to different application scenarios. Therefore, specific localization methods in real applications need to be continuously developed and improved in order to adapt basic localization schemes to different network scenarios. Consequently, in order to develop effective sensor localization methods, we should analyze and understand the main characteristics of specific networks and develop proper performance metrics that can be used to measure the performance of localization schemes. Meanwhile, we should also consider the main constraints of wireless sensor networks, such as the constrained energy supply of the sensor nodes, as well as the complexity of network environments, in the development of effective localization methods. Most existing localization algorithms, whether range-free or range-based, are distributed in nature: unknown sensor nodes need to get position information about nearby beacon nodes so that they can calculate their own positions.
The calculated position results are then sent to the base station or a central server to be used in real applications. One major drawback of such distributed localization algorithms is that they make energy-constrained unknown sensor nodes bear all the responsibility of communication and computation, resulting in high energy consumption in the sensor nodes. Another problem with such distributed algorithms is the increased security risk due to frequent communication between the sensor nodes. In this paper, we propose a secure centralized sensor localization scheme using a mobile beacon node (SLMB) to address the above-mentioned critical issues for WSNs, and we develop secure mechanisms to resist wormhole attacks in sensor localization. The proposed SLMB scheme has the following general features. (1) It uses a mobile beacon node to travel along a calculated path in the network to collect information about the position relationship with nearby unknown sensor nodes. The collected information is then sent to the base station where the positions of the unknown sensor nodes are calculated, which can greatly lower the communication cost for the unknown sensor nodes. (2) It takes a centralized approach so as to reduce the amount of calculation in the unknown sensor nodes by transferring the calculation work to the base station, which is a node in the network that is considered to be free from resource constraints. (3) It calculates a reasonable mobile path for the beacon node to traverse so as to cover the entire network, ensuring that every unknown sensor node can get connected to the mobile beacon node at some point in time so that the necessary information can be collected for position calculation. The development of the mobile path follows the design principle of the cellular network and includes a quantitative method for determining efficient and necessary points for the mobile beacon node to visit for information collection. (4) It includes secure mechanisms to fight against wormhole attacks, thus improving the security of the centralized sensor localization algorithm in general. The rest of this paper is organized as follows. In Section 2, we review some related work on sensor localization in WSNs. In Section 3, we present our centralized sensor localization scheme and describe some implementation and application issues. In Section 4, we describe the experiments we have performed to evaluate the proposed SLMB scheme and show some favorable simulation results in comparison to other localization methods. Finally, in Section 5, we conclude this paper and discuss some future work. Related Work Existing sensor localization schemes can be generally classified into two types, distributed and centralized, based on where the calculation of sensor positions is performed in the localization process. In distributed localization, the unknown sensor nodes collect position information about nearby beacon nodes and calculate their own coordinates by themselves [6,7]. That is, unknown sensor nodes are responsible for position calculation. In contrast, in centralized localization, beacon nodes collect the position information about unknown sensor nodes and send the information to the base station for data integration and position calculation [8,9]. That is, the base station is responsible for position calculation.
Although distributed localization schemes have been widely popular, in most WSNs the number of beacon nodes is usually too limited, and the status of such nodes too static, to meet the needs of large WSNs. For these reasons, if an unknown sensor node wants to use beacon information more effectively, it may need to get the beacon information through multihop data transmission. In [10], the authors proposed a self-positioning algorithm that can run efficiently and independently on individual sensor nodes based on locally collected information; however, its requirement on the distance measurement error is quite strict. In [11], the authors proposed an algorithm and showed that, even when only connectivity information is given, the Euclidean distance between the estimated and the correct position of every unknown sensor node can be bounded and decays at a rate that is inversely proportional to the radio range; however, this scheme incurs a larger amount of calculation in the unknown sensor nodes. In [12], the authors proposed a classic distributed localization scheme called DV-Hop based on distance vector routing. In DV-Hop, each unknown node needs to get the hop count to the beacon nodes, which estimate the average size of one hop between nodes in the network; the unknown nodes then calculate their positions using the obtained information about the distances between the beacon nodes and themselves. DV-Hop can provide approximate positions for the nodes in a network where only a small fraction of nodes have self-positioning capability, but it requires more message exchanges between nodes in the network. Due to the energy constraint and thus limited life of sensor nodes, many researchers have proposed centralized localization methods to reduce energy consumption by lowering the computation and communication cost for the sensor nodes. Such localization approaches can bring significant benefits to applications, for they can extend the life of sensor nodes since most computations are completed at a central server or base station. In [13], the authors presented a multihop localization technique for WSNs that exploits the strength indications of received signals; the proposed scheme aims at providing a solution for the localization of sensor nodes in static WSNs. In [14], the authors made some major modifications to improve the performance of the simulated annealing-based localization algorithm to increase localization accuracy. However, this type of localization scheme requires a large number of beacon nodes and involves complicated localization algorithms in order to complete the localization of all unknown sensor nodes in the network. In order to overcome the shortcoming of requiring a large number of beacon nodes, some schemes based on mobile beacon nodes were proposed in [15] to transfer beacon information to help unknown sensor nodes perform self-localization. The problem is that some of these methods cannot be easily integrated into the centralized framework, and some others lack methods for concise calculation of an effective mobile beacon path. In [16], the authors demonstrated a range-free localization mechanism based on the location information from mobile beacons and on the principles of elementary geometry. But all the position calculation is still completed in the unknown sensor nodes, which makes it more like a distributed localization scheme.
In [17], the authors proposed a novel mobile beacon-assisted localization algorithm based on network density clustering for WSNs by combining node clustering, incremental localization, and mobile beacon assistance. Although this scheme is suitable for clustering large networks, it may not be suitable for networks that require faster convergence. Some more research work has also been carried out to address security issues in sensor localization for WSNs. In [18], the authors improved the security and accuracy of sensor localization using location-based key distribution. In [19], the authors presented a novel defense mechanism against attacks in the DV-Hop localization algorithm. However, the security mechanisms proposed in these algorithms are not applicable to the mobile beacon scenario in WSNs. In this paper, we introduce a mobile beacon node into centralized localization while improving the security of the scheme. In order to keep the computation cost, and therefore the energy cost, low for the sensor nodes, we propose a specific centralized localization scheme in which the mobile beacon node traverses the entire network following a well-designed path, during which it stops at every collection point to collect position information from nearby unknown sensor nodes before moving to the next collection point. The mobile beacon node sends a position request at each collection point to nearby unknown sensor nodes, estimates the position relationship to these unknown sensor nodes based on the received information, and sends the information along with its own current position to the base station. It is the base station that will eventually complete all the position calculation. The proposed SLMB scheme has the following obvious advantages. (1) It can balance the energy consumption of sensor nodes in the network, for it prevents the sensor nodes that are closer to the base station from consuming excessive energy to deliver position information to the base station from faraway sensor nodes in a multihop manner. (2) It can improve localization accuracy as well as success rate compared to other similar schemes. (3) It can improve the security of localization, since securing only the beacon node is much easier than securing a large number of unknown sensor nodes in the network. (4) It can effectively reduce the communication overhead for the sensor nodes and the overall transmission delay. The Network Model. There are three types of nodes in the network model for our SLMB scheme. The first type includes the base station, which is capable of managing and integrating data for the entire network, including the calculation of the positions of unknown sensor nodes and the application of the results in real applications. The second type includes the mobile beacon nodes, which are capable of positioning themselves, traversing the network to collect information from unknown sensor nodes, and transmitting the collected information to the base station for position calculation. In addition, beacon nodes are mobile nodes that are assumed to have an unlimited energy supply. The third type includes the unknown sensor nodes, whose positions in the network need to be determined through calculation based on the collected information. The Localization Model. The scheme that we propose is appropriate for applications and networks in which there are not enough stationary beacon nodes to serve as position references for the unknown sensor nodes, but localization still needs to be finished in time.
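To make the three node types concrete, the following is a minimal sketch, in Python, of how the entities in this network model might be represented; the class and field names are our own hypothetical choices, and the scheme itself does not prescribe any particular data layout.

from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

@dataclass
class UnknownNode:
    node_id: int                                    # identity is known; position is not
    position: Optional[Tuple[float, float]] = None  # filled in later by the base station

@dataclass
class MobileBeacon:
    position: Tuple[float, float]                   # beacons can position themselves
    # Observations gathered at collection points: (node_id, angle, distance).
    collected: List[Tuple[int, float, float]] = field(default_factory=list)

@dataclass
class BaseStation:
    located: Dict[int, Tuple[float, float]] = field(default_factory=dict)

    def integrate(self, beacon: MobileBeacon) -> None:
        """Placeholder for the position calculation performed at the base station."""
        ...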
In the proposed SLMB scheme, the information about the distribution of the unknown sensor nodes in the network is obtained using a mobile beacon node, and the positions of the unknown sensor nodes can be calculated quickly by the base station. In addition, in the SLMB scheme, we use a mathematical model to make the mobile beacon node follow a designated path to cover the entire network so as to improve the effectiveness and efficiency of sensor localization. The following are the main steps of our centralized sensor localization algorithm, that is, the SLMB scheme. (1) The mobile beacon node moves along a calculated path, sending position requests at every collection point to nearby unknown sensor nodes, collecting responses from unknown sensor nodes, and sending the collected information along with its current position to the base station. (2) The mobile beacon node moves to the next collection point after completing the work at the previous collection point, until it completes the traversal of the whole path to cover the entire network. The mobile beacon node can decide to aggregate information collected at more than one collection point before sending the collected information to the base station to further improve communication performance, although energy consumption is not an issue under consideration for the beacon node. (3) The base station integrates all the information received from the mobile beacon node and calculates the positions of all the unknown sensor nodes. The Mobile Path Model. Mobility of the beacon node is required in our SLMB scheme. Consequently, the path that the mobile beacon node travels is very important for the performance of the scheme. The purpose of using a mobile beacon node is to collect position information from unknown sensor nodes. Therefore, the path for the mobile beacon node to travel needs to meet the following two requirements. (1) It must cover the entire network. Since sensor nodes in the network may be deployed randomly, the beacon node needs to connect to as many unknown sensor nodes as possible in order to improve the efficiency of localization. (2) It must allow localization to complete quickly. The path for the mobile beacon node should support efficient localization and keep the number of collection points as small as possible. The area that the mobile beacon node can effectively cover at any time is modeled by a circle with its present position as the center and the signal transmission range as the radius. We can thus build a mathematical model to optimize the path that the mobile beacon node should follow as it traverses the entire network, which can be viewed and solved as an area coverage problem. We assume that all sensor nodes in the network are deployed within a rectangular area, and that the size of the area as well as the communication radius of the sensor nodes are known in advance. Our objective is to have the circles of the beacon node cover the entire rectangular area as it traverses through the network, while keeping the overlapping regions of the circles as small as possible.
This requires that, given the collection of points {(x_c1, y_c1), (x_c2, y_c2), ..., (x_cn, y_cn)} at which the mobile beacon node stops during its journey, the following condition is met for any arbitrary point (x_o, y_o) representing the position of an unknown sensor node: if (x_o, y_o) is located in the rectangular network area, it must be covered by at least one circle of the mobile beacon node with a collection point as the center and the signal transmission range as the radius. From the above analysis, we can see that the circular areas that the mobile beacon node generates as it moves along a path may partly overlap with each other in order to cover the entire rectangular area. Therefore, we have to make sure that the circles cover each and every unknown sensor node deployed in the network while making the overlapping parts as small as possible, which is the basic principle in the design of the mobile path for the mobile beacon node to traverse and cover the entire network. When the overlapping areas of different circles are the same, the polygon constructed with the chords of each circle becomes a regular polygon. We can thus transform the original problem of covering a rectangular area with circles into the problem of covering the rectangular area with these polygons. Proof. We assume that the rectangle is covered by regular polygons, each of which has p edges (p ≥ 3). If α is the interior angle of such a polygon, then α = 180°(p − 2)/p. Let q be the total number of polygons to which a vertex belongs. Then q = 360°/α = 2p/(p − 2) = 2 + 4/(p − 2). Since q must be a natural number, (p − 2) must divide 4, so the number of edges p can only be 3, 4, or 6. The three specific coverage situations are demonstrated in Figure 1. In the figure, S_c shows the overlapping part between two adjacent circles. Let S_s and S_t denote the area of the sector and that of the triangle, respectively; thus S_c = 2(S_s − S_t). For a circle of radius R, S_s = (χ/360°)πR² and S_t = (1/2)R² sin χ, in which χ is the degree of the central angle of the sector, and the percentage of S_c over the circle area is x = S_c/(πR²). We thus get, for p = 3, 4, 6, that χ = 120°, 90°, 60° and x = 0.39, 0.18, 0.06, respectively. It is thus clear that using regular hexagons to cover a rectangular area achieves the highest efficiency, which coincides with the core idea of the honeycomb network principle. We design the path for the mobile beacon node to travel as follows. First, we need to determine the minimum number of circles needed to cover the rectangle using the hexagonal layout. Suppose the size of the rectangular area is M × N and the communication radius of the wireless nodes is R. Let m be the number of circles in one odd-numbered horizontal line, n be the number of circles in one vertical line, and let l and d be the distances shown in Figure 1. From the geometry of a regular hexagon inscribed in a circle of radius R, the spacing between adjacent circle centers in a horizontal line is l = √3·R, and the spacing between adjacent horizontal lines is d = (3/2)·R. Then n is obtained by rounding N/d up to an integer, with slightly different counting depending on whether n is odd or even, because alternate lines are offset by l/2; m is obtained similarly by rounding M/l up to an integer. The minimum number of collection points P that covers the entire rectangular network area is then obtained by summing the number of circles over all the horizontal lines.
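As an illustration, the following sketch, in Python, generates the collection points implied by this hexagonal layout for an M × N area. The serpentine (line-reversing) visiting order and the one-line padding at the boundary are our own assumptions for the sketch, not part of the derivation above.

import math

def collection_points(M, N, R):
    """Generate collection points on a hexagonal grid covering an M x N
    rectangle, with in-line spacing l = sqrt(3)*R and line spacing
    d = 1.5*R; even-numbered lines are offset by l/2."""
    l, d = math.sqrt(3) * R, 1.5 * R
    points, row, y = [], 0, 0.0
    while y < N + d:                  # pad one line past the edge for full coverage
        offset = l / 2 if row % 2 else 0.0
        xs, x = [], offset
        while x < M + l:
            xs.append((x, y))
            x += l
        # Reverse every other line so the beacon follows a serpentine path.
        points.extend(reversed(xs) if row % 2 else xs)
        y += d
        row += 1
    return points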
The path thus derived for the mobile beacon node to traverse and cover the entire network is shown in Figure 2. Position Calculation. The calculation of the position of each and every unknown sensor node is performed by the base station in our SLMB scheme, which differs from traditional range-based localization methods, in order to reduce the convergence time of localization as well as the cost of information collection by the mobile beacon node. Most existing range-based localization methods need multiple measurement points to measure the distances to unknown sensor nodes, whether they are based on arrival time, signal strength, or angle. In the SLMB scheme, we combine the measurements of angle and arrival time to determine the distances so as to reduce the required number of collection points. As shown in Figure 3, the mobile beacon node can sense the directional angle θ of received messages from an unknown sensor node using an antenna array and, at the same time, measure the distance to the same node using time information in the messages. The position of the unknown sensor node can then be calculated from both pieces of information. Since there are only a limited number of collection points, the measurement in the proposed SLMB scheme may incur errors. As shown in Figure 3, in which the angle error is ±Δθ and the range error is ±Δd, we take the centroid of the area with angle interval [θ − Δθ, θ + Δθ] and length interval [d − Δd, d + Δd] as the position of the unknown sensor node.
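The centroid of this annular sector can be written in closed form; a minimal sketch follows, derived here from the standard sector-centroid formula rather than taken from the scheme's own specification.

import math

def estimate_position(xb, yb, theta, d, d_theta, d_d):
    """Estimate an unknown node's position as the centroid of the annular
    sector with angle interval [theta - d_theta, theta + d_theta] and
    range interval [d - d_d, d + d_d], seen from the beacon at (xb, yb)."""
    if d_theta <= 0 or d_d <= 0:
        # No measurement uncertainty: the estimate is simply the point at (d, theta).
        return xb + d * math.cos(theta), yb + d * math.sin(theta)
    r1, r2 = d - d_d, d + d_d
    # Radial distance of the centroid of an annular sector of half-angle d_theta.
    r_bar = (2.0 / 3.0) * (r2**3 - r1**3) / (r2**2 - r1**2) \
            * math.sin(d_theta) / d_theta
    return xb + r_bar * math.cos(theta), yb + r_bar * math.sin(theta)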
The Security Mechanism. Wormhole attacks are the primary type of attack that can be launched without compromising any cryptographic keys. They can cause serious consequences to localization, especially when the beacon node wakes up the neighboring unknown sensor nodes through a localization request and when an unknown sensor node responds to the request. A communication channel between two attackers is shown in Figure 4, from which we can see that attacker 1 can transmit a request from B1 to unknown sensor nodes that are outside of the coverage area of B1. The communication channel can also be used to replay the response of U to B1. In order to detect information that is replayed from outside of the normal communication range, when the beacon node receives information from the same unknown sensor node at different collection points, it should check whether one piece of position information has been received repeatedly from the same exit of a wormhole, and then compare the distance d between the repeated positions with a threshold T. If d ≤ T, this is a normal error caused by the overlapping area of the two collection points; if d > T, it could mean an attack. If the wormhole attack is launched against just one node, the beacon node is not able to determine the location of the attacker. However, if the wormhole attack is launched against multiple nodes, the attacker can be detected according to the wormhole attack filtering principle based on the same exit. In addition, the beacon node may also receive messages with the same ID of an unknown sensor node, since the replayed information is within the same communication radius. According to the signal transmission characteristics, we only accept the first received information, discard the later ones, and add the node to the blacklist, since replayed information cannot arrive at its destination earlier than the original signal sent with the same transmission power.
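A minimal sketch of this duplicate-report distance check is shown below; the WormholeFilter class and its method names are our own hypothetical framing of the rule just described.

from collections import defaultdict
from math import hypot

class WormholeFilter:
    """Flag repeated reports from the same node ID whose claimed positions
    are farther apart than the threshold T (possible wormhole replay)."""
    def __init__(self, T):
        self.T = T
        self.reports = defaultdict(list)   # node_id -> [(x, y), ...]
        self.blacklist = set()

    def accept(self, node_id, pos):
        if node_id in self.blacklist:
            return False
        for prev in self.reports[node_id]:
            d = hypot(pos[0] - prev[0], pos[1] - prev[1])
            if d > self.T:                 # beyond the normal collection-point overlap error
                self.blacklist.add(node_id)
                return False
        self.reports[node_id].append(pos)
        return True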
3.6. Application Issues. The SLMB scheme has been designed to make sure that the mobile beacon node fully covers the entire deployment area, thus making it suitable for static WSNs. In dynamic WSNs, in which the location of a sensor node may change from time to time due to mobility or the network environment, the SLMB scheme can be enhanced so that the mobile beacon node periodically traverses the network to calculate and update the information on sensor locations. The interval between repeated SLMB applications can follow a strategy determined by application requirements as well as network environments. In addition, we can also adapt the basic SLMB scheme for huge WSNs by dividing the sensor deployment area into multiple regions and then deploying multiple mobile beacon nodes in the area, each for a different region, to meet the real-time requirement of sensor localization. The model allows us to derive satisfactory localization results by making each beacon node cover the minimum number of collection points within particular time constraints to achieve the desired performance for sensor localization. As is widely known, the application of WSNs has spread to many different areas, including harsh environments such as battlefields and wildlife monitoring, as well as many emerging applications in our daily life. Both distributed and centralized sensor localization schemes have their distinctive strengths and weaknesses in dealing with different application scenarios. In a harsh environment where it is almost impossible for human beings to get near the sensors, a remotely controlled wireless mobile device can be used to traverse the deployment area, acting as the beacon node to accomplish sensor localization. If there are mountains and hills in the deployment area, we can map the three-dimensional area into our two-dimensional model and thus still use a wireless flying device to collect location information from the sensor nodes. If sensor nodes are deployed in a well-developed area, a vehicle can be operated to move along a designated route to cover the entire deployment area and collect location information from the sensor nodes. In extreme situations where it is not feasible to use a mobile device as the beacon, distributed sensor localization algorithms should be considered as a complementary scheme. As WSNs find more and more diverse applications, ranging from traditional applications to the Internet of Things scenario, there are certainly many applications in which our SLMB scheme can be used to perform sensor localization and achieve a wide variety of performance objectives. We also note that the proposed SLMB scheme is appropriate for WSNs that do not have too high a requirement on the accuracy of localization. To improve the accuracy of localization, we can increase the number of collection points for the mobile beacon node to collect more position information about unknown sensor nodes and calculate the positions of the unknown sensor nodes through maximum likelihood estimation; this is future work in our research, in which we will demonstrate how accuracy improves with the number of collection points. This is a tradeoff between accuracy and required completion time, in addition to some other considerations such as the cost of communication and computation. Simulation and Analysis We have performed simulations to evaluate the performance of the proposed SLMB scheme on sensor localization. The network configuration for our first simulation is set up as follows: there are 50 unknown sensor nodes and a mobile beacon node deployed randomly in an area of 800 × 800 m². The transmission range R of the wireless nodes is set to 100 m. The distance error and angle error between the mobile beacon node and any unknown sensor node are set in the ranges 0-0.05 and 0-0.05π, respectively. We compare the localization error between our proposed SLMB scheme and a localization scheme based on a general mobile path (LBGM). Localization error is an important metric for the performance of sensor localization in WSNs; it is the distance between the calculated coordinates and the actual coordinates, computed using (12), in which (x′_U, y′_U) and (x_U, y_U) denote the measured and the actual coordinates of unknown sensor node U, respectively: LE_U = sqrt((x′_U − x_U)² + (y′_U − y_U)²). (12) The simulation results on localization error for 50 unknown sensor nodes are shown in Figure 5, from which we can see that several unknown sensor nodes have a localization error of infinite value, meaning that these nodes cannot be located using the LBGM scheme. Our proposed SLMB scheme is shown to be more effective, for it improves the success rate of localization of unknown sensor nodes by about 20% while reducing localization errors in general. Since there are a variety of applications that need the location information about deployed sensor nodes, but the sensors may differ, it is worthwhile to investigate the performance of the proposed SLMB scheme for different network sizes in terms of coverage area and for different transmission ranges of the sensor nodes. We hereby use the notion of average localization error in evaluating our SLMB scheme, computed using (13), in which N denotes the number of unknown sensor nodes in a network: ALE = (1/N) Σ_U LE_U. (13) We first investigate the effect of network size on sensor localization. In the evaluation, 100 unknown sensor nodes and a mobile beacon node are deployed in the network; the network is set up to cover an area of 500 × 500 m², 600 × 600 m², 700 × 700 m², 800 × 800 m², 900 × 900 m², and 1000 × 1000 m², respectively; and R is set to 100 m. The distance error and angle error between the mobile beacon node and any unknown sensor node are again set in the ranges 0-0.05 and 0-0.05π, respectively. The average localization error of the unknown sensor nodes using the proposed SLMB scheme and that using the LBGM scheme are shown in Figure 6, and the success rates of localization of the two schemes are shown in Figure 7. From these two figures, we can see that our SLMB scheme is more effective in covering the entire network area and in improving the accuracy of localization of unknown sensor nodes. We then investigate the effect of the transmission range of the nodes on localization.
In the evaluation, 100 unknown sensor nodes and a mobile beacon node are deployed in a network area of 600 × 600 m², and the transmission range of the wireless nodes is set to 50 m, 60 m, 70 m, 80 m, 90 m, and 100 m, respectively. The distance error and angle error between the mobile beacon node and unknown sensor nodes are again set in the ranges 0-0.05 and 0-0.05π, respectively. The average localization errors for the 100 unknown sensor nodes using the proposed SLMB and the LBGM schemes, as well as the localization success rates, are shown in Figures 8 and 9, respectively. From these two figures, we can see that the SLMB scheme achieves better performance both on localization accuracy and on localization success rate, with very stable results. The reason for the small difference in localization accuracy between SLMB and LBGM shown in Figure 8 is that it only includes the simulation results of those nodes that can be successfully located. Finally, we investigate the performance of the SLMB scheme in terms of its ability to resist wormhole attacks. We randomly distribute two pairs of wormhole attackers in the experiment environments set up above for various network sizes and different transmission radii. The average localization errors of the unknown sensor nodes under these two environments are shown in Figures 10 and 11, respectively, from which we can see that the SLMB scheme is able to fight against wormhole attacks, thus improving the localization accuracy for WSNs compared to normal localization using a mobile beacon (NLMB). Performance on Energy Consumption. Batteries are usually used to supply power to the sensor nodes in WSNs, and a sensor node is considered no longer functional when its battery is exhausted. Therefore, the efficiency of energy usage must be considered in any protocol design for WSNs. The energy consumption of a sensor node mainly consists of energy consumption for data transmission and that for data processing. We now analyze the performance of SLMB with respect to energy consumption and compare it to DV-Hop [12], a classic distributed sensor localization method. First, let us develop an energy consumption model for the proposed SLMB scheme. In our model, the operations in each sensor node that consume energy include data transmission, data reception, and position calculation, and the energy consumed by each of these operations is denoted as E_s, E_r, and E_c, respectively, in which it is widely recognized that E_s and E_r are normally much higher than E_c. The total amount of energy consumed by each sensor node can then be calculated using formula (14), in which E_s and E_r are given by formulas (15) and (16), respectively: E = E_s + E_r + E_c, (14) E_s = E_s1 + E_s2 = k_1·E_0 + k_1·x·d², (15) E_r = k_2·E_0. (16) In these formulas, k_1 and k_2 denote the number of bits sent and received, respectively, during sensor localization, and E_0 denotes the energy consumed to send or receive a single bit of data. Energy consumption for sending a message includes two parts: one, denoted E_s1, is calculated based on the amount of data sent, and the other, denoted E_s2, depends on the distance d between the sender and the receiver, with x a constant multiplier. In our SLMB scheme, we assume that the amount of data is fixed and the same for every message sent and received, and that any data sent by a node can be received by all the neighboring nodes within the communication radius of the sending node.
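Using the parameter values reported for the simulation below (E_0 = 50 nJ/bit and x = 0.1 nJ/bit/m²), formulas (14) through (16) translate directly into code; this is a sketch of the energy model only, not of the simulator itself.

E0 = 50e-9   # J/bit, energy per bit for sending or receiving
X  = 0.1e-9  # J/(bit*m^2), distance-dependent multiplier

def send_energy(k1_bits, d):
    """E_s = k1*E0 + k1*X*d^2 (formula (15))."""
    return k1_bits * E0 + k1_bits * X * d**2

def receive_energy(k2_bits):
    """E_r = k2*E0 (formula (16))."""
    return k2_bits * E0

def total_energy(k1_bits, k2_bits, d, Ec=0.0):
    """E = E_s + E_r + E_c (formula (14)); E_c is comparatively small."""
    return send_energy(k1_bits, d) + receive_energy(k2_bits) + Ec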
We now compare the performance on energy consumption of SLMB to that of DV-Hop. In DV-Hop, an unknown sensor node needs to transmit localization information through multiple hops and calculates its position coordinates by itself. The network configuration for our simulation on energy consumption is set up as follows: 500 unknown sensor nodes are deployed randomly in an area of 500 × 500 m², and the transmission range R of the wireless nodes is assumed to be 50 m. We then take E_0 = 50 nJ/bit and x = 0.1 nJ/bit/m². The localization results may also need to be updated in some applications; thus, we evaluate the performance on energy consumption for multiple applications of sensor localization, and the results for the accumulative energy consumption are shown in Figure 12. We also investigate the energy consumption for varying numbers of unknown sensor nodes in the network, and the results are shown in Figure 13. We can see from the above evaluation that energy consumption in SLMB is much smaller than that in DV-Hop. SLMB can keep energy consumption at a very low level for various numbers of unknown sensor nodes, especially in networks in which multiple applications of sensor localization are needed. Under both circumstances, our proposed SLMB scheme achieves much better performance on energy consumption. Another obvious advantage of the SLMB scheme is that it not only lowers energy consumption in each sensor node, but also keeps energy consumption even across all the unknown sensor nodes in the network, thus preventing some unknown sensor nodes from exhausting their energy prematurely and becoming unusable before others and, as a result, prolonging the life of the network. The main factors that lead to the improved performance are that the SLMB scheme has been designed to reduce the number of data transmissions and to make the unknown sensor nodes in the network transmit messages with the same amount of data and the same signal strength, all contributing to a significant reduction in the total amount of energy consumed in unknown sensor nodes for sensor localization. Conclusions In this paper, we presented a secure centralized localization scheme using a mobile beacon node. In the scheme, the mobile beacon node is responsible for collecting information about the position relationship with unknown sensor nodes and for sending the information to the base station, where the positions of the unknown sensor nodes are calculated. The scheme can greatly reduce the computation cost for the unknown sensor nodes compared to distributed localization algorithms, and it lowers the communication overhead for sending position information to the base station compared to some other centralized localization algorithms. In the scheme, most of the work of collecting and sending information is done by the mobile beacon node, thereby also reducing the security risks in sensor localization. Specifically, the proposed scheme is designed to resist wormhole attacks in localization to improve security. The scheme also includes a mathematical computation model to determine the collection points for the mobile beacon node so that it completely and efficiently covers the entire sensor network. The proposed scheme only requires the beacon node to have an antenna array.
In the future, we will extend our secure localization scheme to improve the security of localization in the presence of other kinds of malicious attacks without incurring too much computational overhead and communication cost. We will also investigate the performance of sensor localization schemes that use different mobile beacon paths, different types of deployment, and different transmission radii for the sensor nodes.
8,712.2
2012-10-01T00:00:00.000
[ "Computer Science" ]
Piezoelectric Motor Using In-Plane Orthogonal Resonance Modes of an Octagonal Plate Piezoelectric motors use the inverse piezoelectric effect, where microscopically small periodic displacements are transferred into continuous or stepping rotary or linear movements through frictional coupling between a displacement generator (stator) and a moving element (slider). Although piezoelectric motor designs have various drive and operating principles, the microscopic displacements at the interface of a stator and a slider can have two components: tangential and normal. The displacement in the tangential direction has a corresponding force working against the friction force. The function of the displacement in the normal direction is to increase or decrease the friction force between the stator and the slider. Simply put, the generated force alters the friction force through a displacement in the normal direction, and the force creates movement through a displacement in the tangential direction. In this paper, we first describe how the two types of microscopic tangential and normal displacements at the interface are combined in the structures of different piezoelectric motors. We then present a new resonance-drive type piezoelectric motor, where an octagonal plate, with two eyelets in the middle of the two main surfaces, is used as the stator. Metallization electrodes divide the top and bottom surfaces into two equal regions orthogonally, and the two driving signals are applied between the surfaces of the top and the bottom electrodes. By controlling the magnitude, frequency and phase shift of the driving signals, microscopic tangential and normal displacements of almost any form can be generated. Independently controlled microscopic tangential and normal displacements at the interface of the stator and the slider give the motor lower speed-control input (driving voltage) nonlinearity. A test linear motor was built using an octagonal piezoelectric plate. It has a length of 25.0 mm (the distance between any two parallel side surfaces) and a thickness of 3.0 mm, and it can produce an output force of 20 N. Piezo-Walk-Drive In a piezo-walk-drive mechanism, there are at least two sets of actuator arrangements embedded in the structure, and they operate one after the other. The main reason these motors are called piezo-walk-drive is that the moving sequences in these motors resemble walking actions on two or four feet. In these motors, a step motion is realized in two ways.
In one way, there are microscopic displacements normal and tangential to the moving direction of a sliding element. Each actuator in the motor that generates a microscopic displacement has one single task, which is either to perform a "clamp" or a "move" action. If an actuator has the task of performing a "clamp" action, its displacement is normal to the slider moving direction, and the ultimate function of these actuators is to increase or decrease the normal force and thus to hold the friction force. If an actuator is required to perform a "move" action, the displacement generated by the actuator is tangential to the moving direction of the sliding element; expansion and shrinkage of this actuator creates a microscopic movement of the slider. After the structure proposed by Brisbane in 1965 [17], some other structures operated on the basis of the piezo-walk-drive principle [18][19][20][21]. Typically, these structures consist of three actuators, where two of them are responsible for the clamping and one is responsible for the moving action. In these motors, the required displacements in the normal and tangential directions, with respect to a sliding element, can be generated by actuators that use the longitudinal, shear, transverse, and planar coupling of piezoelectric materials. In the early structures, the piezoelectric elements used in piezo-walk-drive type motors were in bulk form, but in many of the commercialized structures, the actuators are manufactured in multilayer form to generate sufficient displacement at a relatively low driving voltage. Assuming that activation makes the length, or the diameter, of an actuator decrease and that release causes an actuator to return to its rest position, the motion sequence seen in Figure 1 can be started by activating one clamping actuator (A). When the moving actuator (C) is also activated, shrinkage of the moving actuator generates a half step. At this moment, the clamping actuator (A) is released so that it can clamp and maintain the holding force. In the following step, the second clamping actuator (B) is activated and the moving actuator (C) is released. When the second clamping actuator (B) is released, one motion sequence is finished.
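The sequence just described can be summarized as an ordered list of commands; the sketch below, in Python, is our own schematic encoding of the Figure 1 sequence, with a hypothetical drive callable standing in for the hardware interface.

# A and B are clamping actuators, C is the moving actuator. Activation is
# assumed to shrink an actuator (so an activated clamp disengages), and
# release returns it to its rest, clamped state.
SEQUENCE = [
    ("activate", "A"),  # A disengages; B still holds the slider
    ("activate", "C"),  # C shrinks -> first half step
    ("release",  "A"),  # A clamps again at the new position
    ("activate", "B"),  # B disengages
    ("release",  "C"),  # C returns to rest -> second half step
    ("release",  "B"),  # B clamps; one motion sequence is finished
]

def one_step(drive):
    """Run one full walking step; `drive` sends (action, actuator) pairs."""
    for action, actuator in SEQUENCE:
        drive(action, actuator)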
Later, we can see examples where clamping actuators are attached to the end of feeding actuators, where at least two identical sets are used in each motor structure [22]. Assuming that activation makes the length of an actuator increase and that release makes an actuator return to its rest position, the motion sequence of the piezo-walk-drive, as seen in Figure 2, can be started by activating the clamping actuators (1C) in the first set (step 1). A half step is made when the moving actuators in the first and in the second set (1F and 2F) are activated (step 2). At this moment, a pushing force is generated by the moving actuators (1F) in the first set. When the clamping actuators (2C) in the second set are activated (step 3), all moving and clamping actuators are in an active state.
The second half of the step is started by releasing the clamping actuators (1C) in the first set (step 4). After the moving actuators (1F and 2F) in the first and in the second sets are released (step 5), the second half of the step is completed. A new sequence can be started by activating the clamping actuators (1C) in the first set and releasing the clamping actuators (2C) in the second set. In order to obtain smoother motion, or for the purpose of better controllability, an overlap of activation and deactivation timings is possible for both clamping and moving actuators [19,20]. In other types of structures, two sets of actuators placed next to each other make "clamp-push and release" actions in sequence [23,24]. All actuators in the structure are required to perform "clamp-push and release" actions, which can be fulfilled if the motion generated by an actuator has an oblique or elliptical trajectory. An oblique or elliptical trajectory is generated because each actuator is electrically divided into two sections in the longitudinal direction, so the driving signals cause the actuator to elongate and deflect at the same time. Both actuator sets (a pair) perform a "clamp-push and release" action one after the other, with a time delay or a phase shift in the same period cycle. The motion sequence is started by activating one set. This activation makes the moving element clamp due to elongation, and motion is created due to deflection. At the moment when the other set is initially activated, the first actuator set, which was in the clamp position, is deactivated. During this time, the sliding element does not move back, but rather advances another step. Inertia-Drive In inertia-drive piezoelectric motors, only the tangential component of the back-and-forth movement at the interface between a slider and a stator within one period generates a movement. In one direction of the tangential movement, the stator element is activated slowly. During this activation time, the inertia force acting on the slider is smaller than the friction force; the slider sticks to the contact area of the stator and moves with it. In the opposite direction of the tangential movement, the stator is deactivated faster, relative to its initial position. During this time, the inertia force acting on the slider is greater than the friction force, so the slider slips and stays behind the contact area of the stator element. At the end of one cycle, the sliding element makes a microscopic step. The accumulation of these microscopic steps creates macroscopic movement.
Deformation of the piezoelectric element in various modes, such as longitudinal, transverse or shear, is either transferred directly to a moving element or through a coupling element, where the motion generated by the piezoelectric element is converted into the tangential direction at the interface. Depending on the structure, a leverage mechanism may amplify the deformation while converting it into tangential motion, and the amplified deformation may also have a normal component [25,26]. However, a normal component at the interface can give a motor direction-dependent performance parameters, such as generated force and velocity, which must be compensated through the magnitude and timing of the driving signal.
In inertia-drive motors, the tangential movement can be generated on the slider or on the stator, depending on where the piezoelectric element is embedded. If the piezoelectric element is embedded in the moving element, the motor can be considered a moving-actuator type; if it is embedded in the stator, the motor can be considered a fixed-actuator type.

Even though there are earlier structures that could be considered inertia-drive piezoelectric motors [27,28], the first practical inertia-drive (stick-slip) structure was proposed by Pohl in 1986 [29], where a piezoelectric cylinder is embedded in a four-bar mechanism (Figure 3). One side of the four-bar mechanism was attached to a base, and a sliding mass was attached to the parallel bar. When the actuator is driven with a saw-tooth waveform signal, the attached mass (m) moves together with the bar during the slow expansion period of the signal (sticking) and is left behind during the fast contraction period (slipping). Repeating these cyclic movements makes the attached mass move continuously. When the electric field makes the piezoelectric element expand quickly and contract slowly, the attached mass moves in the opposite direction. This mechanism was applied to precise multi-degree-of-freedom positioning devices for an atomic force microscope. Because the piezoelectric element is embedded in the stator structure, this motor can be considered a fixed-actuator type.
In a structure proposed by Higuchi in 1986 [30], a moving mass (m1) and a weight (m2) are attached to the two ends of a piezoelectric element. The whole structure is placed on a base plate and held by the frictional force acting between the base plate and the moving mass (Figure 4). When an electric field is applied to the piezoelectric element, its rapid expansion creates an acceleration that causes the moving mass to overcome the static friction (slip), so the two masses move in opposite directions. During the slow contraction of the piezoelectric element, the acceleration on the moving mass cannot overcome the static friction, so only the weight moves (stick). At the end of one motion cycle, a microscopic movement is obtained. Because the piezoelectric element is embedded in the moving element, this motor can be considered a moving-actuator type. Many of the inertia-drive type motors developed afterwards operate according to these initial structures [31][32][33][34][35][36][37][38][39][40][41][42][43][44][45], and the literature also proposes ideal driving signals with detailed analysis of the motion at the interface [46,47].
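The stick-slip behavior of such structures can be illustrated with a minimal kinematic simulation. The sketch below assumes an idealized sawtooth drive, perfect stick during the slow ramp and perfect slip during the fast flyback; the period, stroke and rise fraction are arbitrary placeholder values, not parameters of the Pohl or Higuchi devices.

    import numpy as np

    # Kinematic sketch of stick-slip (inertia-drive) motion. During the slow
    # expansion of a sawtooth drive the slider sticks to the stage and moves
    # with it; during the fast contraction the inertia force exceeds the
    # friction force and the slider slips, holding its position.

    PERIOD = 1e-3         # s, drive period (assumed)
    STROKE = 1.0e-6       # m, actuator stroke (assumed)
    RISE_FRACTION = 0.9   # fraction of the period spent on the slow ramp

    def slider_position(t):
        """Idealized slider: follows the stage while sticking, holds while slipping."""
        n_full = int(t // PERIOD)              # completed drive cycles
        phase = (t % PERIOD) / PERIOD          # position within the current cycle
        pos = n_full * STROKE                  # one stroke gained per full cycle
        if phase < RISE_FRACTION:              # sticking: move with the stage
            pos += STROKE * phase / RISE_FRACTION
        else:                                  # slipping: stay behind the stage
            pos += STROKE
        return pos

    t = np.linspace(0.0, 5 * PERIOD, 2000, endpoint=False)
    x = np.array([slider_position(ti) for ti in t])
    print(f"net displacement after ~5 cycles: {x[-1] * 1e6:.2f} um")
    # one microscopic step (= one stroke) accumulates per drive period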
Resonance-Drive

In a resonance-drive piezoelectric motor, there are two types of microscopic motion: oblique and elliptical (Figure 5). Excitation of a single mode on a vibrator is enough to generate an oblique motion. Because the oblique motion can have components in both the tangential and normal directions, a net microscopic motion is obtained when a stator contact point touches a rotor or slider contact point. On the other hand, excitation of two orthogonal modes is needed to generate an elliptical motion. In a multi-mode excitation piezoelectric ultrasonic motor, there is elliptical movement at the interface: there are movements in two orthogonal directions at the contact point, and the phase shift between these movements causes the trajectory at the contact point to be elliptical. This elliptical trajectory causes the sliding element to perform a microscopic motion in every cycle.

Among piezoelectric motors, resonance-drive (or ultrasonic) motors are the most extensively studied actuators. The first idea of converting electrical oscillation into mechanical movement dates back to 1927 [48], and various attempts to obtain longer mechanical motion using the inverse piezoelectric effect can be found [49][50][51][52][53][54][55]. We have classified resonance-drive piezoelectric motors based on various criteria, such as the type of microscopic motion at the interface and the method of direction change; the interested reader can refer to our previous paper [6].

In the literature, we can also find piezoelectric motors operated at resonance [56][57][58][59][60] where the generated modes are not in orthogonal directions. Because the generated movement at the contact point is in the tangential direction, these motors should be considered inertia-drive types. There are also piezo-walk-drive motors that are operated at resonance [61,62].

In the following section, we introduce a newly developed resonance-drive type piezoelectric motor structure, where the excited microscopic movements at the interface are defined by parameters such as the frequency, magnitude and phase shift of the driving signals, and not by the mechanical dimensions or resonance modes of a vibrator. With these parameters, not only oblique or elliptical, but also more complex microscopic movements at the interface are possible.
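The distinction between oblique and elliptical micro-motion can be made explicit with a simple parametrization of the contact-point displacement. The following display is an illustrative formulation of ours, with U_t and U_n the tangential and normal amplitudes and phi the phase shift between the two components:

    % Contact-point micro-motion as two orthogonal harmonics (illustrative)
    u_t(t) = U_t \cos(\omega t), \qquad u_n(t) = U_n \cos(\omega t - \varphi),
    \qquad
    \begin{cases}
      \varphi = 0 \ \text{or}\ \pi, & \text{oblique (straight-line) trajectory},\\
      0 < \varphi < \pi, & \text{elliptical trajectory},\\
      \varphi = \pi/2,\ U_t = U_n, & \text{circular trajectory}.
    \end{cases}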
Structure of the Vibrator and Motor Operating Principle

The resonance-drive type piezoelectric motor introduced here uses an octagonal piezoelectric plate, with a thickness of 3.0 mm and a length (the distance between any two parallel side surfaces) of 25 mm, as the stator of the motor [63]. Metal electrodes divide each main surface of the plate into two equal regions, and the electrodes on one main surface are arranged perpendicular to the electrodes on the other main surface (Figure 6a). Two alumina eyelets, used as the friction contact elements, are attached symmetrically at the centers of the top and bottom surfaces (Figure 6b).

The driving signals are applied between the two surface electrodes on each face of the stator. When a signal is applied between the two top electrodes, we can assume that one electrode carries A cos(w1 t) and the other −A cos(w1 t). Similarly, a second signal is applied between the two bottom electrodes, so that one bottom electrode carries B cos(w2 t − ϕ) and the other −B cos(w2 t − ϕ) (Figure 6a). Even though the applied signals are between the two surface electrodes on each face, the resulting electric fields develop between the top and bottom electrodes in the thickness direction. Assuming that the thickness of the octagonal plate is d, the electric field in each of the four quarter regions is the potential difference between the overlapping top and bottom electrodes divided by d (a reconstruction of these expressions is given below).

When the magnitudes A and B and the thickness d are normalized to 1, the frequencies (w1 and w2) of the top and bottom signals are equal, and the phase angle (ϕ) is set to 90 degrees, the electric field equations can be rewritten using the sum-to-product trigonometric identity. Since sin(45°) = cos(45°) = 0.707, the magnitude of the normalized electric fields is ±1.414, which means that the plate is exposed to a 42% higher electric field. Moreover, the phase difference of the normalized electric field between any two adjacent regions is still 90 degrees (Figure 7).
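The quarter-region field expressions themselves did not survive reproduction here, but they can be plausibly reconstructed from the electrode arrangement: each quarter region lies between one top half-electrode and one bottom half-electrode, so its field is the potential difference across the thickness d. The following sketch of these expressions, with the sign pattern per quadrant assumed, reproduces the ±1.414 magnitude and the 90-degree phase difference stated above:

    % Reconstructed quarter-region fields (signs per quadrant are assumptions)
    \begin{aligned}
    E_{\mathrm{I}}   &= \tfrac{1}{d}\,[\,A\cos(w_1 t) - B\cos(w_2 t - \varphi)\,], &
    E_{\mathrm{II}}  &= \tfrac{1}{d}\,[\,A\cos(w_1 t) + B\cos(w_2 t - \varphi)\,], \\
    E_{\mathrm{III}} &= -E_{\mathrm{I}}, &
    E_{\mathrm{IV}}  &= -E_{\mathrm{II}}.
    \end{aligned}

With A = B = d = 1, w1 = w2 = w and ϕ = 90°, the sum-to-product identity gives E_II = √2 cos(wt − 45°) and E_I = √2 cos(wt + 45°), i.e., a magnitude of 1.414 and a 90-degree phase difference between adjacent regions.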
When the phase shift (ϕ) is 0 degrees, one can easily find that sections I and III have zero electric field, while the magnitude of the normalized electric field in sections II and IV is 2. Similarly, when the phase shift (ϕ) is 180 degrees, sections II and IV have zero electric field, and the magnitude of the normalized electric field in sections I and III is 2. For other values of the phase shift (ϕ), the electric fields in these regions change accordingly.

When only the bottom electrodes are excited with the signals B cos(w1 t − ϕ) and −B cos(w1 t − ϕ), leaving the top electrodes floating, the in-plane modes are excited only in the x-axis direction (note that the frequencies of the signals on the top and bottom electrodes are assumed to be the same). The corresponding first and second in-plane mode shapes, calculated using ATILA-GID software (ATILA-GID 2.0.0, Micromechatronics Inc., State College, PA, USA), are shown in Figure 8. When the signals A cos(w1 t) and −A cos(w1 t) are applied between the two top electrodes, only the modes in the y-axis direction are excited; the corresponding first and second in-plane mode shapes are shown in Figure 9. Because the electrodes on the main surfaces are identical but orthogonal, the excited in-plane modes are also identical but orthogonal.

When both driving signals are applied at the same time with the phase shift (ϕ) set to 90 degrees, the resulting movement is nothing but a hula-hoop trajectory. Indeed, the microscopic movement on the surface of the center eyelets can be fully controlled by configuring the magnitudes and phase shift of the top and bottom signals.
When the magnitudes (A and B) of the top and bottom signals are set to 1 V, the calculated displacements are shown in Figure 10. These displacements were calculated using ATILA-GID software on the side of the eyelet at the first in-plane resonance mode (at 68.7 kHz) and at the second in-plane resonance mode (at 154.5 kHz). The magnitude of the displacement under the same electric fields is larger at the second in-plane resonance mode than at the first in-plane mode. This result is expected because the maximum displacement at the first in-plane resonance, as seen in Figures 8 and 9, is at the side surface of the octagonal plate.

Movement trajectories on the side of the eyelet at 68.7 kHz for three driving conditions are shown in Figure 11. At this frequency, the displacement generated on the side surface of the coupling elements (eyelets) is circular. When the magnitude of the signals applied to the top and bottom electrodes is the same (A = B = 1 V) and the phase shift (ϕ) between the two signals is 90 degrees, the trajectory is a circle. When the magnitude of the signal applied between the top electrodes is doubled (in this case A = 2 V, B = 1 V), only the magnitude of the displacement in the y-axis direction is doubled. Similarly, when the magnitude of the signal applied between the bottom electrodes is doubled (in this case A = 1 V, B = 2 V), only the magnitude of the displacement in the x-axis direction is doubled.

When the phase angle (ϕ) between the signals on the top and bottom electrodes is changed from 0 to 180 degrees with the magnitudes (A and B) set to 1 V, the shape of the elliptical motion also changes (Figure 12). Note that when the phase angle is 0 degrees, the motion is in the oblique direction. This is because there are electric fields only in two diagonal regions (II and IV, as marked in Figure 6); in the other two diagonal regions (I and III), the electric fields are zero. When the phase angle is 180 degrees, the motion is again in the oblique direction. In this case, there are electric fields only in the two diagonal regions I and III, and the other two diagonal regions (II and IV) have zero electric field.
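The trajectory behavior described above (a circle for A = B at a 90-degree phase shift, axis-wise scaling when one magnitude is doubled, and collapse to an oblique line at 0 or 180 degrees) can be reproduced with a toy parametrization of the eyelet displacement. The linear mapping and the proportionality constant below are illustrative assumptions, not the FEM result.

    import numpy as np

    # Toy model of the eyelet displacement: the top-electrode signal drives
    # the y-direction and the bottom-electrode signal drives the x-direction,
    # as in the text. k is an arbitrary displacement-per-volt constant.

    def trajectory(A=1.0, B=1.0, phase_deg=90.0, k=1.0, n=720):
        wt = np.linspace(0.0, 2.0 * np.pi, n)
        phi = np.radians(phase_deg)
        x = k * B * np.cos(wt - phi)   # x follows the bottom signal
        y = k * A * np.cos(wt)         # y follows the top signal
        return x, y

    for A, B, phase in [(1, 1, 90), (2, 1, 90), (1, 2, 90), (1, 1, 0), (1, 1, 180)]:
        x, y = trajectory(A, B, phase)
        print(f"A={A} V, B={B} V, phase={phase:3d} deg -> "
              f"x-extent {np.ptp(x):.2f}, y-extent {np.ptp(y):.2f}")
    # A=B at 90 deg gives a circle; doubling A doubles only the y-extent and
    # doubling B only the x-extent; at 0 or 180 deg the ellipse collapses to
    # an oblique straight line, as in Figures 11 and 12.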
Structure of the Linear Motor

In the test motor, the piezoelectric vibrating element was placed between two identical plastic holders made from PEEK material and mounted together in a steel housing. The steel housing was guided by two linear bearings in the pre-stress direction, and the two linear bearings were fixed on a base plate. A slider with a U-shaped cross-section, guided by another, larger linear bearing, was also fixed on the base plate. Two alumina rods were attached to the two ends of the U-shaped slider.
The side surfaces of the two eyelets on the vibrator and the two alumina rods on the slider were in tangential contact (Figure 13). Two or three springs applied pressure against the slider. In the test motor, a position sensor was embedded in the base plate, and a reflective surface was attached to the side surface of the U-shaped slider (Figure 14).

Two-phase sinusoidal waveform signals with independent magnitudes and phase shift were generated by a multifunction synthesizer (WF1946B, NF Corp., Yokohama, Japan) and amplified by two power amplifiers (HAS 4011 and HAS 4014, NF Corp.). Before applying the two signals to the test motor, two electromagnetic transformers, each with a 1:1 primary-to-secondary turn ratio, were used to shift the common grounds to a floating state. In this case, when the signal on one of the top electrodes was A cos(w1 t), the signal on the other top electrode was −A cos(w1 t); the magnitudes of these signals control the displacement in the normal direction. The second signal, phase-shifted by 90 degrees and coming from the other power amplifier, was likewise applied first to an electromagnetic transformer and then to the bottom electrodes. The signals on the bottom electrodes, B cos(w1 t − 90°) and −B cos(w1 t − 90°), control the displacement in the tangential direction.
The slider position data were collected by a PC and used to calculate the motor control input and the speed and load characteristics.

Speed Characteristics

Conventionally, the speed of one- or two-phase driving type resonance piezoelectric motors is controlled by changing the magnitude of the driving signals, or the phase difference between the two signals, which causes the microscopic movements in the normal and tangential directions at the interface to increase or decrease at the same time. The disadvantage of both driving methods is nonlinearity, such as a dead-zone and hysteresis, especially in the low-speed region. In this motor, microscopic motion at the contact points in the tangential and normal directions is generated by the driving signals on the top and bottom electrodes, respectively. When the electrodes responsible for generating tangential displacements are excited, displacement is generated only in the tangential direction; similarly, when the electrodes responsible for generating normal displacements are excited, only normal displacement is generated. Since the magnitude of the normal displacement determines the generated force and the magnitude of the tangential displacement determines the slider speed, these two parameters are controlled independently in this motor.

In order to clarify the effect of independent driving, the motor was driven in the following two cases.

Case 1: The control input consisted of the two driving signals, one applied between the two top electrodes and the other between the two bottom electrodes. The signals on the top and bottom electrodes were changed with equal magnitudes at the same time, while the phase shift between them was kept at 90 degrees. As can be seen from the speed versus driving voltage (control input) curve in Figure 15, the threshold voltage was relatively large, and it increased further with increasing pre-stress. When the pre-stress was 35 N, the slider started to move at 60 V; when the pre-stress was increased to 70 N, the starting voltage of the slider increased to 100 V.

Case 2: The magnitude of the signal responsible for generating the normal displacement was kept at the maximum level (160 V peak-to-peak). With the actuator orientation seen in Figure 14, the signal applied between the two top electrodes generated displacement in the normal direction, while the signal applied between the two bottom electrodes generated the tangential displacement, which was the control input in Case 2 driving. Speed versus control input curves were again obtained for the two pre-stress conditions of 35 N and 70 N. As can be seen in Figure 15, Case 2 driving had smaller dead-zones, i.e., smaller threshold voltages. When the pre-stress force was 35 N, the control input threshold voltage decreased from 60 V to 5 V; when the pre-stress was increased to 70 N, the control input threshold voltage decreased from 100 V to 10 V.
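The difference between the two driving cases can be summarized in a small helper that maps a control input to the two signal magnitudes. This is a schematic of the driving logic only; the 160 V ceiling and the threshold values are taken from the text, while the mapping function itself is our illustrative framing, not the actual controller code.

    V_MAX = 160.0  # peak-to-peak ceiling of the normal-direction signal

    def drive_signals(u, case):
        """Map a control input u (volts) to (normal, tangential) magnitudes.

        Case 1: both magnitudes follow the control input together, with the
                phase shift between them fixed at 90 degrees.
        Case 2: the normal-direction signal is pinned at V_MAX and only the
                tangential-direction signal follows the control input.
        """
        if case == 1:
            return u, u
        if case == 2:
            return V_MAX, u
        raise ValueError("case must be 1 or 2")

    # Thresholds reported at 35 N pre-stress: motion starts at u = 60 V in
    # case 1, but already at u = 5 V in case 2.
    for case, threshold in [(1, 60.0), (2, 5.0)]:
        n, tg = drive_signals(threshold, case)
        print(f"case {case}: normal = {n:5.1f} V, tangential = {tg:5.1f} V at motion onset")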
Load Characteristics

In order to obtain the load characteristics, the test motor was driven under different load conditions (1, 5, 10, 20 N), and a series of position data was collected by the embedded encoder. With the magnitude of the signal generating the normal-direction displacement at 160 V and the magnitude of the signal generating the tangential-direction displacement at 140 V (peak-to-peak), the motor speeds were calculated from the measured position data. A typical load characteristic of the test motor is shown in Figure 16. Even though the main purpose of this study was to reduce the speed-control input (driving voltage) nonlinearities, the test motor can produce a maximum force of 20 N and a maximum speed of 70 mm/s.
Conclusions

Although piezoelectric motors exist in many structures, there are only a few operating principles, which allows them to be categorized as piezo-walk-drive, inertia-drive and resonance-drive types. In the piezo-walk-drive, normal and tangential displacements (and thus forces) at the interface perform "clamp-release-move" or "push while clamping" steps. In the inertia-drive, a tangential displacement alone, or a direction-dependent tangential force change at the interface, is enough to obtain movement. In the resonance-drive, the microscopic displacement at the interface has tangential and normal components, and the magnitudes of, and phase difference between, these displacements determine the direction of the microscopic movement.

The resonance-drive type piezoelectric motor introduced in this study covers both single- and multi-mode operation types; oblique motion at the interface is just the special case in which the phase difference is either 0 or 180 degrees.

Microscopic movement at the stator-slider interface of the new resonance piezoelectric motor is fully controllable through the phase shift and magnitudes of the driving signals on the surfaces of the octagonal plate. By driving the top and bottom electrodes with different frequencies (w1 and w2), even complex movements at the interface, beyond oblique or elliptical, are possible.

Figure 1. Motion sequences of a piezo-walk-drive motor proposed by Brisbane in 1965 [17].
Figure 5. In a resonance-drive type piezoelectric motor, there are elliptical or oblique motions at the stator-rotor (/slider) contact points.
Figure 6. (a) Metallization electrodes are oriented orthogonally on the main surfaces of the octagonal piezoelectric plate. (b) The coupling elements are two eyelets attached in the middle of the two main surfaces of the octagonal piezoelectric plate. The side surfaces of the eyelets (marked with red arrows) are in contact with the slider.
Figure 7. Electric fields in the four regions of the octagonal piezoelectric plate when the magnitudes A, B and the thickness d are normalized to 1. The frequencies (w1 and w2) of both signals are assumed to be equal, T is the period, and the phase shift (ϕ) is 90 degrees.
Figure 8. In-plane first and second mode shapes when only the bottom surface electrodes are electrically excited. Deformations are in the x-axis direction. (a) First in-plane mode at 68 kHz; (b) second in-plane mode at 154.5 kHz.
Figure 9. In-plane first and second mode shapes when only the top electrodes are electrically excited. Deformations are in the y-axis direction. (a) First in-plane mode at 68 kHz; (b) second in-plane mode at 154.5 kHz.
Figure 11. Calculated motion trajectories when the magnitude of the signals applied to the top or bottom electrodes is changed.
Figure 12. Response of the trajectory to the phase shift between the signals on the top and bottom electrodes.
Figure 13. Perspective and end views of the vibrator-slider interface. The side surfaces of both eyelets contacted the alumina rods on the U-shaped slider tangentially. The distance between any two parallel side surfaces of the plate was 25 mm and the thickness was 3.0 mm.
Figure 14. Perspective views (3D CAD drawing and photo) of the test linear motor. The piezoelectric vibrating element was placed between two identical holders.
Figure 15. Speed-driving voltage (control input) characteristics for two cases of driving. (a) Pre-stress in the normal direction: 35 N; (b) pre-stress in the normal direction: 70 N.
Figure 16. Load characteristics of the test linear motor. Sinusoidal wave signals generating normal and tangential displacements have magnitudes of 160 and 140 V (peak-to-peak), respectively.
12,860.4
2018-01-06T00:00:00.000
[ "Physics" ]
Time-of-flight and activation experiments on 147Pm and 171Tm for astrophysics

The neutron capture cross sections of several key unstable isotopes acting as branching points in the s-process are crucial for stellar nucleosynthesis studies, but they are very challenging to measure due to the difficult production of sufficient sample material, the high activity of the resulting samples, and the actual (n,γ) measurement, for which high neutron fluxes and effective background rejection capabilities are required. As part of a new program to measure some of these important branching points, radioactive targets of 147Pm and 171Tm have been produced by irradiation of stable isotopes at the ILL high-flux reactor. Neutron capture on 146Nd and 170Er at the reactor was followed by beta decay, and the resulting matrix was purified via radiochemical separation at PSI. The radioactive targets have been used for time-of-flight measurements at the CERN n TOF facility using the 19 and 185 m beam lines during 2014 and 2015. The capture cascades were detected using a set of four C6D6 scintillators, allowing the associated neutron capture resonances to be observed. The results presented in this work are the first ever determination of the resonance capture cross sections of 147Pm and 171Tm. Activation experiments on the same 147Pm and 171Tm targets with a high-intensity 30 keV quasi-Maxwellian flux of neutrons will be performed using the SARAF accelerator and the Liquid-Lithium Target (LiLiT) in order to extract the corresponding Maxwellian Averaged Cross Section (MACS). The status of these experiments and preliminary results are presented and discussed as well.

Introduction

The s- and r-processes are responsible for the formation in stars of practically all the chemical elements heavier than iron. The phenomenological picture of the classical s-process was formulated about 50 years ago in the seminal 1957 papers of Burbidge et al. [1] and of Cameron [2], where the entire s-process panorama was already sketched in its essential parts. They explain how, in this process, the elements heavier than iron are produced by a continuous chain of neutron capture reactions and beta decays. The phenomenology of the s-process implies that the solar abundance distribution is composed of two parts: a main component, which is responsible for the mass region from Y to Bi, and a weak component, which contributes to the region from Fe to Sr. The main and weak components can be assigned to low-mass stars (between 1 and 3 solar masses) and to massive stars (more than 8 solar masses), respectively. Accordingly, the Galactic enrichment with s-process material starts with the lighter s elements, because massive stars evolve much more quickly. For a recent and comprehensive review, see Ref. [3].
A quantitative description of the abundances arising from the s-process requires both the neutron capture rates and the β-decay probabilities of all the isotopes involved. Along the s-process path, unstable nuclei with relatively long (years) to very long (gigayears) half-lives, known as branching-point isotopes, become of utmost interest: their destruction via either beta decay or neutron capture depends on the conditions of the environment (density, temperature). Hence the importance of knowing the corresponding capture cross sections. Despite their pivotal role, as of today only the capture cross sections of 2 out of a list of 21 important s-process branching-point isotopes (see [3]) have been measured by neutron time-of-flight.

In this work, we add three more items to the list of measured isotopes. We have produced sizable quantities of 147Pm, 171Tm and 204Tl inside the ILL high-flux reactor, purified the material, made suitable targets out of it [4], and measured the corresponding capture cross sections by time-of-flight at the CERN n TOF facility [5,6] and by activation at LiLiT [7]. Since the data analysis is ongoing, this paper does not include final results but a description of the experiments and an outlook on the analysis and expected results.

Experiments

2.1. Production of the radioactive targets

The isotopes 147Pm, 171Tm and 204Tl were produced by neutron irradiation, at the high-flux reactor of the Institut Laue-Langevin (ILL), Grenoble, of 98.2 mg of 146Nd2O3 enriched to 98.8%, 238 mg of 170Er2O3 enriched to 98.1%, and 263 mg of 203Tl2O3 enriched to 99.5%. The powder of each isotope was pressed into pellets, each of which was then encapsulated in a high-purity quartz ampule sealed with a flame torch. These ampules were irradiated at ILL for a period of 55 days with an average neutron flux of 8.2 × 10^14 n/cm²/s and, after a cooling period of approximately 1.5 years, the samples were shipped to PSI, where they underwent chemical processing.

While the 204Tl target was left inside the quartz ampule for the subsequent measurements due to its prohibitive dose rate, the irradiated Nd (150 GBq) and Er (3 GBq) pellets were chemically purified prior to making suitable targets. The material was electroplated onto 5 µm thick aluminum backings, resulting in two high-quality targets of 22 mm diameter with a total of 3.8 mg of 171Tm and 85 µg of 147Pm. A picture of one of the targets is shown in Fig. 1.
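As a rough orientation for the production step, the number of product nuclei from such an irradiation can be estimated with the thin-target activation formula N_prod ≈ N_target · σ · Φ · t. The sketch below uses the flux, time and mass quoted above; the cross-section value is an assumed placeholder, and burn-up, decay during irradiation and flux self-shielding are ignored.

    # Back-of-the-envelope activation estimate for 146Nd(n,g)147Nd -> 147Pm,
    # using the thin-target formula N_prod ~ N_target * sigma * flux * time.
    # The capture cross section below is an assumed placeholder value.

    N_A = 6.022e23               # atoms/mol
    FLUX = 8.2e14                # n/cm^2/s, average ILL flux (from the text)
    T_IRR = 55 * 86400.0         # s, 55-day irradiation (from the text)
    MASS_ND2O3 = 98.2e-3         # g of enriched 146Nd2O3 (from the text)
    M_ND2O3 = 2 * 146 + 3 * 16   # g/mol, approximate molar mass of 146Nd2O3
    SIGMA = 1.4e-24              # cm^2 (~1.4 b), assumed thermal capture cross section

    n_146nd = 2 * (MASS_ND2O3 / M_ND2O3) * N_A   # two Nd atoms per formula unit
    n_captures = n_146nd * SIGMA * FLUX * T_IRR  # thin-target approximation

    print(f"146Nd atoms in the pellet  : {n_146nd:.2e}")
    print(f"captures (147Pm precursors): {n_captures:.2e}")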
Time-of-flight experiments at n TOF

The CERN n TOF time-of-flight facility features two neutron beam lines: a shorter one with increased neutron flux at only 19 m [5] and a longer one with better energy resolution at 185 m [6]. Both beam lines view the neutrons produced by spallation, induced by a pulsed 20 GeV/c proton beam impinging on a cylindrical lead block every (at best) 1.2 seconds.

After traveling through the chosen beam line, a fraction of the neutrons incident on the target under study (147Pm, 171Tm and 204Tl in this case) undergoes neutron capture reactions, and the subsequent γ-rays are detected by an array of four C6D6 liquid scintillators [8]. These detectors have very low neutron sensitivity, which allows one to eliminate the background due to neutrons scattered in the target. After the corresponding background subtraction, the measured distributions of capture counts as a function of time-of-flight, i.e., neutron energy, are transformed into the capture yield by applying the so-called Pulse Height Weighting Technique (PHWT) [9] and using the saturated resonance of 197Au for absolute normalization [10].

Activation experiments at LiLiT

The Liquid Lithium Target (LiLiT) [7], installed at the SARAF facility (Israel), provides the most intense quasi-Maxwellian neutron beam worldwide. The SARAF accelerator delivers a proton beam of 1-2 mA with an energy of ~1.93 MeV (just above the threshold of the 7Li(p,n) reaction) onto a thin (1.5 mm) liquid-lithium layer, hence providing the quasi-Maxwellian neutron energy distribution (see [11] for details). At LiLiT, Maxwellian Averaged Cross Sections (MACS) are measured via the activation technique. The targets are exposed to the neutron beam, and the number of capture reactions is determined from the number of (A+1)Z nuclei produced. Since the (A+1)Z isotope is radioactive, the number of unstable nuclei is quantified using a Ge detector that registers the associated emission of γ-rays. In this case, the MACS of 197Au serves as a reference.

Preliminary and expected results

The time-of-flight measurements of both 171Tm and 204Tl were carried out at the n TOF long neutron beam line, EAR1. In both cases, despite the severe background conditions arising from the high activity of the targets, we obtained a good data set that allows capture resonances to be resolved for the first time.

In the case of 171Tm, the preliminary capture yield is displayed in Fig. 2, showing resonances up to 700 eV and illustrating the good resolution of n TOF-EAR1. This data set will provide a complete set of resonance parameters, and the unresolved resonance region will be derived from these with the help of the Hauser-Feshbach statistical model. In the case of 204Tl, the observed resonances are actually in the keV region of interest in astrophysics. In the case of 147Pm, the mass of the target is so small (85 µg) that it is at the detection limit of the n TOF short beam line, EAR2. In this case, only a few resonances have been observed, and therefore the data will not allow extracting a cross-section value in the keV energy region of interest.

As for the activation measurements, both 147Pm(n,γ) and 171Tm(n,γ) have been successfully measured at LiLiT. Indeed, due to the high neutron beam intensity at LiLiT, we have significantly increased the statistics achieved in the previous experiment and will therefore be able to use more γ-ray lines, improving the accuracy and increasing the reliability of our results. The final results will be the measured MACS at 30 keV with an expected accuracy of 10%.

Figure 1. Picture of the 171Tm target.
Figure 2. Experimental capture yield of the 171Tm(n,γ) measurement at CERN n TOF.
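For reference, the non-relativistic relation between flight time and neutron energy, E = ½ m_n (L/t)², underlies all of the time-of-flight spectra discussed above. A small conversion helper is sketched below; the 185 m flight path is the EAR1 value quoted in the text, while everything else is a generic illustration.

    # Non-relativistic time-of-flight to neutron-energy conversion,
    # E = 0.5 * m_n * (L / t)^2, valid well below ~1 MeV.

    M_N = 1.674927e-27   # kg, neutron mass
    EV = 1.602177e-19    # J per eV

    def tof_to_energy_ev(t_us, flight_path_m=185.0):
        """Neutron kinetic energy in eV for a flight time given in microseconds."""
        v = flight_path_m / (t_us * 1e-6)   # m/s
        return 0.5 * M_N * v * v / EV

    for t_us in (100.0, 1000.0, 10000.0):
        print(f"t = {t_us:8.1f} us over 185 m -> E = {tof_to_energy_ev(t_us):12.3f} eV")
    # A ~700 eV resonance, the upper end quoted for 171Tm, corresponds to a
    # flight time of roughly 0.5 ms on the 185 m beam line.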
2,077.6
2017-09-13T00:00:00.000
[ "Physics" ]
The decomposition-based outer approximation algorithm for convex mixed-integer nonlinear programming

This paper presents a new two-phase method for solving convex mixed-integer nonlinear programming (MINLP) problems, called the Decomposition-based Outer Approximation Algorithm (DECOA). In the first phase, a sequence of integer-relaxed linear sub-problems (LP phase) is solved in order to rapidly generate a good linear relaxation of the original MINLP problem. In the second phase, the algorithm solves a sequence of mixed-integer linear programming sub-problems (MIP phase). In both phases, the outer approximation is improved iteratively by adding new supporting hyperplanes obtained by solving many easier sub-problems in parallel. DECOA is implemented as part of Decogo (Decomposition-based Global Optimizer), a parallel decomposition-based MINLP solver implemented in Python and Pyomo. Preliminary numerical results on 70 convex MINLP instances with up to 2700 variables show that, due to the cuts generated in the LP phase, on average only 2–3 MIP problems have to be solved in the MIP phase.

A large collection of real-world MINLP problems can be found in MINLPLib [33]. In this paper, we consider a subclass of MINLP problems whose feasible set is defined by integrality restrictions and convex nonlinear functions. Most current deterministic MINLP solvers are based on the branch-and-bound (BB) algorithm [4,6], in particular branch-and-cut, like ANTIGONE [26], BARON [31], Couenne [1], Lindo API [23] and SCIP [32]. Other BB-based methods are branch-cut-and-price [9], branch-decompose-and-cut [30] and branch-and-refine [22]. Although these methods have found many applications, they can be computationally very demanding due to a rapidly growing global search tree, which may prevent the method from finding an optimal solution in a reasonable time.

In contrast to BB, successive approximation methods solve an optimization problem without using a single global search tree. The outer approximation (OA) method [10,12], the extended cutting plane (ECP) algorithm [36] and the extended supporting hyperplane (ESH) algorithm [20] solve convex MINLPs by successive linearization of nonlinear constraints. A comparison of several solvers for convex MINLP [19] reveals that the SHOT (ESH-based) solver [20] and the AOA (OA-based) solver [18] have the best performance. Improving polyhedral outer approximations using extended formulations significantly reduces the number of OA iterations [25]. Generalized Benders Decomposition (GBD) [13,15] solves a convex MINLP by iteratively solving NLP and MIP sub-problems. The adaptive MIP OA method is based on the refinement of MIP relaxations by projecting infeasible points onto a feasible set, see [5,8].

Like these methods, DECOA builds a polyhedral outer approximation by linearization of nonlinear functions. The key difference to these well-known approaches is that DECOA uses decomposition-based cut generation, i.e. supporting hyperplanes are constructed only by solving small sub-problems in parallel. DECOA uses projection as the basic type of cut generation, i.e. infeasible points are projected onto the feasible set by solving small sub-problems. The algorithm also uses a line-search procedure (like ESH) in order to generate additional supporting hyperplanes. A detailed description of DECOA is given in Sect. 3. Note that Algorithm 3 of [29] presents a variant of DECOA which, in contrast to DECOA, solves non-convex MINLPs by adapting break-points without using projection steps.
DECOA is implemented as part of the MINLP solver Decogo (Decomposition-based Global Optimizer). Preliminary results of the implementation are presented.

Outline of the paper

This paper is structured as follows. In Sect. 2, the definition of block-separable MINLP and the notation are given. Section 3 presents the new decomposition-based outer approximation (DECOA) algorithm. A proof of convergence is given in Sect. 4. In Sect. 5, the implementation of DECOA is briefly described. Preliminary results of DECOA on convex MINLPs of the MINLPLib are presented in Sect. 6. We summarize findings and discuss possible next steps in Sect. 7.

Block-separable reformulation of MINLP

DECOA solves convex block-separable (or quasi-separable) MINLP problems of the form

min c^T x  s.t.  x ∈ P, x_k ∈ X_k, k ∈ K,  (1)

with X_k := G_k ∩ P_k ∩ Y_k ∩ [x̲_k, x̄_k], where G_k := {y ∈ R^{n_k} : g_{kj}(y) ≤ 0, j ∈ [m_k]}.

The vector of variables x ∈ R^n is partitioned into |K| blocks such that n = Σ_{k∈K} n_k, where n_k is the dimension of the k-th block, and x_k ∈ R^{n_k} denotes the variables of the k-th block. The vectors x̲, x̄ ∈ R^n determine the lower and upper bounds on the variables. The linear constraints defining the feasible set P are called global. The constraints defining the feasible set X_k are called local. The set X_k consists of the set G_k of m_k local nonlinear constraints, the set P_k of |J_k| local linear constraints and the set Y_k of integrality constraints. In this paper, it is assumed that all the local nonlinear constraint functions g_{kj} : R^{n_k} → R, j ∈ [m_k], are bounded, continuously differentiable and convex within the set [x̲_k, x̄_k]. The global linear constraints P are defined by a_j ∈ R^n, b_j ∈ R, j ∈ J, and the local linear constraints P_k are defined by a_{kj} ∈ R^{n_k}, b_{kj} ∈ R, j ∈ J_k. The set Y_k defines the set of integer values of the variables x_{ki}, i ∈ I_k, where I_k is an index set. The linear objective function is defined by c^T x := Σ_{k∈K} c_k^T x_k, c_k ∈ R^{n_k}. Furthermore, we define the aggregate sets X := ∏_{k∈K} X_k and G := ∏_{k∈K} G_k.

The block sizes n_k can influence the performance of a decomposition algorithm. It is possible to reformulate a general sparse MINLP defined by factorable functions g_{kj} as a block-separable optimization problem with a given maximum block size n_k by adding new variables and copy-constraints [27,31,32]. It has been shown that a MINLP can be reformulated as a separable program in which all blocks have size one. However, such a reformulation may not preserve the convexity of the constraints. A natural block-separable reformulation preserving the convexity of the constraints is given by the connected components of the Hessian adjacency graph, see (23).

DECOA

DECOA iteratively solves and improves an outer approximation (OA) problem, in which the convex nonlinear set G is approximated by finitely many hyperplanes. In each iteration, the outer approximation is refined by generating new supporting hyperplanes. Due to the block-separability of problem (1), the sample points for supporting hyperplanes are obtained by solving low-dimensional sub-problems. DECOA consists of two parts: an LP phase and a MIP phase. In the LP phase, the algorithm initializes the outer approximation of set G by solving a linear programming outer approximation (LP-OA) master problem. In the MIP phase, the algorithm refines the outer approximation of set G by solving a mixed-integer programming outer approximation (MIP-OA) master problem. In the end, the final MIP-OA master problem is a reformulation of problem (1).
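To make the block-separable form of (1) concrete, here is a minimal toy instance written in Pyomo (the modelling language Decogo itself builds on): two blocks of two variables each, a linear objective, one global linear constraint coupling the blocks, and one convex quadratic local constraint per block. The data are invented for illustration only.

```python
# A toy convex block-separable MINLP of the form (1). Invented data.
from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           Integers, Reals, minimize)

m = ConcreteModel()
# block 1: x[0] (integer), x[1]; block 2: x[2], x[3]
m.x = Var(range(4), bounds=(-5, 5), domain=Reals)
m.x[0].domain = Integers

m.obj = Objective(expr=m.x[0] + 2*m.x[1] + m.x[2] - m.x[3], sense=minimize)
# global linear constraint (set P) couples the two blocks
m.glob = Constraint(expr=m.x[0] + m.x[1] + m.x[2] + m.x[3] >= 1)
# local convex nonlinear constraints (sets G_1 and G_2), one per block
m.g1 = Constraint(expr=m.x[0]**2 + m.x[1]**2 <= 4)
m.g2 = Constraint(expr=(m.x[2] - 1)**2 + m.x[3]**2 <= 2)
```

Note how the nonlinear constraints only involve variables from a single block; the blocks interact solely through the linear global constraint, which is exactly the structure DECOA exploits.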
In the following subsections we describe the master problems and sub-problems and outline the basic version of DECOA. At the end, we describe the full DECOA algorithm with all improvements.

OA master problem

DECOA obtains a solution estimate x̂ by solving an OA master problem

min c^T x  s.t.  x ∈ P, x_k ∈ X̌_k, k ∈ K,  (5)

where X̌_k ⊇ X_k is a polyhedral outer approximation of the set X_k. Note that X̌ := ∏_{k∈K} X̌_k. The polyhedral outer approximation Ǧ_k ⊇ G_k of the convex nonlinear set G_k is defined by the supporting hyperplanes

g_{kj}(ŷ) + ∇g_{kj}(ŷ)^T (x_k − ŷ) ≤ 0,  ŷ ∈ T_k,  (7)

where T_k is a set of sample points and ǧ_{kj}(x) denotes the resulting piecewise linear underestimator of the function g_{kj}. The supporting hyperplanes are defined by linearization at the sample points ŷ ∈ T_k. Note that the linearizations are computed only for the nonlinear constraints that are active at the point ŷ ∈ T_k, i.e. g_{kj}(ŷ) = 0. Furthermore, we define Ǧ := ∏_{k∈K} Ǧ_k. Note that the OA problem (5) can be infeasible if the given MINLP model (1) is infeasible, e.g. because of data or model errors. Since most MIP solvers, like SCIP, are able to detect the infeasibility of a model, a feasibility flag can be returned after solving (5), which can be used to stop DECOA if the MINLP model (1) is infeasible.

Basic DECOA

In this subsection we describe the basic version of DECOA, in which the refinement is performed only by solving projection sub-problems. Iteratively, the algorithm computes a solution estimate x̂ by solving the MIP-OA master problem (5) defined by

min c^T x  s.t.  x ∈ P, x_k ∈ Ǧ_k ∩ P_k ∩ Y_k, k ∈ K.  (8)

After solving the MIP-OA master problem, the projection sub-problem

ŷ_k = argmin { ||y − x̂_k||² : y ∈ G_k ∩ P_k }  (9)

is solved for each k ∈ K, where x̂_k is the k-th part of the solution x̂ of the MIP-OA problem (8). The solution ŷ_k is used for updating the outer approximation Ǧ by generating new supporting hyperplanes as defined in (7).

Algorithm 1 Basic DECOA
1: for k ∈ K do Ǧ_k ← R^{n_k}
2: repeat
3:   x̂ ← solveMipOA(P, X̌)
4:   for k ∈ K do Ǧ_k ← addProjectCuts(x̂_k, P_k, Ǧ_k)
5: until stopping criterion

Algorithm 1 describes the basic version of DECOA. It iteratively solves the MIP-OA master problem (8) by calling the procedure solveMipOA. Then the algorithm calls the procedure addProjectCuts for the refinement of the set Ǧ; this performs a projection from the point x̂ onto the feasible set by solving sub-problems (9) and adds linearization cuts at the solution points ŷ_k. The algorithm iteratively performs these steps until a stopping criterion is fulfilled. Theorem 1 proves that Algorithm 1 converges to the global optimum of problem (1). However, solving the MIP-OA (8) from scratch would be computationally demanding. In order to speed up the convergence, we design an algorithm which reduces the number of times a MIP-OA master problem has to be solved. The improved DECOA algorithm is presented in the two following subsections.

The LP phase

In order to rapidly generate an initial outer approximation Ǧ and to reduce the number of iterations in the MIP phase, DECOA iteratively solves the LP-OA master problem and improves it by solving small sub-problems. The LP-OA master problem corresponding to (5) is defined by

min c^T x  s.t.  x ∈ P, x_k ∈ Ǧ_k ∩ P_k, k ∈ K.  (10)

To further improve the quality of the set Ǧ, the following line search sub-problem can be solved for each k ∈ K:

(α̂_k, ŷ_k) = argmax { α : ŷ_k = x̊_k + α (x̂_k − x̊_k) ∈ G_k ∩ P_k },  (11)

where x̂_k is the k-th part of the solution x̂ of the LP-OA master problem (10) and x̊_k is an interior point of the set G_k ∩ P_k. The obtained solution point ŷ_k is an additional support point for improving the outer approximation Ǧ. For solving the line search sub-problems (11), one has to obtain an interior point x̊. We consider the following NLP problem

min s  s.t.  g_{kj}(x_k) − s ≤ 0, j ∈ [m_k], k ∈ K,  x ∈ P, x_k ∈ P_k, k ∈ K.  (12)

Note that problem (12) is convex, since the functions g_{kj}(x_k) − s are convex.
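The following sketch illustrates the projection-based cut generation of sub-problem (9) together with the linearization cut (7), assuming a single block with one convex constraint (the unit disk). The helper names are ours, not Decogo's, and a general-purpose NLP solver stands in for the dedicated sub-problem solver.

```python
# Minimal sketch of projection + supporting-hyperplane cut, for one block
# with g(y) = ||y||^2 - 1 <= 0. Illustrative only.
import numpy as np
from scipy.optimize import minimize

def g(y):
    return y @ y - 1.0

def grad_g(y):
    return 2.0 * y

def add_project_cut(x_hat, cuts, tol=1e-8):
    """Project an infeasible OA solution x_hat onto {g <= 0} (Eq. (9)) and
    add a supporting hyperplane at the projection point (Eq. (7))."""
    res = minimize(lambda y: np.sum((y - x_hat) ** 2), x0=x_hat,
                   constraints=[{"type": "ineq", "fun": lambda y: -g(y)}])
    y_hat = res.x
    if g(y_hat) > -tol:                      # constraint active at y_hat
        a = grad_g(y_hat)                    # cut: g(y_hat) + a^T (x - y_hat) <= 0
        cuts.append((a, g(y_hat) - a @ y_hat))  # stored as a^T x + const <= 0
    return y_hat, cuts

cuts = []
y_hat, cuts = add_project_cut(np.array([2.0, 0.0]), cuts)
print(y_hat, cuts)   # projection lands on the unit circle; one cut is added
```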
Given that the original problem (1) has a solution, problem (12) also has a solution x̊ ∈ P ∩ ∏_{k∈K} (G_k ∩ P_k). It is important that the point x̊ is contained in the interior of the set P ∩ ∏_{k∈K} (G_k ∩ P_k). If the point x̊ lies on the boundary of this set, the solution of problem (11) will always be the same, i.e. the same supporting hyperplanes will be generated repeatedly. In practice, the interior point x̊ can be obtained by solving the integer-relaxed NLP problem (1), where the objective function is a constant (zero), using an interior-point-based NLP solver such as IPOPT [34].

Algorithm 2 LP phase of DECOA
1: function OaStart
2:   for k ∈ K do Ǧ_k ← R^{n_k}
3:   repeat
4:     x̂ ← solveLpOA(P, X̌)
5:     for k ∈ K do Ǧ_k ← addProjectCuts(x̂_k, P_k, Ǧ_k)
6:   until no improvement
7:   x̊ ← solveNLPZeroObj(x̂, P, X)
8:   repeat
9:     for k ∈ K do Ǧ_k ← addProjectCuts(x̂_k, P_k, Ǧ_k)
10:    for k ∈ K do Ǧ_k ← addLineSearchCuts(x̂_k, x̊_k, P_k, Ǧ_k)
11:    x̂ ← solveLpOA(P, X̌)
12:  until no improvement
13:  (x̃, Ǧ) ← addUnfixedNlpCuts(x̂, P, X)
14:  return (x̂, x̊, Ǧ)

Algorithm 2 describes the LP phase of the DECOA algorithm for a rapid initialization of the polyhedral outer approximation. At the beginning, it solves the LP-OA master problem defined in (10) by calling the procedure solveLpOA and the projection sub-problems (9), and then adds linearization cuts at the solution points ŷ. This loop, described in lines 3-6, is performed until there is no improvement, i.e. c^T(x̂^p − x̂^{p+1}) < ε, where ε is a desired tolerance. Then, in order to conduct the line search, the algorithm finds the interior point x̊ by calling the procedure solveNLPZeroObj. This procedure solves an NLP problem obtained by relaxing the integrality constraints of problem (1), where the objective function is a constant (zero). The algorithm then performs a similar loop, described in lines 8-12, with the procedure addLineSearchCuts(x̂, x̊). This procedure solves the line search sub-problems (11) between the LP-OA solution point x̂ and the interior point x̊, and adds linearization cuts at the solution points ŷ of the line search sub-problems. Finally, the algorithm calls the procedure addUnfixedNlpCuts, which computes a solution x̃ of the integer-relaxed NLP problem (1) and adds linearization cuts at the solution point x̃.

MIP phase

Once a good initial outer approximation has been obtained through the LP phase, the algorithm takes the integrality constraints Y_k into account by defining the MIP-OA master problem (8). After the first solution estimate x̂ has been obtained by solving the MIP-OA master problem (8), DECOA computes a solution candidate x̃ by solving the NLP master problem with fixed integer variables,

x̃ = argmin { c^T x : x ∈ P,  x_k ∈ G_k ∩ P_k,  x_{ki} = x̂_{ki}, i ∈ I_k,  k ∈ K },  (13)

where x̂ is the solution of the MIP-OA master problem (8) and I_k is the set of integer variables in the k-th block. Notice that if the outer approximation X̌ is still not close to the set X, (13) does not necessarily yield a feasible solution.

Algorithm 3 Decomposition-based outer approximation algorithm
...
8:   if x̃ ∈ X and c^T x̃ < v then
9:     x* ← x̃
10:    v ← c^T x̃
11:  if v − c^T x̂ < ε then
12:    return (x̂, x*, Ǧ)
13:  for k ∈ K do Ǧ_k ← fixAndRefine(x̃_k, P, X̌_k)
14:  for ...

If the solution point x̃ of problem (13) improves the best solution candidate, i.e. x̃ ∈ X and it improves the upper bound of the objective function value, then the point x̃ is a new solution candidate of problem (1), denoted by x*. Moreover, if the objective function value c^T x* is less than the current upper bound v, we set v to c^T x*.
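The line search of sub-problem (11) is essentially a one-dimensional root search along the segment between the interior point and the exterior OA solution. The sketch below implements it by bisection for a toy convex constraint; the function and point values are invented for illustration.

```python
# Minimal sketch of the line-search cut generation (sub-problem (11)):
# bisect between an interior point (g < 0) and an exterior OA solution
# (g > 0) to locate the boundary point of {g <= 0}, where a supporting
# hyperplane can be added. Names and data are ours.
import numpy as np

def g(y):                       # toy convex local constraint (unit disk)
    return y @ y - 1.0

def line_search_boundary(x_int, x_hat, tol=1e-10):
    lo, hi = 0.0, 1.0           # alpha = 0 -> interior, alpha = 1 -> exterior
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        y = x_int + mid * (x_hat - x_int)
        if g(y) <= 0.0:
            lo = mid            # still feasible: move toward x_hat
        else:
            hi = mid            # infeasible: move toward x_int
    return x_int + lo * (x_hat - x_int)

y_hat = line_search_boundary(np.array([0.0, 0.0]), np.array([3.0, 1.0]))
print(y_hat, g(y_hat))          # a point on the boundary of the unit disk
```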
In order to further refine the outer approximation Ǧ by exploiting the block-separability of problem (1), we consider partly-fixed OA problems, which are defined similarly to the MIP-OA problem (8) but with the variables fixed for all blocks except one, i.e. for all k ∈ K:

min { c^T x : x ∈ P,  x_k ∈ Ǧ_k ∩ P_k ∩ Y_k,  x_m = x̃_m, m ∈ K \ {k} },  (14)

where x̃ is a solution point of the NLP problem (13). The solution points of problem (14) can be used for the refinement of the outer approximation Ǧ as a basis for solving the projection sub-problem (9). Note that the solution of problem (14) provides information about the fixation of the integer variables in problem (13): if the fixations in problem (13) are feasible, then problem (14) has a feasible solution; otherwise problem (14) has no feasible solution, because the global constraints P are not satisfied.

Algorithm 3 describes DECOA, which computes a solution estimate x̂ by solving the MIP-OA master problem (8) and a solution candidate x* by solving the NLP master problem with fixed integers (13). At the beginning, the upper bound v of the optimal value of problem (1) and the solution candidate x* are set to ∞ and ∅, respectively. Since the goal is to reduce the number of MIP-solver runs, the algorithm calls the procedure OaStart, described in Algorithm 2, to initialize a good outer approximation. The procedure solveMipOA computes a solution estimate x̂ by solving the MIP-OA master problem (8). When the first solution estimate x̂ has been obtained, DECOA starts the main loop described in lines 5-18. At the beginning of the loop, the procedure addFixedNlpCuts is called, which solves the NLP master problem with fixed integers (13). This procedure uses the solution estimate x̂ to fix the integer variables and returns a solution point x̃, which might not be feasible. If the point x̃ is feasible and the objective function value c^T x̃ is lower than the current upper bound v, the solution candidate x* and the upper bound v are updated accordingly. Moreover, if the objective function gap between the solution estimate x̂ and the solution candidate x* is small enough, i.e. v − c^T x̂ < ε, the algorithm stops. These steps are described in lines 8-12.

If the objective function gap between the solution estimate x̂ and the solution candidate x* is not closed, DECOA improves the outer approximation Ǧ by generating new supporting hyperplanes. For the refinement of the set Ǧ, DECOA calls fixAndRefine, which solves the partly-fixed OA problem (14); a detailed description of this procedure is given in Algorithm 4. As in Algorithm 2, in order to obtain sample points for new supporting hyperplanes, the line search sub-problems (11) and the projection sub-problems (9) are solved, using the solution point x̂ of the MIP-OA master problem (8). After the refinement of the set Ǧ, DECOA calls solveMipOA to compute a new solution estimate x̂ by solving problem (8). If the gap between the point x̂ and the point x* is closed, DECOA terminates and returns the solution estimate x̂, the solution candidate x* and the polyhedral outer approximation Ǧ, which is then a reformulation of the original problem (1).

Algorithm 4 Cut generation per block (fixAndRefine)
1: for k ∈ K do
2:   repeat
3:     x̂ ← solveFixMipOA(x̃, P, X̌, k)
4:     Ǧ_k ← addProjectCuts(x̂_k, P_k, Ǧ_k)
5:   until integer variables of x̂ are not changed
6: return (x̂, Ǧ)

Algorithm 4 describes the function fixAndRefine, which is used for the refinement of the set Ǧ. For each block k ∈ K, the function calls the procedure solveFixMipOA, which solves the partly-fixed OA master problem (14). The obtained solution point x̂ is then used for solving the projection sub-problems and adding linearization cuts by calling the procedure addProjectCuts.
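The integer-fixing step behind problem (13) is easy to express in Pyomo, where variables can be fixed and unfixed in place. The sketch below mirrors the role of addFixedNlpCuts without reproducing Decogo's actual code; the solver choice and function name are assumptions.

```python
# Minimal sketch of solving the NLP with fixed integers (problem (13)):
# fix integer variables at the MIP-OA values, re-solve the continuous NLP,
# then unfix so later iterations can change the integer assignment.
from pyomo.environ import SolverFactory

def solve_fixed_nlp(model, int_vars, mip_solution, nlp_solver="ipopt"):
    for v, val in zip(int_vars, mip_solution):
        v.fix(round(val))                    # fix integers from the MIP-OA solution
    result = SolverFactory(nlp_solver).solve(model)
    for v in int_vars:
        v.unfix()                            # restore integrality for later steps
    return result
```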
This procedure is repeated until the integer variables of the solution point x̂ no longer change.

Proof of convergence

In this section, it is proven that basic DECOA, as depicted in Algorithm 1, either converges to a global optimum of (1) in a finite number of iterations or generates a sequence which converges to a global optimum. In order to prove the convergence, it is assumed that the MIP-OA master problems (5) and (8) and the projection sub-problem (9) are solved to optimality. We also prove the convergence of improved DECOA as outlined in Algorithm 3.

Due to convexity, the function ǧ_{kj}(x) defined in (7) is an affine underestimator of the function g_{kj} and, therefore, the set X̌^p consisting of the corresponding hyperplanes at iteration p is an outer approximation of the set X. Since basic DECOA adds new supporting hyperplanes in each iteration, it creates a sequence of sets X̌^p with the property

X̌^p ⊇ X̌^{p+1} ⊇ X,  p = 1, 2, ...  (15)

Lemma 1 If DECOA described in Algorithm 1 stops after p < ∞ iterations and the last solution x̂^p of the OA master problem (5) fulfills all constraints of (1), the solution is also an optimal solution of the original problem (1).

Proof We adapt the proof of [20]. Since DECOA stops at iteration p, x̂^p is an optimal solution of (5) and has the optimal objective function value of (1) within X̌^p ∩ P. From property (15) it is clear that X̌^p also includes the feasible set X. Since x̂^p also satisfies the nonlinear and integrality constraints, it is in the feasible set, i.e. x̂^p ∈ P ∩ X. Because x̂^p minimizes the objective function within X̌^p ∩ P, which includes the entire feasible set, and x̂^p ∈ P ∩ X, it is also an optimal solution of (1).

In Theorem 1 we prove that Algorithm 1 generates a sequence of solution points converging to a global optimum. In order to prove this, we present intermediate results in Lemmas 2-5.

Lemma 2 If x̂^p ∉ G, then the cuts added at iteration p separate x̂^p from the updated outer approximation, i.e. x̂^p ∉ X̌^{p+1}.

Proof Given that x̂^p ∉ G, there exists (k, j) such that g_{kj}(x̂^p_k) > 0. This means that for the solution ŷ_k of (9), ŷ_k ≠ x̂^p_k. Note that ŷ_k, x̂^p_k ∈ P_k. For this proof, we set G̃_k := G_k ∩ P_k = {y ∈ R^{n_k} : g̃_{kj}(y) ≤ 0, j ∈ [m̃_k]}, with m̃_k = m_k + |J_k|, and, in (9), replace G_k ∩ P_k by G̃_k. Note that no linearization cuts of P_k are added, since they coincide with the linear constraints P_k; hence only linearization cuts of the nonlinear constraints G_k are added. Let A_k be the set of indices of the constraints of G̃_k that are active at ŷ_k, i.e. g̃_{kj}(ŷ_k) = 0, j ∈ A_k. According to the KKT conditions of the projection sub-problem (9), there exist μ_j ≥ 0, j ∈ A_k, such that

2 (ŷ_k − x̂^p_k) + Σ_{j∈A_k} μ_j ∇g̃_{kj}(ŷ_k) = 0,  (16)

where the μ_j correspond to the constraints of G̃_k. Multiplying (16) by (x̂^p_k − ŷ_k) yields Σ_{j∈A_k} μ_j ∇g̃_{kj}(ŷ_k)^T (x̂^p_k − ŷ_k) = 2 ||x̂^p_k − ŷ_k||² > 0. Given that μ_j ≥ 0, j ∈ A_k, there exists at least one j ∈ A_k for which ∇g̃_{kj}(ŷ_k)^T (x̂^p_k − ŷ_k) > 0, i.e. the corresponding supporting hyperplane cuts off x̂^p_k.

In Lemma 3 we show that if Algorithm 1 does not stop in a finite number of iterations, the sequence of solution points contains at least one convergent subsequence {x̂^{p_i}}_{i=1}^∞. Since the subsequence {x̂^{p_i}}_{i=1}^∞ is convergent, there exists a limit lim_{i→∞} x̂^{p_i} = z. In Lemmas 4 and 5, we show that z is not only within the feasible set of (1) but is also an optimal solution of (1).

Lemma 3 If Algorithm 1 does not stop in a finite number of iterations, it generates a convergent subsequence {x̂^{p_i}}_{i=1}^∞.

Proof We adapt the proof of [20]. Since the algorithm has not terminated, none of the solutions of the OA master problem (5) are in the feasible set, i.e. x̂^p ∉ P ∩ X for all p = 1, 2, ... in the solution sequence. Therefore, all the points in the sequence {x̂^p}_{p=1}^∞ are distinct due to Lemma 2.
Since {x̂^p}_{p=1}^∞ contains an infinite number of distinct points, all lying in the compact set P, the sequence contains a convergent subsequence according to the Bolzano-Weierstrass theorem.

Lemma 4 The limit z of any convergent subsequence {x̂^{p_i}}_{i=1}^∞ generated by Algorithm 1 belongs to the feasible set of (1).

Proof Let ŷ^{p_j} be the sample point obtained by solving the projection sub-problem (9) with the point x̂^{p_j}_k. Consider the set G̃_k of the proof of Lemma 2 containing all constraints, and let A_k be the set of indices of the constraints of G̃_k that are active at ŷ^{p_j}_k, i.e. g̃_{ki}(ŷ^{p_j}_k) = 0, i ∈ A_k. Note that only linearization cuts of G_k are added. Since Algorithm 1 adds, for each active nonlinear constraint i ∈ A_k, the cut ∇g̃_{ki}(ŷ^{p_j}_k)^T (x_k − ŷ^{p_j}_k) ≤ 0, every later iterate satisfies these cuts. Using the KKT multipliers in (16) and the convergence of the subsequence {x̂^{p_j}}_{j=1}^∞ (Lemma 3), it follows that the projection distance ||x̂^{p_j}_k − ŷ^{p_j}_k|| tends to 0 as j → ∞. Since the sequence {ŷ^{p_j}}_{j=1}^∞ lies in G and the sequence {x̂^{p_j}}_{j=1}^∞ lies in P ∩ Y, and these sequences have the common limit point z, the point z is feasible, i.e. z ∈ P ∩ X.

Lemma 5 The limit point of a convergent subsequence is a global minimum point of (1).

Proof We adapt the proof of [20]. Because each set X̌^p is an outer approximation of the feasible set X, c^T x̂^{p_i} gives a lower bound on the optimal value of the objective function. Due to property (15), the sequence {c^T x̂^{p_i}}_{i=1}^∞ is nondecreasing, and since the objective function is continuous, we get lim_{i→∞} c^T x̂^{p_i} = c^T z. According to Lemma 4, the limit point z is within the feasible set P ∩ X and, because it is a minimizer of the objective function within a set including the entire feasible set, it is also an optimal solution of (1).

Since Lemmas 4 and 5 apply to all convergent subsequences generated by solving the OA master problems (5), any limit point of such a sequence is a global optimum. We summarize the convergence results in the next theorem.

Theorem 1 Algorithm 1 either finds a global optimum of (1) in a finite number of iterations or generates a sequence {x̂^{p_i}}_{i=1}^∞ converging to a global optimum.

Proof Suppose the algorithm stops in a finite number of iterations. Then the last solution of the OA master problem (5) satisfies all constraints and, according to Lemma 1, is a global optimum of (1). In case the algorithm does not stop in a finite number of iterations, it generates a sequence converging to a global optimum of (1) according to Lemmas 3 and 5.

In Theorem 2 we prove that improved DECOA, described in Algorithm 3, also converges to a global optimum of (1).

Theorem 2 DECOA described in Algorithm 3 either finds a global optimum of (1) in a finite number of iterations or generates a sequence {x̂^{p_i}}_{i=1}^∞ converging to a global optimum.

Proof The core idea of improved DECOA, described in Algorithm 3, is the same as that of basic DECOA described in Algorithm 1. In Algorithm 3 we introduce enhancements, such as the LP-OA master problem and the line search sub-problems, in order to speed up the convergence of Algorithm 1. Hence, improved Algorithm 3 refines the outer approximation X̌ faster, because in each iteration the additional methods make the outer approximation X̌ smaller. Moreover, all conditions assumed in the proof of Theorem 1 remain valid. Therefore, the proof is analogous to the proof of Theorem 1.

Implementation of DECOA

Algorithm 3 was implemented with Pyomo [17], an algebraic modelling language in Python, as part of the parallel MINLP solver Decogo [29]. The implementation of Decogo is not yet finished; in particular, parallel solving of sub-problems has not been implemented yet.
The solver uses SCIP 5.0 [16] for solving MIP problems and IPOPT 3.12.8 [35] for solving LP and NLP problems. Note that it is possible to use other suitable solvers which can interface with Pyomo.

Very often, problems are not given in a block-separable form. Therefore, a block-structure identification of the original problem and its automatic reformulation into a block-separable form have been implemented. The block-structure identification is based on the idea of connected components of a Hessian adjacency graph. Consider a MINLP problem defined by n variables and by |M| functions h_m, m ∈ M, and consider a Hessian adjacency graph G = (V, E) defined by the vertex and edge sets

V := [n],  E := {(i, j) : ∂²h_m / ∂x_i ∂x_j ≢ 0 for some m ∈ M}.  (23)

In order to subdivide the set of variables into |K| blocks, we compute the connected components V_k, k ∈ K, of G with ∪_{k∈K} V_k = V. We obtain the lists of variables V_k ⊂ V, k ∈ K, such that n = Σ_{k∈K} n_k, where n_k = |V_k|. In the implementation, we do not compute the Hessians of the functions h_m. Instead, we iterate over the (nonlinear) expressions of the functions h_m: if two variables x_i and x_j are contained in the same nonlinear expression, we insert the edge (i, j) into the edge set E of G. Using the blocks V_k, k ∈ K, which correspond to the connected components of the graph G, we reformulate the original problem into the block-separable MINLP problem described in (1). We perform this procedure by adding new variables and constraints such that the objective function and the global constraints are linear. Note that the reformulated problem remains convex.

As mentioned in Sect. 3, we add the supporting hyperplanes for each active constraint at the point ŷ ∈ T_k according to the formula

g_{kj}(ŷ) + ∇g_{kj}(ŷ)^T (x_k − ŷ) ≤ 0.  (24)

Theoretically, we have g_{kj}(ŷ) = 0. In practice, the value g_{kj}(ŷ) is often very small but, because of finite numerical accuracy, might not be identical to zero. To guarantee that the linearization cuts are valid, in practice we keep the non-zero value of g_{kj}(ŷ) in (24).

DECOA described in Algorithm 3 terminates based on the relative gap, i.e.

|v − c^T x̂| / (10^{-12} + |v|) < ε,

where ε is a desired tolerance. In addition, the loops in the LP phase, described in Algorithm 2, are terminated if there is no improvement of the objective function value, i.e. c^T(x̂^{p+1} − x̂^p) < δ, where δ is a desired tolerance.

Numerical results

DECOA described in Algorithm 3 has been tested on convex MINLP problems from MINLPLib [33]. Some instances do not have a reasonable block structure, i.e. the number of blocks might be equal to the number of variables, or the instance might have only one block. In order to avoid this issue and to show the potential of decomposition, we filtered all convex instances from MINLPLib using criterion (26), where |K| is the number of blocks and N is the total number of variables. In MINLPLib, the number of blocks is given by the identifier #Blocks in Hessian of Lagrangian, which is available for each problem. The number of selected instances is 70, and the number of variables varies from 11 to 2720 with an average of 613. In Table 1 we provide more detailed statistics on this set of instances. As termination criteria, the relative gap tolerance was set to 0.0001 and the LP-phase improvement tolerance to 0.01. The master problem and sub-problems were solved to optimality. All computational experiments were performed on a computer with an Intel Core i7-7820HQ 2.9 GHz CPU and 16 GB RAM.
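The block-structure identification described above reduces to computing connected components of the variable-interaction graph. Here is a minimal sketch of that idea, using a simplified input (one set of variable indices per nonlinear expression) rather than a full expression tree; the function name is ours.

```python
# Minimal sketch of Hessian-adjacency-graph block identification:
# link variables appearing in the same nonlinear expression, then take
# connected components as blocks. Linear-only variables become singletons.
import networkx as nx

def find_blocks(n_vars, nonlinear_expr_vars):
    """nonlinear_expr_vars: list of sets of variable indices, one set per
    nonlinear expression of the problem functions h_m."""
    G = nx.Graph()
    G.add_nodes_from(range(n_vars))
    for var_set in nonlinear_expr_vars:
        vs = sorted(var_set)
        for i in range(len(vs)):
            for j in range(i + 1, len(vs)):
                G.add_edge(vs[i], vs[j])
    return [sorted(c) for c in nx.connected_components(G)]

# variables 0,1 share an expression; 2,3 share another; 4 appears only linearly
print(find_blocks(5, [{0, 1}, {2, 3}]))   # -> [[0, 1], [2, 3], [4]]
```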
Effect of line search and fix-and-refine

In order to understand the impact of the line search and of the fix-and-refine procedure described in Algorithm 4, we ran four variants of Algorithm 3:

i. Only projection, i.e. neither line search nor fix-and-refine was performed;
ii. Projection with fix-and-refine, i.e. line search was not performed;
iii. Projection with line search, i.e. fix-and-refine was not performed;
iv. Projection with line search and with fix-and-refine.

For each run, we computed the average number of MIP-solver runs and the average times spent on solving the LP-OA master problems (10), the MIP-OA master problems (8), and all sub-problems. Note that the sub-problem solution time includes the time spent on solving the projection (9), line search (11) and partly-fixed OA (14) sub-problems, and that the NLP time is not presented. Since DECOA can be well parallelized, i.e. all sub-problems can be solved in parallel, we also computed an estimated parallelized sub-problem time, obtained by taking the maximum time needed to solve the sub-problems in each parallel step. This value might be too low, since it assumes that the number of cores equals the number of blocks and does not take the communication overhead into account. Nevertheless, it gives a good estimate of the possible time improvement.

Figure 1 shows that for most instances, the number of MIP runs remains the same regardless of the problem size. Moreover, for big problems, the algorithm needs no more than 2 MIP runs in order to close the gap, and this holds for all variants of the algorithm. The same behavior can also be observed in Fig. 2, which shows that most of the problems were solved with no more than 3 MIP runs regardless of the algorithm variant. This plot also shows that the lowest average number of MIP runs is obtained by running the algorithm with the fix-and-refine procedure; indeed, fix-and-refine helps to solve some problems with fewer MIP runs. However, running the algorithm with fix-and-refine is computationally demanding. This issue is illustrated in Fig. 3, which shows that the sub-problem time for the algorithm with fix-and-refine is the highest. Moreover, this chart shows that, for each variant, the algorithm spends most of its time on solving sub-problems. In order to see the potential of parallelization, we computed the estimated parallelized sub-problem time; the computed estimate gives results lower than the LP time or the MIP time.

From Fig. 3 one can notice that the average times spent on solving LP-OA master problems and MIP-OA master problems are approximately equal. Given this observation and the fact that LP problems are easier to solve than MIP problems, the LP-OA master problems were solved on average more often than the MIP-OA master problems. Solving more LP-OA master problems at the beginning helps to initialize a good outer approximation and, therefore, to reduce the number of MIP runs. Similar gains in the reduction of MIP runs have been achieved in [25]. In contrast to DECOA, [25] proposes to improve the quality of the polyhedral OA with extended formulations, which are based on convexity detection of the constraints.

Comparison to other MINLP solvers

In this subsection we compare the DECOA algorithm with two MINLP solvers which do not make use of the decomposition structure of the problems.
For this purpose, we have chosen the branch-and-bound-based SCIP solver 5.0.1 [16] and the Pyomo-based toolbox MindtPy 0.1.0 [2]. All settings for SCIP were left at their defaults. In order to compare DECOA with OA, for MindtPy we set OA as the solution strategy, with SCIP 5.0.1 as the MIP solver and IPOPT 3.12.8 as the NLP solver. Moreover, the iteration limit for MindtPy was set to 100; all other MindtPy settings were left at their defaults. For the comparisons with both solvers, we used the variant of Algorithm 3 without line search and fix-and-refine, which is the least computationally demanding variant of Algorithm 3, as shown in Fig. 3. The test instances were selected from MINLPLib [33] using condition (26).

Table 1 presents the results for DECOA and SCIP for each instance individually. For each instance, it also lists its statistics, i.e. the problem size N and the average block size N_k after reformulation. For each instance, we measured the total solution time T of the DECOA run. Note that the total time T does not include the time spent on the automatic reformulation described in Sect. 5. T_MIP denotes the time spent on solving MIP problems and N_MIP denotes the number of MIP runs. T_LP and T_NLP denote the times spent on solving LP and NLP problems, respectively. T_sub denotes the time spent on solving sub-problems, i.e. the projection sub-problems (9). T_SCIP denotes the time spent on solving the original problem with SCIP.

In Table 1 we compare the solution times of SCIP and DECOA for each instance individually. However, comparing the solution times of both solvers directly is not entirely fair, since they are implemented in different programming languages, i.e. DECOA in Python and SCIP in C. It is known that Python is slower than C; one reason is that Python is an interpreted language, whereas C is compiled. Table 1 shows that currently, for 9% of the test set, DECOA has a shorter solution time than SCIP. Moreover, for a further 6% of the test set, the solution time is very similar to that of SCIP, i.e. the SCIP time is within 80% of the DECOA time. For almost all problems, T_MIP is very small and T_sub is relatively large. Hence, since all sub-problems can be solved in parallel, there is a clear indication that the running time of DECOA can be significantly reduced, see Fig. 3. From Table 1 one can also conclude that T_LP is high: its average fraction of the total time T is 18%, followed by T_MIP and T_NLP with average fractions of 12% and 7%, respectively. As discussed before, even though LP problems are easier to solve than MIP problems, the number of LP problems solved in the LP phase is higher than the number of MIP problems solved.

Table 2 presents the results for DECOA and OA for each instance individually. For both DECOA and OA, the number of MIP runs N_MIP and the total time T are presented; additionally, for OA, the solver status after finishing the solution process is provided. Table 2 shows that the OA method failed to converge for 20% of the instances, due to either the iteration limit or a solver exception. For some instances, MindtPy failed to close the gap due to infeasibility of the NLP sub-problem, i.e. an infeasible combination of values for the integer variables. The results in Table 2 show that for almost all instances, the number of MIP runs N_MIP for DECOA is smaller than the number of MIP runs for OA.
However, the solution time T for DECOA can be either larger or smaller than the solution time for OA, depending on the number of MIP runs. If the number of MIP runs N_MIP for OA is large, i.e. N_MIP > 10, then for almost all instances the solution time T for DECOA is smaller than the solution time for OA.

Conclusions and future work

This paper introduces a new decomposition-based outer approximation (DECOA) algorithm for solving convex block-separable MINLP problems described in (1). It iteratively solves and refines an outer approximation (OA) problem by generating new supporting hyperplanes. Due to the block-separability of problem (1), the sample points for the supporting hyperplanes are obtained by solving low-dimensional sub-problems. Moreover, the sub-problems can be solved in parallel. The algorithm is designed such that the MIP-OA master problems are solved as few times as possible, since solving them can be computationally demanding.

Four variants of DECOA have been tested on a set of convex MINLP instances. The experiments have shown that, in each case, the average number of MIP runs is small. Moreover, the results show that the average number of MIP runs is independent of the problem size. In addition, the time spent on solving sub-problems is larger than the time spent on solving the LP and MIP master problems. The performance of DECOA has been compared to the branch-and-bound MINLP solver SCIP and to the OA method. Even though DECOA is based on a Python implementation, it can be faster for some (9%) of the instances than an advanced implementation like SCIP; this is probably due to the effect of the decomposition and the fact that DECOA requires fewer MIP runs. The comparison to OA shows that DECOA reduces the number of MIP runs and is more efficient in cases where the problem would otherwise require a high number of MIP runs.

Even though DECOA is clearly defined and proven to converge, there are possibilities to improve its efficiency. It is possible to obtain several solutions from the MIP solver and project them onto the feasible set, which could increase the number of new supporting hyperplanes per iteration; unfortunately, Pyomo does not facilitate working with a set of MIP solution candidates. The numerical results show that the time for solving MIP master problems is small, so reducing the time for solving LP master problems and sub-problems would significantly improve the performance of DECOA. Therefore, it would be interesting to work on reducing the number of iterations during the LP phase and on solving the projection sub-problems (9) faster. The current implementation could also be improved, e.g. by implementing the parallelization, which could reduce the running time of DECOA significantly. The possible advantage of DECOA over branch-and-bound solvers is expected for large-scale problems, which cannot be solved in reasonable time by branch-and-bound; however, this has to be verified by systematic experiments. In the future, we aim to generalize DECOA to solving nonconvex MINLP problems.
9,352
2020-02-20T00:00:00.000
[ "Computer Science" ]
New capability for indirect neutron capture measurements: The DICER instrument at LANSCE

Introduction

The accurate quantification of radiative neutron capture cross sections is essential in various applications such as radiochemical diagnostics, nuclear forensics and nuclear astrophysics. Although several studies have been performed on stable nuclei, and a few on long-lived radionuclides [3], using direct techniques, the measurement of (n,γ) cross sections on short-lived radionuclides is far more challenging due to backgrounds from the decay. Hence, a number of indirect methods have been pursued, and substantial effort has been devoted to quantifying the systematic errors associated with these techniques. The most popular indirect techniques include the surrogate method [4], the γ-ray strength function method [5,6], the Oslo method [7] and the β-Oslo method [8].

Total cross section measurements are less affected by the decay background due to the long sample-to-detector distances, which are typically of the order of tens of meters. From neutron transmission measurements, (n,γ) cross sections can be tightly constrained and even accurately calculated through the Nuclear Statistical Model (NSM) [9]. The technique is presented in detail in Ref. [10]. Briefly, neutron transmission spectra directly provide the level spacing (D_0, the distance between transmission dips), the total width (Γ, the width of a transmission dip) and the neutron resonance width (Γ_n, the depth of a transmission dip). There are resonances for which the radiation width (Γ_γ) is not significantly smaller than Γ_n while, at the same time, other reaction channels are suppressed. In those cases, Γ_γ can be easily calculated from Γ = Γ_n + Γ_γ. Γ_γ distributions are known to have small fluctuations; therefore, only a small number of resonances is needed to calculate the average radiation width (⟨Γ_γ⟩) with sufficient accuracy.

The Device for Indirect Capture Experiments on Radionuclides (DICER) [9,11,12] was designed around the aforementioned technique and is being developed at the Los Alamos Neutron Science Center (LANSCE), where a high neutron flux is delivered. A plethora of radionuclides relevant to radiochemical diagnostics, nuclear forensics, nuclear astrophysics and nuclear data in general [9,11,13] are expected to be studied at DICER. The instrument is designed to study small samples (tens of µg, 0.1-1 mm in diameter) with half-lives of the order of tens of days or longer and level spacings of the order of tens of eV or smaller.

The DICER instrument

DICER is located at the Manuel Lujan Jr. Neutron Scattering Center (flight path 13) at LANSCE. Source-to-detector distances of 31 and 64 m are available; however, only the 31 m station, shown in Fig. 1, will be discussed. A detailed description of the first DICER generation is provided in Ref. [14].

Neutron source

The neutron beam delivered at DICER is produced through spallation by LANSCE's 800 MeV proton beam, pulsed at 20 Hz [15]. The beam impinges on a split, 10 cm diameter, cylindrical tungsten target. Each proton pulse has a 125 ns FWHM and an average intensity of ∼100 µA. The spallation neutrons are moderated in a liquid hydrogen moderator, a process which results in a neutron spectrum that spans from meV to MeV in energy, as shown in Fig. 2.
Collimation system

The collimation system allows DICER to perform sample-in and sample-out measurements simultaneously, unlike traditional neutron transmission measurements, where periodic sample insertion and removal is required. Periodically inserting and removing the sample from the beam involves positioning errors; therefore, the samples used in traditional measurements are significantly larger than the beam size. In summary, the DICER approach allows the measurement of small samples and reduces the measuring time by a factor of two by utilising two non-parallel neutron beam lines, which converge at the same spot on the liquid hydrogen moderator. A brief description of how this is achieved is provided below.

The first element is a cylindrical shell of brass (Rotating Beam Blocker, Fig. 1). At a distance of 14.85 m from the exit of the neutron source, a right rectangular prism made of brass (Binocular Collimator, Fig. 1), 30 cm long and 15 cm wide, is installed. This component serves both as a collimator and as a sample holder and provides two well-defined and narrow beam lines, 1 mm in diameter, which point to the same area on the moderator, as illustrated in Fig. 3. The binocular collimator can house a cylindrical sample envelope 1.5 cm in length and 1 cm in diameter. The positioning of the sample canisters with respect to the binocular collimator is pictured in Fig. 3.

Figure 3. The binocular collimator of DICER allows simultaneous sample-in and sample-out measurements, while also serving as a sample holder.

Finally, the last shaping of the neutron beam takes place at 18.5 m from the neutron moderator. At that position, a rectangular brass collimator (aperture stop, Fig. 1), 30 cm long and 15 cm wide, is located. The aperture stop cleans up the beam penumbra from the sample collimator and ensures two well-defined beam spots at the detector position.

Detectors

The standard detection system of DICER consists of two 6Li-glass disks, 10 cm in diameter and of various thicknesses (1, 2, 4, 6.3, 12.7 mm), coupled to dual photomultipliers (PMTs). The PMTs are oriented perpendicular to the neutron beam propagation and hence do not interact with the beam, which helps minimize backgrounds. The two detectors are installed 31 m from the neutron source and are shown in Fig. 1.

Production of radioactive material at LANSCE and neutronic considerations: the 88Zr case

The DICER approach relies on the synergy between two LANSCE facilities: the Manuel Lujan Jr. Neutron Scattering Center and the Isotope Production Facility (IPF) [16,17]. A demonstration of this collaboration is the production of the 88Zr radionuclide. Briefly, 29 g of a cylindrical yttrium metal target, 2.90 mm in length and 46 mm in diameter, encapsulated in an aluminum holder, was irradiated at the IPF. The target was irradiated for a total of ∼9 hours at an average beam current of 96.1 µA. After cooling the irradiated material for 82 days, the target was disassembled and dissolved in a hot-cell environment in 150 mL of 6 mol/L HCl by slow addition of 5-10 mL portions. About 200 mCi (8 GBq) of 88Zr were separated and diluted in a 1.9 mol/L DCl solution. Finally, the solution was evaporated almost to dryness and washed with 10 mL of 6 mol/L HCl. This solution was transferred into a 10 mL v-vial; the sample is shown in Fig. 4. The successful yttrium irradiation and 88Zr recovery were confirmed by means of gamma-ray spectroscopy, using a High Purity Germanium (HPGe) detector and a small aliquot (25 µL) of the separated material.
The aliquot was diluted in 10 mL of 1 mol/L HCl and was then placed in front of an HPGe detector. The recorded spectrum, shown in Fig. 5, illustrates the successful production and separation of 88Zr: the 392.7 keV photo-peak from 88Zr and the 898.7 and 1836.8 keV photo-peaks from the 88Y daughter dominate the spectrum.

Since DICER measures all neutron reactions that take place during a measurement, it has to be ensured that the chemical form/solution of the sample is fairly transparent to neutrons; in other words, neutron interactions with materials other than the one of interest (88Zr in this example) must be minimized. Hydrogen has a high probability of scattering neutrons and is therefore appreciably non-transparent to neutrons. To mitigate this effect, HCl was replaced with DCl: the 88Zr + HCl sample was dried once again, and the same procedure described above was used to produce 88Zr in DCl solution. As shown in Fig. 6, the transmission through DCl, for a 1.4 mol/L, 1 cm long and 1.2 mm diameter cylindrical sample, ranges between 85-95% in the region where a 88Zr resonance is expected. For this calculation, (n,tot) cross sections from the ENDF/B-VIII.0 library [18] were used for 2H, 35Cl and 37Cl.

Performance overview

DICER has been undergoing commissioning since the autumn of 2019 and, although the commissioning phase will finish at the end of 2022, many stable nuclei have already been studied, such as 147,149Sm, 191,193Ir, 95Mo, 209Bi, 197Au, natCd and natGd. To demonstrate the good understanding of the new device and the reproduction of well-known reactions, Fig. 7 shows a satisfactory reproduction of DICER data using ENDF/B-VIII.0 resonance parameters.

Conclusion

A new instrument at flight path 13 of the Manuel Lujan Jr. Neutron Scattering Center of LANSCE is being developed to study indirect neutron capture on short-lived radionuclides through neutron transmission measurements and resonance analysis. The DICER concept is based on the synergy between experimental facilities at LANSCE, such as the production of radioactive samples at the IPF and their neutron irradiation at the Manuel Lujan Jr. Neutron Scattering Center. Great effort has been made to design and precisely align the instrument, as well as to develop an efficient and reliable data reduction scheme. Both efforts led to a deep understanding of the instrument's performance, which was demonstrated by the satisfactory reproduction of DICER data from evaluated resonance parameters [14]. Finally, DICER is getting ready to perform its first measurements on radioactive samples, 88Zr and 88Y.
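The solvent-transparency estimate quoted above follows from the standard attenuation law T(E) = exp(-Σ_i n_i σ_i(E)). The sketch below evaluates this for a toy DCl column; the areal densities and cross sections are illustrative placeholders, not the evaluated ENDF/B-VIII.0 values used for Fig. 6.

```python
# Minimal sketch of a neutron-transparency estimate through a solvent:
# T = exp(-sum_i n_i * sigma_i), with n_i in atoms/barn and sigma_i in barns.
import numpy as np

def transmission(areal_densities, cross_sections):
    """areal_densities: dict nuclide -> atoms/barn;
    cross_sections: dict nuclide -> sigma_tot in barns (at one energy)."""
    exponent = sum(n * cross_sections[nuc] for nuc, n in areal_densities.items())
    return np.exp(-exponent)

# toy 1 cm DCl column: deuterium and chlorine areal densities (atoms/barn)
n_i = {"2H": 1.7e-3, "35Cl": 1.3e-3, "37Cl": 4.0e-4}
sigma = {"2H": 3.4, "35Cl": 21.0, "37Cl": 4.0}   # rough sigma_tot, barns
print(f"T = {transmission(n_i, sigma):.3f}")
```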
2,181.2
2022-01-01T00:00:00.000
[ "Physics" ]
Meter-baseline tests of sterile neutrinos at Daya Bay

We explore the sensitivity of an experiment at the Daya Bay site, with a point radioactive source and a few-meter baseline, to neutrino oscillations involving one or more eV-mass sterile neutrinos. We find that within a year, the entire 3+2 and 1+3+1 parameter space preferred by global fits can be excluded at the 3σ level, and if an oscillation signal is found, the 3+1 and 3+2 scenarios can be distinguished from each other at more than the 3σ level, provided one of the sterile neutrinos is lighter than 0.5 eV.

Introduction. The standard three-neutrino (3ν) picture has been successful in explaining most oscillation data. However, data from the Liquid Scintillator Neutrino Detector (LSND) experiment [1], when interpreted as arising from ν̄µ → ν̄e oscillations, indicate a deviation from the simple 3ν picture. The Mini-Booster Neutrino Experiment (MiniBooNE) [2] provides supporting evidence for the LSND result that oscillations involving an eV-mass sterile neutrino may be at work. Additional support may be found in an upward revision of the estimated reactor ν̄e flux yield [3]. The fact that short-baseline (SBL) reactor neutrino experiments do not detect the 3% larger flux (via a 7% larger event rate) could be explained as a consequence of oscillations to sterile states. Popular scenarios that are consistent with the relevant data have either one sterile neutrino, with a 3+1 mass spectrum (such that the nearly degenerate triplet of mass eigenstates is lighter than the remaining state), or 2 sterile neutrinos [4,5]. The five-neutrino (5ν) case has 2 viable spectra: a 3+2 spectrum in which the triplet is lighter than both sterile neutrinos, and a 1+3+1 spectrum in which one sterile neutrino is lighter than the triplet and one is heavier. In all cases, the sterile neutrinos mix little with the active neutrinos.

Recently, it was suggested that a ten-kilocurie-scale 144Ce-144Pr β-decay source could be placed inside a large liquid scintillator detector to study eV sterile neutrino oscillations on baselines of a few meters with 1.8-3.3 MeV neutrinos [6]. Distinct virtues of this technique are (1) that with a point-like source, an oscillation signature can be demonstrated as a function of both energy and baseline, (2) the short baseline may be easily adjustable, (3) existing detectors can be utilized, and (4) antineutrino source activity is reduced relative to that of neutrino sources previously used for the calibration of low-energy radiochemical solar neutrino experiments, since the inverse beta-decay cross section is higher than the neutrino-electron scattering cross section. Clear technical challenges are the feasibility of constructing such an intense radioactive source and of engineering suitable ultra-pure shielding of the source inside the detector. For a decisive measurement, Ref. [7] considered the possibility of an experiment at the Daya Bay site with a 500 kCi (1.85 × 10^16 Bq) source. The configuration of the 4 detectors in the Far Hall at Daya Bay makes it possible to place the source outside the detectors, thus circumventing one of the technical issues. We treat the 500 kCi source as point-like, although it will have a finite spatial extent depending on the freshness of the fuel being used for its production, the production and transportation time, as well as the final density of cerium oxide, which is limited to about 4.5 g/cm³.
This approximation is valid since the size of the source will be small compared to the 6.5 m oscillation length of interest. In this Letter we show that the parameter space preferred by global fits in the 3+1, 3+2 and 1+3+1 scenarios will be stringently tested by the proposed multi-meter-baseline ν̄e disappearance measurement at Daya Bay. For sterile neutrino masses below 0.5 eV, such a measurement can even distinguish between the 3+1 and 3+2 scenarios at the 3σ level. This enhanced sensitivity arises because knowledge of the ν_e fraction of the ν_4 and ν_5 mass eigenstates breaks the degeneracy in the sterile mixings to ν_e and ν_µ, both of which are required to explain the anomalous SBL data.

Sterile neutrino oscillations. For vacuum oscillations of MeV neutrinos from a radioactive source, the (CP phase-independent) ν_e and ν̄_e survival probability at distance L is

P_ee = 1 − 4 Σ_{j<k} |U_ej|² |U_ek|² sin² Δ_jk,  with Δ_jk ≡ δm²_jk L / (4E).

Figure 1. Left: The energy-averaged ν̄e survival probability as a function of distance for 3+1 and 3+2 sample points; U_e4 = 0.16 (giving a ∼10% oscillation amplitude), and in the 3+2 scenario, U_e5 is also 0.16. Right: Event distributions for the chosen radioactive source-detector configuration; the solid and dashed curves show the cases of no active-sterile oscillations [7] and of oscillations with δm² = 1 eV² and a 10% oscillation amplitude, respectively.

For 3 active and 2 sterile neutrinos, P_ee depends only on the four parameters δm²_41, δm²_51, |U_e4| and |U_e5| via

P^{5ν}_ee ≈ 1 − 4 (1 − |U_e4|² − |U_e5|²) (|U_e4|² sin² Δ_41 + |U_e5|² sin² Δ_51).

Since P^{5ν}_ee is insensitive to the signs of Δ_41 and Δ_51, ν_e disappearance data cannot distinguish between the 3+2 and 1+3+1 spectra for identical mixing-matrix elements. (In principle, the spectra can be distinguished if the suppressed but nonzero δm²_54 contribution to the right-hand side, −4|U_e4|²|U_e5|² sin² Δ_54, is included.) In the left panel of Fig. 1, we show the ν_e survival probability for several 3+1 and 3+2 sample points. For the sake of illustration, we have used somewhat large values of U_e4 and U_e5. The significant variation in the survival probabilities over the first few meters for different (δm²_41, δm²_51) choices reveals the strength of the method. For all curves in Fig. 1, P_ee is convolved with the ν̄_e energy spectrum from the radioactive source.

Experimental set-up and procedure. The 500 kCi radioactive source at Daya Bay can be placed so that the 4 cylindrical detectors collect ν̄_e data with baselines from 1 to 8 meters. Several possible source locations have been studied, each giving a different spatial coverage of P_ee(L). We choose "Point B" in the jargon of Ref. [7], which is located halfway between two of the detectors and samples 2 principal baselines. It provides superior sensitivity for δm² ∼ 1 eV², with an oscillation length of about 6.5 meters. The no-oscillation signal event rate is about 38,000 in one year, after accounting for the 66.3% decrease in source activity over a one-year period [7]. Event distributions as a function of baseline are shown in the right panel of Fig. 1; the detector energy and position resolutions are 9%/√E(MeV) and 15 cm, respectively [7]. Depending on the energy window used, the reactor neutrino background is expected to lie between 22,000 and 32,000 events per year. However, this large background can be controlled because its shape will be known.
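For a numerical feel of these survival probabilities, the sketch below evaluates P_ee for 3+1 and 3+2 sample points over the 1-8 m baselines discussed in the text, using the usual 1.267 δm²[eV²] L[m]/E[MeV] phase convention. The sample parameter values are illustrative choices.

```python
# Minimal sketch of the vacuum survival probability for 3+1 and 3+2
# (the small delta_m2_54 term is kept for completeness).
import numpy as np

def P_ee(L, E, dm2_41, Ue4, dm2_51=0.0, Ue5=0.0):
    """L in m, E in MeV, mass-squared splittings in eV^2."""
    d41 = 1.267 * dm2_41 * L / E
    d51 = 1.267 * dm2_51 * L / E
    d54 = 1.267 * (dm2_51 - dm2_41) * L / E
    active = 1.0 - Ue4**2 - Ue5**2
    return (1.0
            - 4.0 * active * (Ue4**2 * np.sin(d41)**2 + Ue5**2 * np.sin(d51)**2)
            - 4.0 * Ue4**2 * Ue5**2 * np.sin(d54)**2)

L = np.linspace(1.0, 8.0, 8)                          # meter baselines
print(P_ee(L, 2.5, dm2_41=1.0, Ue4=0.16))             # a 3+1 sample point
print(P_ee(L, 2.5, 1.0, 0.16, dm2_51=0.5, Ue5=0.16))  # a 3+2 sample point
```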
We take the detectors to be identical and adopt the following χ² for our analysis [7]:

χ² = Σ_{i,j} (N^ex_{i,j} − N^th_{i,j})² / [N^ex_{i,j} (1 + σ_b² N^ex_{i,j})] + (α_s/σ_s)² + (α_r/σ_r)²,  (3)

where N^ex_{i,j} is a simulated dataset and N^th_{i,j} is the theoretical expectation for a given set of oscillation parameters, and i and j run over position and visible energy bins, respectively. σ_s = 0.01 and σ_r = 0.01 are the normalization uncertainties in the signal and reactor background fluxes, respectively, and σ_b = 0.02 is the bin-to-bin uncertainty [7]. α_s and α_r are nuisance parameters that are allowed to float. N^th is given by

N^th_{i,j} = (1 + α_s) S_{i,j} + (1 + α_r) R_{i,j},

where S and R (= 28,000/year) are the number of signal events from the source and the number of reactor background events, respectively. The number of signal events (in all 4 detectors) with sterile neutrino oscillations is obtained by scaling the number of events for the 3ν case with the survival probability, where Δn/ΔE_vis and Δn/Δx are the normalized event distributions binned in visible energy and position, respectively, and N_tot = 38,000 is the total number of events for the 3ν case in one year. The positron's energy in an inverse neutron β-decay event is E_ν − (m_n − m_p); subsequent pair annihilation in the scintillator produces visible energy E_vis = E_{e+} + m_e.

3+1. We checked that in the 3+1 scenario our procedure yields a 95% confidence level (C.L.) sensitivity that is comparable to that of Ref. [7] for δm²_41 < 2 eV². The oscillation amplitude that fits the global SBL data is given by sin² 2θ_SBL = 4|U_e4|²|U_µ4|². Daya Bay data could push |U_e4| down far enough that the value of |U_µ4| needed to obtain an amplitude that explains the SBL data could conflict with the current bound on |U_µ4| shown in the left panel of Fig. 2. Since a meter-baseline measurement at Daya Bay will be independent of the earlier data, it is reasonable to impose the constraint on U_µ4 as a prior. Then, Daya Bay can rule out most of the allowed region from a fit to LSND and MiniBooNE antineutrino data; see the right panel of Fig. 2.

3+2 and 1+3+1. We first consider Daya Bay's sensitivity to the 5ν scenario without recourse to specific points, models or fits. We employ a grid in the (δm²_41, δm²_51, |U_e4|, |U_e5|) parameter space, place a prior on the size of the mixing, min(|U_e4|, |U_e5|) = |U|_min in steps of size 0.01 from 0.10 to 0.15, and suppose a null result at Daya Bay. The 95% C.L. sensitivity in the (δm²_41, δm²_51) plane is shown in Fig. 3. As mentioned before, P_ee does not depend on the signs of the mass-squared differences, so the results of Fig. 3 apply to both the 3+2 and 1+3+1 spectra.

We now specialize to 5ν models that are consistent with global neutrino data. In Table I, we display Daya Bay's sensitivity to several best-fit points to SBL data in the 5ν case, assuming that no oscillations are seen in the Daya Bay dataset. These points would be completely excluded by Daya Bay because of their sizable U_e4 and U_e5. To cover the rest of the parameter space, we use the globally allowed regions from an updated fit to the datasets listed in Ref. [5] in conjunction with data from the NOMAD [12] and CDHS [13] experiments [14]. The shaded areas of Fig. 4 are the globally allowed regions at 3σ. We see that at least one δm² is close to 1 eV², so as to explain the SBL data. All mixing parameters other than δm²_41 and δm²_51 are marginalized over and assume their best-fit values. As the global fits favor significant ν̄µ − ν̄e transitions, the mixing parameters tend to be large enough to be testable at Daya Bay. Figure 4 shows that Daya Bay can exclude the 3+2 and 1+3+1 scenarios as an explanation of the LSND/MiniBooNE anomaly at 3σ.

3+1 or 3+2?
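To illustrate the mechanics of a χ² of this form, the sketch below minimizes over the two nuisance pulls for one toy pseudo-experiment. The flat binning and background shape are placeholders, not the analysis of Ref. [7]; only the uncertainty values σ_s, σ_r, σ_b and the yearly totals are taken from the text.

```python
# Minimal sketch of a pull-term chi^2 with signal and reactor
# normalization nuisances, fitted on one toy dataset.
import numpy as np
from scipy.optimize import minimize

def chi2(alphas, N_ex, S, R, sig_s=0.01, sig_r=0.01, sig_b=0.02):
    a_s, a_r = alphas
    N_th = (1.0 + a_s) * S + (1.0 + a_r) * R
    stat = (N_ex - N_th) ** 2 / (N_ex + (sig_b * N_ex) ** 2)
    return stat.sum() + (a_s / sig_s) ** 2 + (a_r / sig_r) ** 2

rng = np.random.default_rng(1)
S = np.full(20, 38000 / 20.0)            # flat toy signal over 20 bins
R = np.full(20, 28000 / 20.0)            # flat toy reactor background
N_ex = rng.poisson(S + R).astype(float)  # one pseudo-experiment
best = minimize(chi2, x0=[0.0, 0.0], args=(N_ex, S, R))
print(best.x, best.fun)                  # fitted pulls and chi^2_min
```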
So far we have demonstrated that a null result at Daya Bay can significantly constrain sterile neutrinos. We now entertain the possibility that future data confirm their existence. Then, a pressing issue will be to ascertain whether the 3+1 or the 3+2 scenario is operative. Since scenarios with more eigenstates should be able to mimic those with fewer eigenstates, a good test of Daya Bay's discriminatory power is to fit 3+2 points to data simulated for 3+1.

Figure 5. The degree to which Daya Bay can discriminate between the 3+1 and 3+2 scenarios. We simulate an oscillation signal for points in the 99% C.L. region favored by LSND and MiniBooNE that are consistent with the 99% C.L. bound on |U_µ4| (see Fig. 2), and fit the spectrum from points in the 3σ region of the 3+2 parameter space (see the left panel of Fig. 4) to the simulated data. A more than 3σ discrimination is possible for δm²_41 < 0.5 eV².

Assume that Daya Bay collects a dataset that is well described by a point in the 3+1 parameter space. Then, in principle, there is a 3+2 mixing scenario that gives the same oscillation pattern. However, this 3+2 point may be constrained by other oscillation data. To account for this possibility, we fit all the globally allowed 3+2 parameters to the 3+1 dataset and check if a good fit exists. The technical procedure is as follows. For every point on a grid in the (δm²_41, θ_SBL, |U_e4|) parameter space that lies within the 99% C.L. allowed region of the right panel of Fig. 2 and is also consistent with the 99% C.L. bound on |U_µ4| in the left panel of Fig. 2, we simulate a dataset N^ex_{i,j}. We then fit points in the 3+2 parameter space that are allowed at 3σ (shown in the left panel of Fig. 4) to this dataset (using Eq. 3), and find the 3+2 point with the minimum χ² corresponding to that (δm²_41, θ_SBL, |U_e4|) point. For a given δm²_41, we repeat the procedure for other values of (θ_SBL, |U_e4|) so as to find the global χ²_min for each δm²_41. Note that the best-fit 3+2 value of δm²_41 need not be the same as the value for which the 3+1 data was simulated. We plot χ²_min versus δm²_41 in Fig. 5. The discrimination between the 3+1 and 3+2 scenarios is better for small δm²_41. This is because, for small δm²_41, the deviation of the 3+1 spectrum from the 3ν spectrum is small in the meter-baseline experiment, which is harder to replicate with a 3+2 point that must also reproduce the anomalous SBL data.
3,235.4
2013-02-22T00:00:00.000
[ "Physics" ]
Improved Microstrip Antenna with HIS Elements and FSS Superstrate for 2.4 GHz Band Applications

This research presents a microstrip antenna integrated with high-impedance surface (HIS) elements and a modified frequency selective surface (FSS) superstrate for 2.4 GHz band applications. The electromagnetic band gap (EBG) structure was utilized in the fabrication of both the HIS and FSS structures. An FR-4 substrate of 120 mm × 120 mm × 0.8 mm (W × L × T) with a dielectric constant of 4.3 was used in the antenna design. In the antenna development, the HIS elemental structure was mounted onto the antenna substrate around the radiating patch to suppress the surface wave, and the modified FSS superstrate was suspended 20 mm above the radiating patch to improve the directivity. Simulations were carried out to determine the optimal dimensions of the components, and the antenna prototype was subsequently fabricated and tested. The simulated and measured results were in good agreement. The experimental results revealed that the proposed integrated antenna (i.e., the microstrip antenna with the HIS and FSS structures) outperformed the conventional microstrip antenna with regard to the reflection coefficient, the radiation pattern, gain, and radiation efficiency. Specifically, the proposed antenna achieved a measured antenna gain of 10.14 dBi at 2.45 GHz and a reflection coefficient of less than −10 dB, and was operable in the 2.39-2.51 GHz frequency range.

Introduction

Microstrip antennas are commonly used in wireless communications devices for their low-profile, low-cost, and lightweight characteristics. Despite these benefits, this antenna type suffers from the electromagnetic (EM) surface wave that occurs on the substrate. Specifically, the surface wave induces minor lobes and causes the EM wave to radiate in directions different from that of the radiation source. In addition, the surface wave contributes to the degradation of the antenna performance and gain. Likewise, the surface wave increases the cross-polarization of the antenna, thereby restricting the antenna's usefulness [1].
To address these issues, a metamaterial could be integrated into the microstrip antenna [2]. A metamaterial is an engineered material whose properties do not occur in nature, for example, a double-negative material, a left-handed material, or a zero-refractive-index material [3]. Another metamaterial suitable for electromagnetic applications is the electromagnetic band gap (EBG) structure [4]. Typically, EBG structures are constructed from a periodic arrangement of dielectric materials and metallic conductors and can be categorized into three groups according to their geometrical configuration: the 3D volumetric structure, the 2D planar surface, and the 1D transmission line. Most suitable for integration with the microstrip antenna is the 2D planar-surface EBG structure, which is typically fabricated on a printed circuit board (PCB). A typical 2D EBG structure consists of an upper periodic sheet of metallic conductors parallel to a lower metallic sheet, with the dielectric substrate in between. In this research, the 2D EBG structures were of two configurations, with and without a vertical via, referred to respectively as the mushroom-like EBG (or HIS) and the uniplanar EBG (or FSS superstrate). Despite the ease of fabrication associated with the uniplanar EBG, at the same frequency the mushroom-like EBG is smaller in size and has a wider bandwidth [5]. The design of the mushroom-like EBG structure was based on the high-impedance surface (HIS) principle [6], and the structure was incorporated onto the antenna substrate to suppress the surface wave [7]. Meanwhile, the design of the uniplanar EBG structure was based on frequency selective surface (FSS) technology [8] and utilized as the superstrate layer suspended above the radiating microstrip patch antenna. The FSS structure enhances the radiation aperture of the original radiating source to achieve improved directivity.
The HIS structure of the uniplanar EBG structure is presented in [9]. Normally, a microstrip antenna with a suspended artificial magnetic conductor (AMC) suffers from fabrication complexity. In [10, 11], the AMC was mounted around the radiation source, and the FSS superstrate could be fabricated easily; however, these structures are bulky. In [12], a microstrip antenna (MA) with an interdigital-capacitance FSS superstrate achieved high directivity with a compact FSS size, but the interdigital-capacitance FSS superstrate requires an additional design step. A dielectric resonator antenna (DRA) with a superstrate and a reflector plane is presented in [13]. This structure has a low back lobe and a high copolarization-to-cross-polarization ratio in the E and H planes, but the antenna structure is not low profile. In this research, the mushroom-like EBG structure was mounted around the radiation source to suppress the surface wave and thereby increase the directivity at the resonant frequency, while the FSS superstrate-layer structure was kept compact and easy to fabricate. Moreover, a low-profile antenna can be achieved by using the FSS, resulting in cost minimization. For high directivity, the refractive index of the superstrate should be zero or close to zero [14, 15]; the refractive index seen from the radiation source in the interior medium is then close to zero, and the waves incident from the interior medium onto the exterior medium exit perpendicular to the medium surface, according to Snell's law [16]. Specifically, as the EM wave travels through the superstrate layer, the wave is deliberately directed by the superstrate so that it propagates in parallel into free space, thereby achieving a high-directivity antenna [17]. In [18], a woodpile EBG structure was used as the superstrate layer in place of the conventional dielectric layer. The woodpile EBG structure has a complete band gap; it helps direct the radiation in the desired direction and lessens EM wave propagation in other directions. Nonetheless, the woodpile EBG is afflicted with fabrication challenges and requires a specific dielectric constant, rendering FSS technology a good candidate for the superstrate, since it can be fabricated on a PCB and thereby yields a low-profile antenna structure.

In this research, the HIS and FSS structures are designed and incorporated to achieve a compact, high-directivity antenna. In general, the conventional FSS superstrate [19] requires multiple layers to achieve the resonant frequency for a high-directivity antenna, resulting in a relatively bulky antenna structure. In contrast, this research deploys a single-layer superstrate based on optimally designed HIS and FSS structures, in which the HIS structure serves as an artificial ground plane and thus reduces the distance between the FSS superstrate and the radiation source, subsequently resulting in a low-profile antenna.

This paper is organized as follows: the antenna with FSS and HIS is presented in Section 2, which describes how the wave from the radiation source is directed and how the HIS works. Section 3 presents the design of the FSS and HIS; the modified FSS cell is introduced to reduce the cell size at the resonant frequency, and the design of the HIS as an artificial ground plane is described. In Section 4, the effects of the FSS and HIS on the radiation pattern and gain of the microstrip antenna are described, and the measured results are presented.
Microstrip Antenna with HIS and FSS In this research, a probe-fed microstrip antenna (PFMA) was utilized as the radiation source due to its ease of integration with the mushroom-like EBG structure (i.e., the HIS structure) on the same substrate layer. In Figure 1(a), the ground plane of the PFMA acts as the lower plane of the 2D superstrate-layer EBG structure. Figure 1(b) depicts the top view of the FSS superstrate suspended above the antenna ground plane at a distance h. To achieve a low-profile structure, the superstrate layer should be thin and thus of a single layer. In this research, the superstrate layer was therefore redesigned using the transmission matrix in (1), where r and t are, respectively, the reflection and transmission coefficients of the single-layer superstrate. The phase change of the wave propagating from the antenna to the superstrate layer can be determined by (2). The total transmission matrix then follows from (3) through (5), where φ = 2k₀h. In general, the distance (space) between the antenna and the superstrate is approximately one-half wavelength (h ≈ Δλ/2), where Δλ = c/Δf_r and Δf_r is the difference between adjacent resonant frequencies.

Figure 2 illustrates a microstrip antenna with HIS structures. The HIS elemental structure acts as a parallel LC equivalent circuit [20, 21]. The surface impedance Z_s can be calculated using (6). At the resonant frequency ω₀ = 1/√(LC), the surface impedance Z_s becomes very large, ideally rising to infinity, so the surface wave can no longer propagate along the substrate. The reflection phase ψ of the HIS is calculated by (7). In this research, the frequency bandwidth of interest lies between +90° and −90° of the reflection phase; at the resonant frequency, the reflection phase becomes 0°. The bandwidth (BW) of the antenna with the HIS elements can be calculated by (8). Meanwhile, for the antenna integrated with both the HIS and FSS structures, the respective parameters can be calculated by (9) and (10) [11], where φ_r and ψ_r are, respectively, the reflection phases of the FSS superstrate and the HIS structure. In (9), given a reflection coefficient of 1, the antenna would achieve a very high directivity (D_max). In addition, the appropriate distance between the antenna and the FSS superstrate layer is governed by φ_r and ψ_r, as expressed in (10).
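Equations (6)-(8) are not reproduced in this extraction; the sketch below uses the standard parallel-LC description of a mushroom-type HIS, with textbook estimates for the sheet inductance and fringe capacitance. Those estimates are assumptions of this illustration, not the authors' design equations, so the numbers are indicative only:

```python
# Illustrative parallel-LC model of a mushroom-type HIS. Dimensions are
# this paper's values; the L and C estimates are textbook approximations.
import numpy as np

eps0, mu0 = 8.854e-12, 4e-7 * np.pi
eta0 = np.sqrt(mu0 / eps0)          # free-space impedance, ~377 ohm
eps_r, t = 4.3, 0.8e-3              # FR-4 substrate
W, g = 15e-3, 1.5e-3                # HIS patch width (w3) and gap (g)

L = mu0 * t                                                    # sheet inductance
C = W * eps0 * (1 + eps_r) / np.pi * np.arccosh((W + g) / g)   # fringe capacitance

f0 = 1 / (2 * np.pi * np.sqrt(L * C))
bw = np.sqrt(L / C) / eta0          # fractional bandwidth estimate, Eq. (8)-like
print(f"resonance ~ {f0/1e9:.2f} GHz, fractional BW ~ {bw:.2%}")

# Reflection phase psi = angle[(Zs - eta0)/(Zs + eta0)]: 0 deg at
# resonance, +/-90 deg at the band edges, as described in the text.
f = np.linspace(0.5e9, 12e9, 2001)
w = 2 * np.pi * f
Zs = 1j * w * L / (1 - w**2 * L * C)               # parallel-LC impedance
psi = np.angle((Zs - eta0) / (Zs + eta0), deg=True)
band = f[np.abs(psi) < 90]
print(f"+/-90 deg phase band: {band[0]/1e9:.2f}-{band[-1]/1e9:.2f} GHz")
```

With the paper's dimensions, this crude model resonates well above 2.45 GHz, which underlines why the authors relied on full-wave simulation to fix w₃ and g rather than on closed-form estimates.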
Design of the FSS Superstrate and HIS Elements 3.1. The FSS Superstrate Layer. The proposed superstrate layer is a 2D periodic FSS structure with loop elements. Unlike other elemental types, for example strip and patch elements, loop elements offer a symmetric shape and ease of design. The loop-element FSS is typically of square shape with concentric elemental loops. In addition, fabrication of the superstrate layer is straightforward with the loop-element FSS structure, given the target reflection coefficient for the desired frequency range. In this research, individual FSS unit cells were simulated using CST Microwave Studio [22] to determine the optimal parameters that achieve the target reflection coefficient and total transmission coefficient for the FSS superstrate structure. Figure 3 illustrates the simulated magnitude and phase of the reflection coefficient of a square-loop FSS element for various loop lengths (d₁ = d₂). The simulations revealed that the magnitude and phase were governed by the loop lengths. Specifically, at the center frequency of 2.45 GHz and a loop length (d₁, d₂) of 20 mm, the magnitude and phase were −0.42 dB and 166°, respectively.

Notwithstanding, the square-shaped loop element suffers from frequency-tuning and large-size limitations. In this research, the FSS cells were thus further modified using the fractal technique [23], and simulations were carried out. Figure 4 illustrates the schematic of the modified FSS cell and, as an example, its simulated magnitude and phase of the reflection coefficient under various w₁, where w₁ is the fractal width, w₂ is the edge width, d₁ is the modified loop length, d₂ is the concentric loop length, and d₃ is the load.

The simulations indicated that the optimal dimensions of the modified FSS cells were 15.25 mm for d₁, 5.75 mm for d₂, 5.5 mm for d₃, 5 mm for w₁, and 2 mm for w₂. Given the optimal dimensions, the magnitude and phase of the reflection coefficient of the modified FSS cells at the center frequency of 2.45 GHz were −0.0057 dB and 157.63°, respectively. By comparison, the modified FSS cells were smaller than the square-loop FSS cell, while the reflection coefficient magnitude of the modified FSS cells (−0.0057 dB) was larger than that of the square-loop FSS (−0.42 dB); the microstrip antenna integrated with the modified FSS superstrate layer could thus achieve higher directivity with a narrower distance between the antenna and the superstrate.

Figure 5 depicts, as an example, the magnitude of the total transmission coefficient of the modified FSS superstrate layer for various w₁. As the total transmission coefficient approaches 1 (i.e., 0 dB), the propagation of EM waves from the antenna becomes perpendicular to the superstrate layer. Since the waves traveling through the superstrate layer become parallel upon entering free space, higher directivity can be realized.

Meanwhile, Figure 6 compares the magnitude and phase of the transmission coefficient of the modified FSS in the presence and absence of the load (d₃). It is clear that the load significantly enhances the capacitance inside the FSS loop, so a strong resonance can be achieved at a lower frequency. In addition, the modified FSS structure exhibited no loss, giving rise to only a small mismatch [24].
3.2. The HIS Elemental Structure. In this research, the HIS elemental structure was fabricated from EBG square cells with a vertical via, thereby resembling a mushroom. The reflection phase of interest was between −90° and 90° at the center frequency of 2.45 GHz. Figure 7 illustrates, as an example, the reflection phase of the HIS element under various w₃, where w₃ is the HIS cell width and g is the gap distance. The gap distance was held constant, since the fringe capacitance associated with the parallel-LC equivalent circuit of the HIS structure varies only minimally with g and hence with the resulting resonant frequency; a gap distance of g = 1.5 mm was deliberately selected for ease of fabrication and eventual compactness. The findings revealed that the reflection phase decreased as w₃ increased, since the HIS unit-cell size increased. Given the center frequency of 2.45 GHz, the simulated optimal HIS cell width (w₃) was 15 mm.

Effects of FSS and HIS on the Microstrip Antenna Performance This section discusses the design of the HIS-FSS-integrated microstrip antenna fabricated on an FR-4 substrate with a dielectric constant of ε_r = 4.3, given the center (target) frequency of 2.45 GHz. In addition, the effects of integrating the HIS and FSS on the antenna performance, with regard to |S₁₁|, the radiation pattern, gain, and radiation efficiency, were determined. Figures 8(a) and 8(b), respectively, illustrate the microstrip antennas with only the FSS superstrate and with both the FSS and HIS structures. The incorporation of the HIS elemental structure resulted in a more compact antenna structure (Figure 8(b)) vis-à-vis that in the absence of HIS (Figure 8(a)). According to (10) and Figures 4 and 7, the phases of the reflection coefficient at 2.45 GHz for the modified FSS and HIS were 157.63° and 0.53°, respectively. The resulting distance between the antenna and the superstrate was h = 0.22λ (at 2.45 GHz), or about 26 mm.

In this research, the HIS structure was introduced to suppress the surface wave of the microstrip antenna, with the HIS elements mounted around the radiating patch of the antenna. The antenna evolution is illustrated in Figures 9(a)-9(c), beginning with the radiating patch (Figure 9(a)), the radiating patch enclosed by the HIS elements (Figure 9(b)), and the modified FSS superstrate structure (Figure 9(c)). Specifically, the size of the antenna (L), given the 2.45 GHz operating frequency, was 120 × 120 mm, and that of the radiating patch (W) was 29 × 29 mm (Figure 9(a)).

4.1. The Optimal High-Directivity Antenna with the FSS Superstrate Layer. In this step, the modified FSS parameters were varied to find the optimal FSS cell dimensions yielding the highest antenna directivity. Figure 10 illustrates, as an example, the simulation results with regard to the directivity and |S₁₁| under various fractal widths (w₁). In Figure 10, given w₁ of 5.62 mm, the antenna exhibited the lowest |S₁₁|, whereas the antenna directivity was highest for w₁ of 4.75 mm. In addition, starting from the initial antenna-superstrate distance of 26 mm, the distance h was further varied to maximize the antenna directivity at the target operating frequency of 2.45 GHz. In Figure 11, the highest antenna directivity of 11.10 dBi was achieved at an antenna-superstrate distance h of 20 mm.
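Equation (10) itself is not reproduced in this extraction, but for a Fabry-Perot-type cavity the usual resonance condition relating the two reflection phases to the spacing is h = (φ_r + ψ_r)λ/(4π) + Nλ/2. Taking that form as an assumption, the paper's phase values indeed reproduce the quoted h ≈ 0.22λ ≈ 26 mm:

```python
# Check of the superstrate spacing using the standard Fabry-Perot
# resonance condition h = (phi_r + psi_r) * lam / (4*pi) + N*lam/2.
# The form of Eq. (10) is assumed; the phases are the reported values.
import numpy as np

c = 299_792_458.0
lam = c / 2.45e9                    # ~122.4 mm

phi_r = np.deg2rad(157.63)          # modified-FSS reflection phase
psi_r = np.deg2rad(0.53)            # HIS (artificial ground) phase

h = (phi_r + psi_r) * lam / (4 * np.pi)             # N = 0, lowest profile
print(f"h = {h*1e3:.1f} mm = {h/lam:.2f} lambda")   # ~26.9 mm, 0.22 lambda
```

The near-zero HIS phase ψ_r is what permits the N = 0 solution here: a PEC ground reflects with a 180° phase, which would force a taller cavity, so the artificial magnetic conductor behavior of the HIS is directly responsible for the low profile.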
Figures 12(a) and 12(b), respectively, illustrate the simulated electric field distribution of the conventional microstrip antenna and of the proposed microstrip antenna with HIS elements and the modified FSS superstrate. The electric field distribution of the conventional microstrip antenna forms a hemispherical shape above the radiator. The extension region of the proposed microstrip antenna with HIS and FSS is larger than that of the conventional microstrip antenna; this region is enlarged by the FSS cells of the superstrate layer. Meanwhile, the electric field distribution through the superstrate layer becomes almost parallel to free space, and the transmission coefficient of the superstrate layer approaches 0 dB, resulting in higher directivity. Moreover, as shown in Figure 4, the reflection coefficient of the modified FSS was −0.0057 dB (a very high reflection). The modified FSS superstrate was placed parallel to the antenna with HIS elements at a distance of 20 mm; as a result, a resonance arises from the multiple reflections between the superstrate layer and the antenna [25], and a very high antenna directivity was achieved, in accordance with (9). Table 1 tabulates the optimal dimensions of the proposed microstrip antenna with the HIS and FSS structures. Figures 14 through 17 present the simulation and experimental (measured) results with regard to |S₁₁|, the radiation pattern, gain, and radiation efficiency, respectively, under the various antenna schemes (i.e., the conventional microstrip antenna, the microstrip antenna with HIS, and the microstrip antenna with both HIS and FSS).

In Figure 14, the bandwidth of the proposed antenna (i.e., the microstrip antenna with HIS and FSS) was noticeably wider than that of the conventional microstrip antenna. This phenomenon can be attributed to the high impedance of the HIS elemental structure at the resonant frequency, which widens the bandwidth of the proposed antenna (see (8)). Moreover, the measured |S₁₁| indicated that the proposed antenna was operable in the 2.39-2.51 GHz frequency range.

Figures 15(a) and 15(b), respectively, illustrate the XZ- and YZ-plane radiation patterns at 2.45 GHz. The half-power beam widths (HPBW) of the initial antenna in the XZ- and YZ-planes were, respectively, 95° and 105°, and its front-to-back (F/B) ratio was 13.967 dB. Meanwhile, the HPBW of the proposed antenna in the XZ- and YZ-planes were, respectively, 45° and 50°, and its F/B ratio was 26.96 dB. By comparison, the HPBW of the proposed antenna was narrower than that of the conventional microstrip antenna. In addition, the XZ- and YZ-plane cross-polarization levels of the proposed antenna were lower than the corresponding levels of the conventional microstrip antenna. This can be explained by the fact that the surface wave could no longer propagate and the FSS superstrate layer efficiently directed the EM waves from the microstrip antenna. The proposed antenna could thus achieve higher directivity relative to the conventional microstrip antenna. Figure 16 compares the simulated and measured antenna gains at 2.45 GHz under the various antenna schemes. The findings showed a significant increase in the antenna gain under the proposed scheme (10.14 dBi for the microstrip antenna with HIS and FSS) vis-à-vis that of the conventional microstrip antenna (2.28 dBi).
Figure 17 compares the radiation efficiencies under the various antenna schemes. The radiation efficiency is enhanced by incorporating both the HIS and FSS structures into the microstrip antenna. Specifically, at 2.45 GHz, the simulated and measured radiation efficiencies of the proposed antenna are 82% and 77%, respectively. The measured radiation efficiency of the proposed antenna is enhanced by 35.6% compared with the conventional microstrip antenna.

Table 2 tabulates the comparative performance of existing metamaterial-integrated antennas and the proposed antenna. Specifically, in [26], the PFMA with a spiral-like EBG achieved a gain of 5.6 dBi but suffered from design and fabrication challenges due to the spiral-like EBG structures, while in [27], the aperture-coupled microstrip antenna (ACMA) with an FSS superstrate achieved a gain of 15 dBi at 9.5 GHz but exhibits a high back lobe. Interestingly, the EBG resonator antenna (ERA) with a phase-correcting structure (PCS) superstrate in [28] achieved the highest gain (21.2 dBi) but suffered from fabrication challenges due to the PCS design. All in all, the proposed antenna (i.e., the microstrip antenna with HIS and FSS) achieves a relatively high gain (10.14 dBi) given its compact size and low profile.

Conclusions This research has proposed a microstrip antenna integrated with high-impedance surface (HIS) elements and a modified frequency selective surface (FSS) superstrate for 2.4 GHz band applications. The electromagnetic band gap (EBG) structure was adopted in the fabrication of both the HIS and FSS structures. In the antenna design, the HIS elemental structure was mounted onto the antenna substrate around the radiating patch to suppress the surface wave, and the modified FSS superstrate was suspended 20 mm above the radiating patch to improve the directivity. Simulations were carried out to determine the optimal dimensions of the constituent components, and the antenna prototype was subsequently fabricated and tested. The simulated and measured results were in good agreement. Specifically, the proposed antenna (i.e., the microstrip antenna with the HIS and FSS structures) achieved a measured gain of 10.14 dBi at 2.45 GHz and a reflection coefficient below −10 dB. In addition, the HIS-FSS-integrated antenna was operable in the 2.39-2.51 GHz frequency range. More importantly, the proposed integrated antenna outperformed the conventional microstrip antenna with regard to |S₁₁|, the radiation pattern, gain, and radiation efficiency.

Figure 1: The schematic diagrams of (a) the FSS superstrate layer with the antenna ground plane acting as the lower plane of the EBG structure and (b) the top view of the FSS superstrate.
Figure 2: The schematic diagram of a microstrip antenna with HIS structures.
Figure 4: The simulated magnitude and phase of the reflection coefficient of the modified FSS cell for various w₁.
Figure 5: The simulated magnitude of the total transmission coefficient of the modified FSS for various w₁.
Figure 8: The schematic diagrams of the microstrip antenna with (a) the FSS superstrate and (b) the FSS and HIS.
Figure 9: The antenna structure. (a) The initial antenna. (b) The antenna with HIS. (c) The modified FSS structure. (d) FSS unit cell. (e) Side view.
Figure 10: The simulated antenna directivity and |S₁₁| of the modified FSS under various w₁.
Figure 12: Simulated electric field distribution of (a) the conventional microstrip antenna and (b) the proposed microstrip antenna with HIS elements and modified FSS superstrate.
Figure 13: Photograph images of (a) the prototype microstrip antenna with HIS, (b) the prototype modified FSS superstrate layer, and (c) the side view of the HIS-FSS-integrated microstrip antenna.
Figure 15: The simulated and measured radiation patterns under various antenna schemes: (a) XZ- and (b) YZ-planes.
Figure 16: The simulated and measured antenna gains under various antenna schemes.
Figure 17: The simulated and measured radiation efficiency under various antenna schemes.
Table 1: The optimal dimensions of the microstrip antenna with the HIS elements and FSS superstrate.
Table 2: The performance comparison of antennas with metamaterial(s).
Nuclear astrophysics activities at the n_TOF facility at CERN. The n_TOF facility at CERN has been operational since 2001 and provides neutron-induced cross section data of interest to several research fields, including nuclear astrophysics. The neutron time-of-flight (TOF) facility features three experimental areas located at different distances from the pulsed neutron source. Two beam lines, at nominal distances of 185 m and 19 m, are especially equipped for TOF experiments. A third station, at approximately 3 m from the neutron source, was conceived for irradiation and activation measurements. So far, neutron-induced cross sections for more than 100 isotopes have been measured.

Introduction Radiative neutron capture, i.e. (n,γ), cross sections represent one of the most relevant nuclear inputs to models of stellar nucleosynthesis of the elements heavier than iron. For instance, the s process [1,2] proceeds via a sequence of neutron captures and β-decays from a distribution of seed nuclei around iron, thus building up elements up to bismuth. In this scenario, β-decay rates are faster than neutron capture rates, so the nuclear reactions proceed along the valley of stability on the chart of nuclei. In addition to (n,γ) reactions, and to a minor extent, (n,p) and (n,α) reactions on a few light elements can play a relevant role when these reactions absorb a large number of neutrons, thus affecting the efficiency of the s process in synthesizing heavy elements. Furthermore, (n,p) and (n,α) reactions are of some relevance in the modelling of the nucleosynthesis that occurred during the Big Bang, or for the study of particular topics such as the stellar production of the ²⁶Al gamma-ray emitter observed in our galaxy.

The experimental observable of interest is the neutron-induced cross section averaged over the stellar neutron-energy distribution, typically referred to as the Maxwellian averaged cross section (MACS). Experimentally, MACS are determined via two techniques: either time-of-flight (TOF) or activation. The TOF technique is based on the measurement of energy-dependent cross sections over a wide energy region, and the subsequent calculation of the MACS at different kT (k being the Boltzmann constant and T the temperature). In the second technique, a sample is first irradiated with a neutron beam with a Maxwellian-like energy distribution, and the resulting product nucleus is subsequently counted. While MACS between kT = 5 and 100 keV can be estimated from TOF measurements, activation experiments are performed at a single temperature, typically around kT = 30 keV. So far, the n_TOF collaboration has provided nuclear data using the TOF method for a large number of intriguing physics cases (see for instance ref. [3] and references therein). In the near future, the effort will extend to activation measurements as well.

From laboratory experiments to stellar reaction rates The astrophysical reaction rate depends on the number density of interacting particles times the reaction rate per particle pair ⟨σv⟩. This latter term describes the probability of nuclear reactions between two particles moving at relative velocity v. It is important to note that the interacting particles are in thermodynamic equilibrium in a stellar plasma; therefore, their kinetic energy is linked to their thermal motion.
Consequently, as already mentioned, the relative velocity can be described by a Maxwell-Boltzmann distribution φ_MB,

φ_MB(v) = 4π v² (µ / 2πkT)^{3/2} exp(−µv² / 2kT), (1)

which presents a maximum located at a different velocity depending on the temperature. More in detail, the maximum occurs at the velocity v_T = √(2kT/µ), µ being the reduced mass of the system formed by the interacting particles. As neutron-induced reaction cross sections are measured as a function of energy E (here E represents the centre-of-mass energy), it is customary to express the velocity distribution as an energy distribution,

φ_MB(E) ∝ E exp(−E/kT). (2)

Therefore, the reaction rate can also be expressed in terms of the MACS, ⟨σv⟩ = v_T · MACS, with

MACS = ⟨σ⟩_kT = (2/√π) (kT)^{−2} ∫₀^∞ σ(E) E exp(−E/kT) dE. (3)

The advantage of the TOF method with respect to activation is evident: numerical integration (Eq. 3) of energy-dependent cross section data makes it possible to derive MACSs at all relevant stellar temperatures (5 < kT < 100 keV). It is important to remark that σ(E) is measured over a large energy interval. On the other hand, in the case of rare and/or short-lived isotopes, it is not possible to prepare samples with sufficient mass (mg or higher) and/or enrichment for a TOF measurement. In these cases, an alternative to TOF measurements is the activation method, provided that neutron captures result in unstable isotopes. For instance, activation measurements on isotopes with half-lives of minutes or less can be performed by cycling between irradiation and radiation measurement.

The n_TOF facility After several facility upgrades, n_TOF now features two beam lines and corresponding experimental areas, EAR1 at 185 m and EAR2 at 19 m for TOF measurements, plus an irradiation station referred to as NEAR for activation measurements. The n_TOF facility is a white spallation source driven by the CERN Proton Synchrotron (PS). More in particular, neutrons are produced by 20 GeV/c protons from the PS impinging onto a massive 80 × 80 × 60 cm³ Pb block [4]. The initially fast neutrons are moderated by a 5 cm water layer, resulting in a wide neutron energy spectrum at both experimental areas EAR1 and EAR2 [5,6]. Neutron energies span 11 decades, from meV to GeV, and ≈5 × 10⁵ and 10⁷ neutrons per bunch reach EAR1 and EAR2, respectively. Finally, a neutron flux some 100 times higher than at EAR2 is expected at NEAR. The high instantaneous neutron flux at relatively large distances from the spallation target is the result of the combination of the PS features and those of the neutron-producing target.

Measurements at EAR1 and EAR2 The TOF technique is based on a measurement of the time needed by a neutron to travel a given distance L. This time t can be used to determine the neutron velocity, v = L/t, and consequently its kinetic energy E_n = (γ − 1) m c², where γ represents the relativistic Lorentz factor γ = (1 − v²/c²)^{−1/2}, m is the mass of the neutron and c is the speed of light. The PS provides a pulsed proton beam of approximately 10¹³ protons grouped in bursts of 7 ns FWHM. The primary proton beam produces a large amount of secondary particles, including γ-rays. These latter particles, travelling along the beam pipe, reach the experimental areas after a fixed time t_γ = L/c, thereby providing a reference time for the "start" signal, i.e. the moment when neutrons are produced in the spallation target. The "stop" signal is obtained from the time of detection of the neutron-induced reaction products. Consequently, the measured TOF is obtained from the time difference between the stop and start signals.
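Both observables above reduce to short computations: the relativistic TOF-to-energy conversion and the Maxwellian average of Eq. (3). The sketch below is illustrative; the 1/v cross-section shape is a toy stand-in, not n_TOF data:

```python
# Sketch: convert a measured time of flight into relativistic neutron
# kinetic energy, then fold an energy-dependent cross section into a MACS.
import numpy as np
from scipy.integrate import quad

C = 299_792_458.0        # m/s
MN_C2 = 939.565e6        # neutron rest energy in eV

def tof_to_energy(t, L=185.0):
    """Kinetic energy in eV from flight time t (s) over baseline L (m)."""
    beta = (L / t) / C
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return (gamma - 1.0) * MN_C2

def macs(sigma, kT):
    """Maxwellian-averaged cross section, Eq. (3):
    (2/sqrt(pi)) * (kT)^-2 * integral of sigma(E) * E * exp(-E/kT) dE."""
    integrand = lambda E: sigma(E) * E * np.exp(-E / kT)
    val, _ = quad(integrand, 0.0, 50.0 * kT)
    return 2.0 / np.sqrt(np.pi) * val / kT**2

sigma_1_over_v = lambda E: 1.0 / np.sqrt(E)   # toy 1/v capture shape

print(f"E_n for 1 ms over 185 m: {tof_to_energy(1e-3):.1f} eV")
print(f"MACS at kT = 30 keV: {macs(sigma_1_over_v, 30e3):.4e}")
```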
In addition to the neutron flux, a discriminating feature of neutron facilities is the energy resolution,

ΔE_n/E_n = 2 √((Δt/t)² + (ΔL/L)²),

where the resolution broadening is dominated by the neutron transport in the target-moderator assembly, which effectively smears the flight path L. Thanks to the long flight paths of EAR1 and EAR2, ΔE_n/E_n can be as small as 10⁻⁴.

Measurements at NEAR Being so close to the neutron-producing target, NEAR is characterized by a very high neutron flux, well suited for activation measurements of astrophysical interest. The neutron beam is transported from the spallation target to the NEAR station through a collimated pipe in the shielding wall, where samples are irradiated with ≈10⁸ neutrons per bunch. This new area is complemented by a γ-ray spectroscopy laboratory equipped with an n-type HPGe detector of 55% relative efficiency, for the measurement of the activity resulting from irradiation of samples in the NEAR station. Feasibility studies are ongoing to demonstrate the possibility of producing Maxwellian-like neutron spectra at different stellar temperatures by means of a neutron moderator/filter assembly; see for instance ref. [7]. In summary, after shaping the neutron spectrum to resemble a Maxwellian spectrum at a given temperature φ_MB(kT), a sample is irradiated for a certain period. The number of freshly produced nuclei is finally measured through the activity of the sample, which to first approximation is proportional to the MACS [8]. Moreover, the n_TOF collaboration has promoted a program of measurements aimed at studying neutron-induced charged-particle reactions of astrophysical interest, amongst them the measurement of the ⁷Be(n,α) and ⁷Be(n,p) reactions relevant for Big Bang nucleosynthesis [21,22], and the neutron destruction of the cosmic gamma-ray emitter ²⁶Al by (n,p) and (n,α) reactions [18,19].
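The irradiate-and-count sequence at NEAR follows the textbook activation relations (saturation during irradiation, exponential decay afterwards). A minimal sketch with placeholder numbers:

```python
# Sketch of an activation measurement: irradiate, cool, count.
# Flux, cross section, sample size, and half-life are placeholders.
import numpy as np

flux = 1e9          # effective neutron flux on the sample, n/cm^2/s (toy)
sigma = 1e-24       # capture cross section, cm^2 (1 barn, toy)
n_target = 1e20     # target atoms in the sample (toy)
t_half = 600.0      # product half-life, s (toy)
lam = np.log(2.0) / t_half

t_irr, t_cool, t_meas = 3600.0, 120.0, 1800.0    # seconds

rate = flux * sigma * n_target                        # production rate R
n_end = rate / lam * (1.0 - np.exp(-lam * t_irr))     # end of irradiation
n_cool = n_end * np.exp(-lam * t_cool)                # after transport
decays = n_cool * (1.0 - np.exp(-lam * t_meas))       # decays while counting
print(f"nuclei produced: {n_end:.3e}, decays in counting window: {decays:.3e}")
```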
Effective second-order correlation function and single-photon detection Quantum-optical research on semiconductor single-photon sources puts special emphasis on the measurement of the second-order correlation function g⁽²⁾(τ), arguing that g⁽²⁾(0) < 1/2 implies the source field represents a good single-photon light source. We analyze the gain of information from g⁽²⁾(0) with respect to single photons. Any quantum state for which the second-order correlation function falls below 1/2 has a nonzero projection on the single-photon Fock state. The amplitude p of this projection is arbitrary, independent of g⁽²⁾(0). However, one can extract a lower bound on the single-to-multi-photon-projection ratio. A vacuum contribution in the quantum state of light artificially increases the value of g⁽²⁾(0), cloaking the actual single-photon projection. Thus, we propose an effective second-order correlation function g̃⁽²⁾(0), which takes the influence of vacuum into account and also yields lower and upper bounds on p. We consider the single-photon purity as a standard figure of merit in experiments, reinterpret it within our results, and provide an effective version of that physical quantity. Besides comparing different experimental and theoretical results, we also provide a possible measurement scheme for determining g̃⁽²⁾(0).

Introduction Single photons (SPs) are an essential tool in both quantum optics and quantum information. This includes, besides many other things, device-independent quantum cryptography [1,2] or a photonic quantum network [3,4]. In terms of the Hanbury Brown-Twiss (HBT) experiment, which measures the second-order correlation function

g⁽²⁾(0) = ⟨â†â†ââ⟩ / ⟨â†â⟩²,

with â (â†) the annihilation (creation) operator of a single mode of the field, a deterministic SP source should have g⁽²⁾(0) = 0, as this quantity is connected to the probability of emitting two photons (or more) at the same time. The first SP light source was the fluorescence of single atoms, which provided an almost perfect SP source, but with low intensity and even lower collection efficiency. Rydberg atoms provide a better-suited source, where the light can be manipulated on the slow time scales of electronic decays. Nowadays they are implemented in many quantum-information protocols [5]. Other sources of SPs have been identified, such as spontaneous parametric down-conversion, which has already been used to generate pure SP states [6], quantum scissors based on applying teleportation to a coherent input [7], and strongly attenuated coherent states common in quantum cryptography [8]. Recently, it was even proposed to generate SPs via quenching the vacuum [9]. A large field of research is focused on quantum dots as artificial atoms, coupled to different micro- or nanostructured environments in solids [10][11][12][13]. These tailored many-body structures have been proposed as a powerful tool for SP-based architectures. Their controllable variety in optical properties can substantially widen the range of applicability of SPs [14].
However, because of their complex inner structure, it is not straightforward to show the SP character of such light sources. We analyze what can be learned from g⁽²⁾ and how the effective correlation function can be determined in the lab. We give relative and absolute bounds on p, and analyze theoretical quantum states and real experiments on semiconductor SP sources. Furthermore, we reinterpret the SP purity based on these results. Finally, we analyze a possible setup to measure the effective second-order correlation function directly. From now on, we will only focus on the value of g⁽²⁾(0) and omit the time argument in the notation.

The paper is organized as follows. In section 2, we give the full proof that g⁽²⁾ < 1/2 indicates a nonzero projection on the SP Fock state in the analyzed quantum state of light. We then show in section 3 that the amplitude of this projection is arbitrary, independent of g⁽²⁾. In section 4 we derive a lower bound on the relative amplitude, which directly reveals the importance of the vacuum contribution and yields the effective second-order correlation function g̃⁽²⁾ we propose. In section 5 we apply our results to known quantum states of light, showing the extended applicability of the original g⁽²⁾ criterion. Some alternative interpretations of the above results are given in section 6. Section 7 is devoted to comparing our results to previous work on specific semiconductor systems. We propose a simple measurement scheme based on post-selection to obtain the effective correlation function in section 8. Finally, we give conclusions and an outlook in section 9.

2. Proof that g⁽²⁾ < 1/2 implies SPP Let us first give the full proof that when, for a given state ρ̂, we obtain g⁽²⁾ < 1/2, the state has a nonzero single-photon projection (SPP). For a Fock state |n⟩, the correlation function evaluates to g⁽²⁾ = n(n − 1)/n² = 1 − 1/n. Hence, for any Fock state with n > 1, g⁽²⁾ will be greater than or equal to one half. For the vacuum state |0⟩, one may use the limit of coherent states to define g⁽²⁾ ≔ 1. Thus, all Fock states besides |1⟩ follow the criterion g⁽²⁾ ≥ 1/2, while for |1⟩, we have g⁽²⁾ = 0. Any pure quantum state can be written in the Fock basis as a linear combination. Consequently, if a superposition of two arbitrary states with disjoint Fock statistics cannot yield a lower g⁽²⁾ than either of its constituents, the proof of the above conjecture would be completed for pure states. Due to the diagonal correlations involved within g⁽²⁾, superposing two pure quantum states with disjoint photon statistics is equivalent to mixing them incoherently. Finally, if we do not use the fact that the |ψ_i⟩ are pure states and only consider the expectation values appearing in equation (3) themselves, we can substitute them with general density operators ρ̂_i. Hence, the proof of our conjecture would be done for all quantum states if we can show that the right-hand side of equation (3) cannot become lower than the value of g⁽²⁾ for either of the two states involved therein. Because of the nonlinear nature of g⁽²⁾ with respect to the quantum state, this property has to be shown explicitly. Writing the incoherent mixture as ρ̂ = s ρ̂₁ + (1 − s) ρ̂₂ and assuming without loss of generality that g₂ = t g₁ with t ∈ [0, 1], i.e. g₂ ≤ g₁, one obtains

g⁽²⁾(s) = g₁ [s + (1 − s) t r²] / [s + (1 − s) r]², (4)

with r = n₂/n₁ ≥ 0. The first derivative of g⁽²⁾ with respect to s reads as in equation (5). The denominator of the derivative is positive for any value of s and r. The numerator is linear in s. Thus, there can be no more than one extreme point. As g⁽²⁾ cannot decrease over a full shift of s from 0 to 1, a value below g₂ thus requires a downward slope of g⁽²⁾ at s = 0. That slope can easily be seen from equation (5) to be proportional to 1 − 2tr + tr². The only combination for which this is not positive definite is r = t = 1.
In this case, the two constituent states have equal g_i and n_i. Consequently, they cannot be distinguished by the incoherent superposition in g⁽²⁾, and the second-order correlation function is simply constant. Hence, g⁽²⁾ does not decrease at s = 0, and no combination of states can lower the second-order correlation function below g₂. In other words, for g⁽²⁾ < 1/2 we have a nonzero SPP, i.e. ⟨1|ρ̂|1⟩ > 0. Note that the inverse is not true, and g⁽²⁾ can become larger than either of its constituents. If we consider two states with t = 1, that is, states with different average photon numbers but the same g_i, g⁽²⁾ always increases above g_i and maximizes for s = r/(1 + r). An interesting application of this case is given by two coherent states with different amplitudes. In this case we have ρ̂_i = |α_i⟩⟨α_i| with α_i ∈ ℂ, and the average photon numbers are n_i = |α_i|². As both states have g_i = 1, we can easily derive the value of g⁽²⁾ for their incoherent superposition at the maximum to be

g⁽²⁾_max = (1 + r)² / (4r). (7)

That means that for two coherent states being statistically mixed—which represents a fully classical state—one with large, one with small coherent amplitude, the second-order correlation function scales up without limit. This is an example of a classical state with superbunching to arbitrary orders. In figure 1, we depict g⁽²⁾ for this case with r = 100 over varying s. The same argument holds, e.g., for thermal states with g_i = 2.

Figure 1. Overall second-order correlation function for an incoherent superposition with g₁ = g₂ = 1 and r = 100 over s.

3. Amplitude of SPP With the existence of a nonvanishing SPP shown, the next step is to quantify the amplitude p of this projection. Unfortunately, including the effects of vacuum, it can be arbitrarily small, independent of g⁽²⁾. Consider the following state:

|ψ⟩ = √(1 − p − q) |0⟩ + √p |1⟩ + √q |2⟩. (8)

Note that, due to the correlations under study being exclusively diagonal in Fock space, the phases of the prefactors are irrelevant. We easily compute

g⁽²⁾ = 2q / (p + 2q)². (9)

Solving equation (9) for p yields

p = √(2q/g⁽²⁾) − 2q. (10)

The maximal value of p is obviously g⁽²⁾-dependent. However, q, and subsequently p, can be chosen arbitrarily small for any fixed g⁽²⁾. For example, in figure 2, we show p, p + q and 1 − p − q over q for a fixed g⁽²⁾ = 0.1. As one can see, there is no lower bound on p other than zero. Hence, for determining the absolute amplitude of the SPP, g⁽²⁾ is insufficient, even as an approximation. One can also show that g⁽²⁾ gives no upper limit on p either. Consider a state with fixed g⁽²⁾ and a variable Fock-state number n ≥ 2,

|ψ⟩ = √p |1⟩ + √(1 − p) |n⟩, (11)

for which the second-order correlation function now reads

g⁽²⁾ = (1 − p) n(n − 1) / [p + (1 − p)n]². (12)

Inverting again for p, one finds that for sufficiently large n, p gets arbitrarily close to 1. Note that even for g⁽²⁾ > 1/2 we can have arbitrarily large p. Thus, there is no information on the absolute probability to obtain SPs in the explicit value of g⁽²⁾ itself. Note as well that if we fix g⁽²⁾ = 0.1, for each of these states the previously mentioned SP purity would always be b = 1 − g⁽²⁾ = 90%, even though p varies arbitrarily.

4. Effective second-order correlation function Notwithstanding the above result, the value of g⁽²⁾ does allow one to give a lower bound for the ratio p/q, i.e. the single-to-multi-photon-projection (SMPP) ratio. In most semiconductor scenarios it is desirable to have a source with a high SMPP ratio. A similar bound was proposed in [28] for pulsed excitations. Our result is a generalization based on the value g⁽²⁾.
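The superbunching example above is easy to verify numerically. The mixture formula below is written directly from the diagonal expectation values used in the proof of section 2; it is a minimal check, not code from the paper:

```python
# Numeric check of the incoherent two-coherent-state mixture:
# g(2) of rho = s*rho1 + (1-s)*rho2 peaks at s = r/(1+r) with the
# value (1+r)^2/(4r), here for g1 = g2 = 1 and r = n2/n1 = 100.
import numpy as np

def g2_mixture(s, n1, n2, g1=1.0, g2=1.0):
    num = s * g1 * n1**2 + (1 - s) * g2 * n2**2   # <a+ a+ a a>
    den = (s * n1 + (1 - s) * n2) ** 2            # <a+ a>^2
    return num / den

n1, r = 1.0, 100.0
s = np.linspace(1e-3, 1 - 1e-3, 10_001)
g2 = g2_mixture(s, n1, r * n1)
print(f"max g(2) = {g2.max():.2f} at s = {s[g2.argmax()]:.3f}")
print(f"analytic: (1+r)^2/(4r) = {(1 + r)**2 / (4 * r):.2f}, "
      f"r/(1+r) = {r / (1 + r):.3f}")
```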
Consider now the state in equation (8), only with the state |2⟩ substituted by a general state |ψ₂⟩ covering all multi-photon projections of the state |ψ⟩, which now reads as

|ψ⟩ = √x |0⟩ + √p |1⟩ + √q |ψ₂⟩.

Again, this does not limit the generality of our results, as phases of orthogonal states are irrelevant for the diagonal correlations measured, and general mixed states yield the same results. Analogous to equation (10), we find

p = n₂ (√(q g₂ / g⁽²⁾) − q),

where n₂ and g₂ are the average photon number and second-order correlation function of |ψ₂⟩. Applying the results of section 2 to |ψ₂⟩, which has only multi-photon projections, we know n₂ ≥ 2 and g₂ ≥ 1/2, with equality in both cases for |ψ₂⟩ = |2⟩. Therefore, the ratio p/q has a lower bound of

p/q ≥ 2 (1/√(2 q g⁽²⁾) − 1). (19)

Note that already from q ≤ 1 on the right-hand side of equation (19), we can deduce a nonzero lower bound for p/q for g⁽²⁾ < 1/2. Using p = 1 − q − x, x being the vacuum projection x = ⟨0|ρ̂|0⟩, to solve for q and reinserting the result on the right-hand side of equation (19), we obtain the q-independent optimal value of this lower bound as

p/q ≥ 1/g̃⁽²⁾ − 2 + √((1/g̃⁽²⁾ − 1)² − 1). (20)

The ratio depends on only one parameter, which is, however, not g⁽²⁾ but a scaled version of it, g̃⁽²⁾ = (1 − x) g⁽²⁾. We call g̃⁽²⁾ the effective second-order correlation function. We find the following general statements: for g̃⁽²⁾ = 1/2 the bound vanishes, while for g̃⁽²⁾ ≪ 1 equation (20) can be approximated by 2/g̃⁽²⁾ − 3. Both functions are depicted in figure 3. The necessary g̃⁽²⁾ to obtain a desired ratio p/q ≥ N is given by g̃⁽²⁾ ≤ 2(N + 1)/(N + 2)². For g̃⁽²⁾ just below 1/2 we may even have a larger multi-photon projection than SPP. Assuming a dominant SPP requires N = 10, for which we find the upper limit g̃⁽²⁾ ≈ 0.15. This leads back to the previous statements for g⁽²⁾. In other words, if in the experiments the vacuum contribution is not determined, equation (20) with x = 0 yields all that can be concluded from g⁽²⁾ < 1/2. This does not exclude further conclusions like an upper bound on q (see section 6) or on the average photon number [26]. It does provide one way to formulate this limited information quantitatively. If, however, a non-zero vacuum contribution is known, this knowledge can be extended; see figure 4 for a visualization. For example, for a light source with 90% vacuum (x = 0.9), g⁽²⁾ < 5 already yields nonzero SPP, while g⁽²⁾ = 0.5 actually implies p/q > 37, which could be considered a very good SP source with respect to the SMPP. On the one hand, for a source with high x, one may now say that g⁽²⁾ < 1/2 does imply finding predominantly SPs in photon-number experiments, rather than multiple photons. On the other hand, without definite information on the vacuum, the question of how good this source is remains undecidable. It should be clarified that large vacuum fluctuations counter the deterministic nature of a perfect SP source. If deterministic photons are needed for a given application, we can now clearly state that additional information is required. The vacuum contribution x is one example of such a quantity. For a nearly deterministic source, the conditions are x ≪ 1 and p/q ≫ 1, simultaneously. If instead only a high SMPP ratio is of interest, g⁽²⁾ and g̃⁽²⁾ provide versatile information.

There is a simple physical explanation for this huge influence of the vacuum contribution x on the SMPP ratio of light. Consider a state ρ̂₀ without vacuum (x = 0) and a given SMPP ratio p/q. What happens to g⁽²⁾ if we include vacuum x but request that p/q remain fixed? Again, based on the diagonal correlations we use, we can include vacuum incoherently as

ρ̂ = x |0⟩⟨0| + (1 − x) ρ̂₀. (23)

All ratios of single- and multi-photon projections are fixed in this description.
Due to the linear combination of density-matrix elements in arbitrary expectation values, ⟨â†ⁿâⁿ⟩ is scaled down by a factor of (1 − x) if vacuum is included, independent of n. The second-order correlation function is a quotient of one such expectation value and the square of another, leading to an overall upscaling of g⁽²⁾ by a factor of 1/(1 − x). This factor is exactly compensated in our effective second-order correlation function g̃⁽²⁾ = (1 − x) g⁽²⁾, which thus yields the correct lower bound on p/q independent of x. We rescale the correlation function to a vacuum-independent parameter. Finally, the inclusion of the vacuum allows us to calculate absolute limits for the SPP p. Setting the right-hand side of equation (20) equal to C, we can use q = 1 − x − p ≥ 0 and obtain

C(1 − x)/(1 + C) ≤ p ≤ 1 − x. (24)

Note that such an absolute boundary for p was not possible without knowledge of the vacuum contribution, as x could be arbitrarily close to 1. Also note that in this notation, g̃⁽²⁾ and x enter the bounds independently. Thus, knowledge of the vacuum projection is crucial for determining p.

5. Application to classical states Before we move on to apply our results to explicit previous works, it is insightful to look at known quantum states of light to gauge the quality of these results. For this purpose we analyze our model for the classical cases of a coherent and a thermal state. Their density matrices are analytically known, and with g⁽²⁾ = 1 and g⁽²⁾ = 2, respectively, they would both violate the standard condition for nonzero SPP, g⁽²⁾ < 1/2. Nevertheless, both states always have a SPP, and for low average photon numbers ⟨n⟩ ≪ 1 they approach a state with dominantly not more than one photon. We can calculate the value p/q exactly for both states, yielding p/q = n̄ e^{−n̄}/(1 − (1 + n̄) e^{−n̄}) for the coherent state and p/q = 1/n̄ for the thermal state, with n̄ = ⟨n⟩. Both cases are depicted in figure 5. It can clearly be seen that for low excitation, these classical states are also detected using the effective second-order correlation function. Furthermore, our lower bounds appear as very good approximations for ⟨n⟩ ≲ 0.1. Another way to look at this result is that including the information necessary to calculate the SPP does not limit the fields to nonclassical states of light.

6. Different interpretations based on g̃⁽²⁾ So far we have focused explicitly on the SMPP as the major quantity to be gained from measuring g⁽²⁾. In the following we give a few other interpretations of our results. First, let us reconsider the SP purity b = 1 − g⁽²⁾. Owing to the limitations we have shown above, a reasonable 'purity' of the SPP in the quantum state of the light field can only be given in comparison to the multi-photon projection. Thus, we may define a purity b̃ as the probability of obtaining SPs in a photon-number measurement, under the condition that more than zero photons appear at all. Within our notation, b̃ then reads as

b̃ = p/(p + q). (27)

We used the lower bound on p/q from equation (20) and obtain a rather similar result as for the original SP purity b. Yet, the purity of SPs as defined by b̃ improves b by a factor of two, even without knowledge of x. Note also that in the case of no vacuum, x = 0, this SP purity is actually identical to a lower bound on p itself. Another way of interpreting this result is as an upper bound on the multi-photon projection q.
We easily see from rewriting equation (20) in terms of q that

q ≤ (1 − x)/(1 + C), (31)

from which an upper bound on q, and thus a lower bound on 1 − q, follows as x + p = 1 − q ≥ 1 − (1 − x)/(1 + C). If for a quantum state of light ρ̂ one obtains g̃⁽²⁾ < 1/2, then there is a non-zero lower bound on the sum of the projections on the zero-photon and single-photon Fock states. The notion of very low multi-photon projection from low g⁽²⁾ measurements has been an important aspect of research; see e.g. [28, 29]. At this point, it is worth comparing with the results from [26]. In that work it was shown that g⁽²⁾ ≤ 1/2 implies that the average photon number ⟨n⟩ has to be smaller than or equal to 2, with equality for the Fock state |2⟩. Hence, a value of g⁽²⁾ ≤ 1/2 implies q₃ ≤ 1/3, q₄ ≤ 1/6, and so on. In our derivation of equation (20), we used the monotonicity of g⁽²⁾ with respect to increasing Fock states |n⟩. One can conclude that for a fixed g⁽²⁾ below the value for that Fock state, the highest possible projection q_n is realized in a state containing only contributions from the single- and n-photon Fock states, the state given in equation (11). In that notation the quantity q_n from above is just 1 − p, and we can calculate q₃ ≤ 0.134. This is already a factor of up to 4 better than the upper boundary from equation (33). We can further improve the result by using the effective second-order correlation function g̃⁽²⁾ instead of the bare g⁽²⁾. For obtaining the maximal contribution of q_n for x ≠ 0, one has to apply the corresponding conditions n₂ = n and g₂ = 1 − 1/n in equation (19). For example, given 50% vacuum (x = 0.5), g⁽²⁾ = 1/2 implies a maximal 3-photon projection of q₃ = 0.051.

7. Comparison with previous works This section is devoted to comparison with previous works from the solid-state community. While x should be easily measurable in experiments without resorting to a full quantum-state reconstruction, it has so far usually not been the focus of research. Thus, we will carefully extract a lower limit for x from the data in the following references, so as not to overestimate the effect of the vacuum. In the experiment performed in [30], the authors use a quantum dot in a high-quality micropillar cavity, obtaining for low cw-laser power, and after subtracting experimental limitations, g⁽²⁾ ≈ 0.08, which is already a very good value with p/q ≥ 22. However, as this is a weakly excited quantum dot, vacuum should be relevant. Taking the fit parameters of the experiment (Rabi frequency ℏΩ = 0.9 μeV, lifetimes T₂ ≈ 2T₁ = 1150 ps) and applying those to the same simple two-level model they used, we roughly find x ≈ 0.58, and thus g̃⁽²⁾ ≈ 0.034 and p/q ≥ 56. In this case an already good SP source can be shown to be even better by evaluating the vacuum. Furthermore, as we have an explicit value for x, we can calculate the SPP p from equation (24) as 0.41 ≲ p ≤ 0.42. Thus, we get a very precise value for the real SP probability in a photon-number measurement. In comparison, consider [20], wherein the authors experimentally analyze a SP filter changing from SP characteristics (g⁽²⁾ < 1/2) to coherent dynamics (g⁽²⁾ = 1) by varying the input laser power. For low laser power g⁽²⁾ ≈ 0.35, which without knowledge of the vacuum only implies p/q ≥ 2.5. For this and the following example, we use the relation x ≥ 1 − ⟨n⟩. Using figure 3(a) from that paper, we find for an input photon number of 0.1 that ⟨n⟩ ≤ 0.1 and thus x ≥ 0.9. This means that vacuum is the dominating contribution in this light field.
Using the effective second-order correlation function, we can state g̃⁽²⁾ ≈ 0.035 and p/q ≥ 54. This is almost as good as the result of [30] above. Moreover, if we go to an input photon number of 0.9, they obtained g⁽²⁾ ≈ 0.5, but still x ≥ 0.6. Hence g̃⁽²⁾ ≈ 0.2 and p/q ≥ 7, which is still a better SMPP ratio than the original low-power limit given by the authors. Our results show that not-so-good SP sources may merely be disguised as such via strong vacuum contributions.

Now let us look at a theoretical example which does not aim at SPs at all [31]. Therein, the authors analyze a few emitters in a cavity and look at the output field. In particular, figures 4 and 6 of that work show the average photon number and g⁽²⁾, respectively, for varying temperatures and cavity-emitter coupling strengths for the case of two emitters. Combining both figures and our above analysis, we can estimate that a nonzero SPP is given in almost every point of the depicted state space, but most interesting is the upper left corner in part (a) of these figures, showing low temperature and high coupling strength. In this region g⁽²⁾ ≲ 4, which is far above the SP limit of 0.5. Yet, with an average photon number of the order of 10⁻⁹, it becomes virtually impossible to observe more than one photon at once, as p/q ≈ 5 × 10⁸. We have gathered all mentioned results in table 1. They clearly show how much higher the quality of a SP source, at least with respect to the SMPP ratio, may be than what the sole value of g⁽²⁾ implies.

Table 1. Comparison of the experimental and theoretical results of previous works with respect to the SMPP ratio. The third column gives the lower bound on p/q without knowledge of the vacuum contribution (x = 0), whereas the last two columns give that bound with knowledge of x and the effective SP purity. The superscripts 'a' and 'b' on [20] indicate the two scenarios discussed in the main text. Columns: Reference | g⁽²⁾ | p/q (x = 0) | x | g̃⁽²⁾ | p/q | b̃.

Finally, in a very recent theoretical study the authors compared two different quantum-dot SP setups and theoretically analyzed the two-photon contribution [29]. Helpful for our comparison is that the authors present numerical results for g⁽²⁾ as well as for x, p, and the two-photon projection similar to our q, there denoted P₀, P₁, and P₂, respectively. We consider figure 3 of [29], wherein all these quantities for both setups are plotted as a function of laser-pulse length. Using equation (31), we can determine an upper bound on q, and thus on P₂, to compare with the numerical simulation of the two-photon projection; see figure 6. If information about the vacuum is not included, the result follows the actual projection nicely for short pulse lengths and correspondingly little vacuum. Around the maximum projection of P₂ at a normalized pulse length of 3, vacuum becomes relevant, and while the actual projection falls off, our upper bound grows up to one when g⁽²⁾ increases above 1/2. In contrast, when vacuum is included, the upper bound follows the actual Fock-state projection for both setups in all regions. In particular, for the two-level setup on the left, we see around the maximum some difference between the simulation and the upper bound, indicating an even higher Fock-state projection than n = 2. For the three-level setup, the upper bound stays close to the projection for all pulse lengths. Thus, in this setup there is virtually no higher Fock-state contribution.
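The numbers in this section, as well as the maximal q₃ quoted in section 6, can be reproduced in a few lines. The closed form used below for the bound of equation (20) is the reconstruction given above (adopted because it matches every p/q value quoted in the text), so treat it as an assumption of this sketch rather than a formula copied from the paper:

```python
# Reproduces the p/q lower bounds quoted for [30], [20] and [31], and
# the maximal q_n of section 6 (x = 0 case).
import math
import numpy as np

def g2_eff(g2, x):
    """Effective second-order correlation function g~(2) = (1 - x) g(2)."""
    return (1.0 - x) * g2

def pq_bound(g2, x=0.0):
    """Lower bound on the SMPP ratio p/q from Eq. (20); needs g~(2) < 1/2."""
    a = 1.0 / g2_eff(g2, x) - 1.0
    return a - 1.0 + math.sqrt(a * a - 1.0)

def qn_max(g2, n):
    """Largest |n>-projection compatible with g(2) for p|1><1| + q|n><n|,
    solving g*(1 + (n-1) q)^2 = n (n-1) q with p = 1 - q."""
    coeffs = [g2 * (n - 1) ** 2, 2 * g2 * (n - 1) - n * (n - 1), g2]
    roots = np.roots(coeffs)
    return min(r.real for r in roots
               if abs(r.imag) < 1e-12 and 0 <= r.real <= 1)

cases = [("[30], x=0", 0.08, 0.0), ("[30], x=0.58", 0.08, 0.58),
         ("[20]a, x=0", 0.35, 0.0), ("[20]a, x=0.9", 0.35, 0.9),
         ("[20]b, x=0.6", 0.5, 0.6), ("[31]", 4.0, 1.0 - 1e-9)]
for ref, g2, x in cases:
    print(f"{ref:14s} g~(2) = {g2_eff(g2, x):.3g}, p/q >= {pq_bound(g2, x):.3g}")

print(f"q3_max(g2 = 1/2, x = 0) = {qn_max(0.5, 3):.3f}")   # 0.134, as quoted
```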
Finally, for completeness, it should be noted that for very short pulse lengths our upper bound falls slightly below the calculated projection. This deviation may come from numerical limitations or from the fact that for very short pulses the second-order correlation function is not based on steady-state correlations.

8. Measurement of g̃⁽²⁾ Experimental limitations caused by strong vacuum contributions in weak light fields have been considered before. In [32] the authors discuss an optomechanical system with a dipole-like coupling between the optical photon and the mechanical phonon. Thus, a one-to-one relation between the excited states of the two subsystems was established, and single-phonon states were detected via single-photon measurement. The latter was performed by a standard HBT experiment. Vacuum was a major issue in this work, and hence the authors circumvented this problem by employing post-selection techniques; see figure 2 of [32]. In a first step, they detected the emission of a photon from the initialization process, before applying HBT measurements in the next step. Thus, the g⁽²⁾ measurement was limited to the cases with no vacuum. From a theoretical point of view, this procedure is the inversion of the vacuum inclusion used in equation (23), removing the vacuum component x from the original quantum state of light. Correspondingly, the determined g⁽²⁾ of the post-selected state is equal to the effective second-order correlation function g̃⁽²⁾ of the original quantum state of light. Thus, without explicit determination of the vacuum contribution x, we can obtain the effective second-order correlation function and thus calculate the lower limit for the SMPP ratio, equation (20). However, there are two drawbacks to this method. On the one hand, as x itself is not determined, we still do not obtain bounds for the SPP p, but rather only a lower bound for the relative quantity p/q. On the other hand, we do not analyze the original fields, but only the state without vacuum. While a nonzero SPP can easily be detected in this way, the anti-bunching character found from g⁽²⁾ < 1 may be completely lost, unless we know that the original g⁽²⁾ fulfilled this condition; see the examples in section 5. Hence, we conclude that the values with and without vacuum removal must both be determined, or, in other words, the vacuum contribution x is a necessary ingredient for estimating the actual SPP in the state.

Figure 6. Comparison of the results of [29] with our description. Left: the two-level system; right: the three-level system. The black solid line indicates the numerics for the two-photon contribution over increasing normalized excitation pulse length 1/γ. Using the data of the authors for g⁽²⁾ and P₀ = x, we can compare with the upper bound on our q (and thus on P₂) for the case without (blue, narrow dashing) and with (red, wide dashing) taking vacuum into account.
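The equivalence between vacuum post-selection and g̃⁽²⁾ can be checked directly on any photon-number distribution; a toy sketch:

```python
# Vacuum post-selection as in [32]: deleting the vacuum component from a
# photon-number distribution rescales g(2) by (1 - x), so the HBT value
# of the post-selected state equals g~(2) of the original state.
import numpy as np

P = np.array([0.80, 0.15, 0.04, 0.01, 0.00])   # toy P(n), n = 0..4
n = np.arange(P.size)

def g2(P):
    mean_n = np.sum(n * P)                      # <a+ a>
    pairs = np.sum(n * (n - 1) * P)             # <a+ a+ a a>
    return pairs / mean_n**2

x = P[0]
P_post = P.copy()
P_post[0] = 0.0
P_post /= P_post.sum()                          # renormalize without vacuum

print(f"g2 = {g2(P):.4f}, (1 - x) * g2 = {(1 - x) * g2(P):.4f}")
print(f"g2 after vacuum post-selection = {g2(P_post):.4f}")   # identical
```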
Thus, we proposed an effective second-order correlation function g̃^(2), which takes the influence of vacuum into account and, when combined with information about the vacuum projection, also yields lower and upper bounds on p. We considered the SP purity as a standard figure of merit in experiments and reinterpreted it within our results. Comparison with other experimental and theoretical results indicates that there are many more SP light sources where the SMPP ratio is much higher than expected, owing to vacuum contributions. We also provided a measurement scheme for g̃^(2), which, however, may yield artificial nonclassicality of the quantum state of light. These vacuum contributions entered our derivation quite naturally, and their physical origin was only explained afterwards. As indicated by the chosen examples, the average photon number is an often-determined quantity that may also provide further results on the SP character, compare also [26]. Other figures of merit for SP sources include indistinguishability [33] and coalescence [34] of different SPs, which require more emphasis on the decohering processes and thus on the Hamiltonian and dissipative structure of the system at hand. Each of these quantities yields its own additional information, but requires an individual derivation akin in size to the case discussed in this work. Performing and tracking these derivations is intended as future work.
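To make the bounds above concrete, the following is a minimal numerical sketch (ours, not the paper's code). It assumes, purely for illustration, that the vacuum-removed state has support on at most two photons, so that g̃^(2) = 2q/(p + 2q)^2 with p + q = 1; solving this quadratic for q gives a real root exactly when g̃^(2) ≤ 1/2, mirroring the SP criterion. The paper's general bounds, equations (20) and (31), may differ in detail.

```python
import math

def smpp_bound(g2, x=0.0):
    """Illustrative lower bound on the single-to-multi-photon ratio p/q.

    Assumes the vacuum-removed state has support on at most two photons:
    p + q = 1 and g2_eff = 2q / (p + 2q)^2.  The effective correlation is
    g2_eff = (1 - x) * g2, since admixing a vacuum fraction x scales both
    <n> and <n(n-1)> by (1 - x).
    """
    g2_eff = (1.0 - x) * g2
    if g2_eff <= 0.0:
        return math.inf          # no multi-photon events at all
    if g2_eff > 0.5:
        return 0.0               # no nontrivial bound above the SP limit
    # Real root of g2_eff*q^2 + (2*g2_eff - 2)*q + g2_eff = 0:
    q = ((1.0 - g2_eff) - math.sqrt(1.0 - 2.0 * g2_eff)) / g2_eff
    return (1.0 - q) / q

print(smpp_bound(0.035))        # ~54, matching the low-power example
print(smpp_bound(0.5, x=0.6))   # a finite bound recovered once x is known
```

With these assumptions, g^(2) = 0.035 reproduces a ratio of about 54, and supplying x rescues a nontrivial bound for a source whose raw g^(2) sits at the 0.5 limit.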
Improved Battery Cycle Life Prediction Using a Hybrid Data‐Driven Model Incorporating Linear Support Vector Regression and Gaussian Process Regression

Abstract The ability to accurately predict lithium‐ion battery life‐time already at an early stage of battery usage is critical for ensuring safe operation, accelerating technology development, and enabling battery second‐life applications. Many models are unable to effectively predict battery life‐time at early cycles due to the complex and nonlinear degradation behavior of lithium‐ion batteries. In this study, two hybrid data‐driven models, incorporating a traditional linear support vector regression (LSVR) and a Gaussian process regression (GPR), were developed to estimate battery life‐time at an early stage, before more severe capacity fading, utilizing a data set of 124 battery cells with lifetimes ranging from 150 to 2300 cycles. Two types of hybrid models, here denoted as A and B, were proposed. For each of the models, we achieved 1.1 % (A) and 1.4 % (B) training error and, similarly, 8.3 % (A) and 8.2 % (B) test error. The two key advantages are that the error percentage is kept below 10 % and that very low error values for the training and test sets were observed when utilizing data from only the first 100 cycles. The proposed method thus appears highly promising for predicting battery life during early cycles.

Introduction Lithium-ion (Li-ion) batteries are used in a wide range of applications, from electronic devices to electric vehicles and grid energy storage systems, because of their low cost, long life, and high energy density. [1,2] These rechargeable batteries lose capacity, energy, and power over time as a result of internal electrochemical processes and external operating conditions. Thus, Li-ion battery aging is generally characterized as an increase in internal resistance and a decrease in capacity, which constitute major problems. [3,4] Battery aging increases the cost of energy storage systems and may potentially result in serious accidents such as fires and explosions. Therefore, accurate battery cycle life prediction is critical for optimizing the performance of energy storage systems while assuring their safety and reliability. [5] Since the emergence of commercial electric vehicles (EVs), battery life-time has been a focus of research, with different Li-ion batteries being cycled and/or stored in order to identify different degradation mechanisms. [6] To maintain the safety and reliability of battery-powered systems, it is generally recommended that batteries be replaced when they can only store 80 % of their initial capacity. Laboratory studies are typically performed to better understand battery aging behavior under various operating conditions, with the resulting data being fed into or used to develop battery cycle life prediction models. [7] In recent years, a variety of methods for predicting battery lifetime have been presented. [8][9][10] Generally, battery lifetime prediction methods include model-based, data-driven, and hybrid approaches. [11][12][13][14] Model-based approaches use information about a system's failure mechanisms (e. g., solid electrolyte interface (SEI) growth) to provide a mathematical description of the degradation process, or they build an empirical (experience-based) model to reproduce the system's declining trajectory.
[15] They normally use different filtering algorithms such as the Kalman filter (KF), [16] the extended Kalman filter (EKF), [17] or the particle filter (PF) [18] to update model parameters recursively by sampling one measurement data point at a time. Hu et al., [19] for example, used a dual fractional-order extended Kalman filter (DFOEKF) for co-estimation of state of charge (SOC) and state of health (SOH) for lithium-ion batteries. Data-driven modeling strategies, on the other hand, use historical data, real-time data, or both to determine the characteristics of the currently observed damage state and estimate future trends. [12,[20][21][22] Ng et al. [23] published a list of recent data-driven models for battery state estimation. Finally, hybrid approaches combine model-based and data-driven methods in order to leverage the strengths of both approaches. [11,15,24,25] Data-driven models using statistical and machine learning techniques have gained a lot of interest in battery prognostic applications since they do not necessitate a deep understanding of battery failure and other physical mechanisms. In these models, the battery systems are treated as black boxes that provide a mapping between various input and output variables. An increasing number of articles have been devoted to data-driven algorithms for predicting battery state and life-time in recent years. Che et al. [26] used a universal deep learning method for prognostics and battery pack state of health estimation. Hu et al. [27] developed a hybrid approach for lithium-ion battery RUL prediction based on a particle filter (PF) and a long short-term memory (LSTM) neural network. Liu et al. [28] employed a Gaussian process regression (GPR) with composite kernels coupling the Arrhenius law and a polynomial equation to capture the electrochemical and empirical knowledge of battery degradation. Nuhic et al. [29] used the support vector machine (SVM) for the estimation of state of health (SOH) and remaining useful life (RUL). Ma et al. [30] used the battery capacity in a specific window (the minimum embedding dimension of the capacity data) as input features and created a hybrid neural network that integrated a convolutional neural network and long short-term memory to predict battery lifetime. Son et al. [31] employed a Gaussian process regression using multiphysics features, including mechanical and impedance evolutionary responses, to estimate the SOH of batteries. Even though these methods provide satisfactory results in terms of battery life-time prediction, they often require data corresponding to at least 25 % aging in order to accurately estimate the target value. Due to the non-linear and complex degradation process of Li-ion batteries, precisely estimating battery life-time at early cycles, where the battery has largely yet to exhibit capacity degradation, is more challenging. This paper offers two hybrid models combining a linear support vector regression (LSVR) and a Gaussian process regression (GPR) for battery cycle-life prediction using data from only the first 100 cycles in a data set [32] of 124 cells with lifetimes ranging from 150 to 2300 cycles. The paper is organized as follows: In section 2, a comprehensive mathematical description of the proposed hybrid data-driven model is given. In section 3, the methodologies, including the data description, the data pre-processing, the model development, and the model assessment methods, are reviewed.
Section 4 shows the results of the battery cycle-life prediction and compares them to published data. [32] The paper is concluded in section 5.

Regression

Supervised learning can be applied to two different types of problems: regression and classification. While the regression approach tries to capture the behavior of the system, classification tries to group and classify the system behavior into different subsystems. [33] In principle, any regression problem can be modeled as

$$y = f(x) + \epsilon,$$

where $f(x)$ represents a hidden function of the input vector $x$ and $\epsilon \sim \mathcal{N}(0, \sigma_n^2)$ is independent and identically distributed Gaussian noise with zero mean and variance $\sigma_n^2$ affecting the observation $y$.

Linear Support Vector Regression

For a given training data set $D$ of $n$ observations, $D = \{(x_i, y_i),\ i = 1, 2, \ldots, n\}$, where $x_i \in \mathbb{R}^d$ represents a $d$-dimensional input feature vector, $y_i$ represents a scalar target value, and $n$ denotes the number of samples in the training set, Support Vector Regression (SVR) finds a $d$-dimensional coefficient vector $w \in \mathbb{R}^d$ and an intercept coefficient $b \in \mathbb{R}$ such that the prediction $w^T \phi(x_i) + b$ is close to the target value $y_i$. Here, the target value is the battery cycle life, and $x_i$ represents the vector of input features for battery sample $i$. The linear SVR solves the following primal problem: [34]

$$\min_{w,\,b,\,\zeta,\,\zeta^*} \ \frac{1}{2} w^T w + C \sum_{i=1}^{n} (\zeta_i + \zeta_i^*)$$

subject to $y_i - (w^T \phi(x_i) + b) \le \epsilon + \zeta_i$, $(w^T \phi(x_i) + b) - y_i \le \epsilon + \zeta_i^*$, and $\zeta_i, \zeta_i^* \ge 0$, where the epsilon-insensitive loss, which ignores errors smaller than $\epsilon$, is used and $C > 0$ is the regularization term. The dual problem is formulated as: [35]

$$\min_{\alpha,\,\alpha^*} \ \frac{1}{2} (\alpha - \alpha^*)^T Q (\alpha - \alpha^*) + \epsilon\, e^T (\alpha + \alpha^*) - y^T (\alpha - \alpha^*)$$

subject to $e^T(\alpha - \alpha^*) = 0$ and $0 \le \alpha_i, \alpha_i^* \le C$, $i = 1, \ldots, n$, where $e$ is a vector of ones and $Q \in \mathbb{R}^{n \times n}$ is a matrix with $Q_{ij} = \phi(x_i)^T \phi(x_j)$. Finally, once the optimization problem is solved, the target value is predicted as

$$f(x) = \sum_{i \in \mathrm{SV}} (\alpha_i - \alpha_i^*)\, \phi(x_i)^T \phi(x) + b,$$

where only the support vectors (SV), i.e. samples with nonzero dual coefficients, contribute.

Gaussian Process Regression

Gaussian Process Regression (GPR) is a non-parametric machine learning methodology. Unlike other supervised machine learning algorithms that estimate the parameters of a specific function, GPR considers all likely functions that fit the observed data. This approach uses a Bayesian framework for prediction by collecting prior knowledge and deriving a posterior probability hypothesis. A GPR is typically defined by two key functions, the mean function $m(x)$ and the covariance function $k(x, x')$, which are defined as

$$m(x) = \mathbb{E}[f(x)], \qquad k(x, x') = \mathbb{E}[(f(x) - m(x))(f(x') - m(x'))].$$

By choosing the mean and covariance functions, one can write the Gaussian process as: [33]

$$f(x) \sim \mathcal{GP}(m(x), k(x, x')). \quad (6)$$

Furthermore, by summing the target value and noise distributions, one can simply add independently, identically distributed (i.i.d.) Gaussian noise, $\epsilon \sim \mathcal{N}(0, \sigma_n^2)$, to the target value as

$$y = f(x) + \epsilon.$$

In supervised learning, locations with comparable observation values $x_i$ are expected to have similar response (target) values $y_i$. In GPR, this similarity is reflected by the covariance function, which determines how responses at one site $x_i$ are influenced by responses at other sites $x_j$, $x_i \ne x_j$, $i = 1, 2, \ldots, n$. Various kernel functions, with one or several hyper-parameters, can be used to define the covariance function $k(x_i, x_j)$; the covariance function can thus be written as $k(x_i, x_j \mid \theta)$. For many conventional kernel functions, the kernel variance $\sigma_f$ and the characteristic length scale $\sigma_l$ are two common hyper-parameters.
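To make the roles of the two hyper-parameters concrete, here is a minimal sketch (ours, not the authors' code) of a squared-exponential covariance evaluated on a one-dimensional grid; $\sigma_f$ scales the overall variance, while $\sigma_l$ sets how quickly correlations decay with input distance:

```python
import numpy as np

def sq_exp_cov(xa, xb, sigma_f=1.0, sigma_l=1.0):
    """Squared-exponential covariance k(x, x') = sigma_f^2 * exp(-|x - x'|^2 / (2 sigma_l^2))."""
    d2 = (xa[:, None] - xb[None, :]) ** 2
    return sigma_f**2 * np.exp(-0.5 * d2 / sigma_l**2)

x = np.linspace(0.0, 5.0, 6)
K_short = sq_exp_cov(x, x, sigma_l=0.5)   # correlations die off quickly
K_long = sq_exp_cov(x, x, sigma_l=5.0)    # distant inputs stay correlated
print(np.round(K_short[0], 3))
print(np.round(K_long[0], 3))
```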
The characteristic length scale describes how far apart the input values $x_i$ can be before the response values become uncorrelated. For any collection of input features $X = [x_1, x_2, \ldots, x_n]$, the GPR defines a jointly Gaussian probability distribution $p(f(x_1), f(x_2), \ldots, f(x_n))$. Therefore, under the GPR prior, the collection of training points and test points are jointly multivariate Gaussian, with zero mean, distributed as

$$\begin{bmatrix} y \\ f_* \end{bmatrix} \sim \mathcal{N}\!\left(0,\ \begin{bmatrix} K(X, X) + \sigma_n^2 I & K(X, X_*) \\ K(X_*, X) & K(X_*, X_*) \end{bmatrix}\right).$$

Given the number of training samples $n$ and the number of test samples $n_*$, $K(X, X_*)$ denotes the $n \times n_*$ matrix of covariances computed for all pairs of training and test points, and similarly for the other entries $K(X, X)$, $K(X_*, X)$, and $K(X_*, X_*)$. To improve the GPR's performance, the hyperparameters of the covariance function must be tuned. This can be achieved by maximizing the log marginal likelihood, defined as

$$\log p(y \mid X) = -\tfrac{1}{2}\, y^T (K + \sigma_n^2 I)^{-1} y - \tfrac{1}{2} \log \lvert K + \sigma_n^2 I \rvert - \tfrac{n}{2} \log 2\pi,$$

where $-\tfrac{1}{2} y^T (K + \sigma_n^2 I)^{-1} y$ is the data-fit term, $-\tfrac{1}{2} \log \lvert K + \sigma_n^2 I \rvert$ is the complexity penalty term, and $-\tfrac{n}{2} \log 2\pi$ is the normalizing constant. One can obtain the posterior distribution by restricting the joint prior distribution to the functions that fit the observed data points. Subsequently, predictions at test points can be made by computing the conditional distribution (see e. g. [33]):

$$p(f_* \mid X, y, X_*) \sim \mathcal{N}(\bar{f}_*, \mathrm{cov}(f_*)), \quad (10)$$

where

$$\bar{f}_* = K(X_*, X)\,[K(X, X) + \sigma_n^2 I]^{-1} y, \quad (11)$$

$$\mathrm{cov}(f_*) = K(X_*, X_*) - K(X_*, X)\,[K(X, X) + \sigma_n^2 I]^{-1} K(X, X_*).$$

Methodologies

The major purpose of this study is to predict Li-ion battery cycle life at an early stage of battery usage. More specifically, we hypothesize that merging the LSVR and GPR models can yield better results than the state-of-the-art methodology, [32] while still using the same data. Figure 1 depicts the procedure and steps for estimating cycle life, which include data description, data pre-processing, feature selection, and model development, all of which are covered in detail in the following subsections.

Data Description

Reis et al. [36] reviewed over 30 datasets associated with Li-ion batteries. The MIT data set [32] consisting of cycling data for 124 LFP/graphite cells (A123 Systems, model APR18650M1A, 1.1 Ah nominal capacity) was used in this work. All cells were charged using a variety of multi-step fast-charging methodologies, then discharged at a constant current. For all cycles, the ambient temperature was fixed at 30 °C. Continuous data including voltage, current, battery temperature, and internal resistance were collected as the battery cells were cycled to end of life (EOL), defined as 80 % of their initial capacity. The cycle-life histogram for the 124 cell samples, ranging from 150 to 2300 cycles, is shown in Figure 2.

Data Pre-Processing

In ML applications, data pre-processing is critical for improving data quality and prediction accuracy. Generally, it includes removing outliers, filling missing values, time-domain synchronization, and normalization. [37] In this context, some battery samples from noisy channels as well as some batteries that did not reach 80 % capacity were removed. Two samples with outliers were noticed in the capacity-fade curve for the first 100 cycles. The detected outliers were removed, and the missing data were then filled in using interpolated values.
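This cleaning step can be sketched as follows; the code is illustrative only, with synthetic capacity curves and hypothetical column names rather than the MIT data set:

```python
import numpy as np
import pandas as pd

# Placeholder capacity-fade table: one column per cell, one row per cycle.
rng = np.random.default_rng(5)
cap = pd.DataFrame(1.1 - 0.0005 * np.arange(100)[:, None]
                   + 0.001 * rng.normal(size=(100, 4)),
                   columns=["cell_a", "cell_b", "cell_c", "cell_d"])
cap.iloc[40, 2] = 0.2          # inject an outlier for illustration

# Flag points that deviate strongly from a rolling median, then interpolate.
med = cap.rolling(window=9, center=True, min_periods=1).median()
outlier = (cap - med).abs() > 0.05
clean = cap.mask(outlier).interpolate(limit_direction="both")
```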
Finally, the whole data set was normalized using the z-score normalization method [38] as

$$Z = \frac{x - \mu}{\sigma},$$

where $Z$ is the standard score, $x$ is the observed value, $\mu$ is the sample mean, and $\sigma$ is the sample standard deviation.

Feature Selection

Normally, machine learning applications contain plenty of input features in the dataset. While some of these features might have good predictive strength, the presence of non-informative features can add uncertainty to the predictions. Therefore, when it comes to creating a machine learning model, feature selection is crucial to minimize the number of input variables, to lower the computational cost of modeling, and to increase the model's performance. The two fundamental types of feature selection approaches are supervised and unsupervised procedures. The distinction is whether or not the features are chosen based on the target variable. Unsupervised feature selection strategies, such as those that remove redundant variables using correlation, disregard the target variable. Approaches that use the target variable, such as methods that eliminate irrelevant variables, are supervised feature-selection techniques. In this section, an unsupervised method was used to remove redundant features. Features with high correlation have approximately the same influence on the observed output. Therefore, when two features have a high correlation, one of them might be dropped without losing relevant information for predicting the output of interest. Before eliminating redundant features, additional features were added to those developed by Severson et al. [32] All features, with their respective definitions, are listed in Table 1. The features are derived as follows: [32]

$$\Delta Q(V) = Q_{100}(V) - Q_{10}(V), \qquad \Delta Q(V) \in \mathbb{R}^p,$$

$$\Delta T(V) = T_{100}(V) - T_{10}(V), \qquad \Delta T(V) \in \mathbb{R}^p,$$

with summary statistics taken over the $p$ voltage points, e.g. the standard deviation

$$\sigma_{\Delta Q} = \sqrt{\frac{1}{p} \sum_{k=1}^{p} \left( \Delta Q(V_k) - \overline{\Delta Q} \right)^2}.$$

The curve-fit features are obtained from the linear model $q = N b$, where $m$ is the number of cycles in the prediction, $q \in \mathbb{R}^m$ is a vector of discharge capacities as a function of the cycle number, $N \in \mathbb{R}^{m \times 2}$ is a matrix with the first column containing the cycle numbers and the second column containing a vector of ones, and $b \in \mathbb{R}^2$ is a coefficient vector.

Figure 3 shows the correlation heat-map including all features. To remove redundant input variables, columns with a correlation greater than 0.9 were dropped. As a result, six of the twenty-six features were removed.

Model Development

In this section, a comprehensive data-driven model was employed to predict battery cycle life before more severe capacity degradation occurs. To this end, two hybrid models combining an LSVR and a GPR model were developed. While the LSVR model was used to forecast battery cycle life, the GPR model was used to model the cycle-life residual, which is defined as the difference between the real cycle life and the LSVR model's predicted cycle life. Severson et al. [32] utilized a linear model and used the lasso and elastic net techniques for regularization to avoid over-fitting. They used four-fold cross-validation and Monte Carlo sampling for tuning hyper-parameters. Because recreating the same results would be difficult, the LSVR model, which employs the linear kernel, is used in this study. The GPR model was tested in the form of two different models: model A and model B. As illustrated in Figure 1, the final predictions were obtained by adding the LSVR model's predicted cycle life and the GPR model's predicted cycle-life residual.
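A minimal sketch of this two-stage combination, using scikit-learn on synthetic placeholder data (the actual features, kernel choices, and tuning are described in the following sections):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVR
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(4)
X = rng.normal(size=(124, 5))                       # placeholder features
y = 800 + 300 * X[:, 0] + 50 * np.sin(3 * X[:, 1]) \
    + rng.normal(scale=30, size=124)                # placeholder cycle lives

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Step 1: the LSVR predicts the cycle life.
lsvr = LinearSVR(max_iter=10000).fit(X_tr, y_tr)
# Step 2: a GPR models the LSVR residual (here with a Matern nu=1/2 kernel,
# i.e. the exponential kernel; the paper tests several kernels).
resid = y_tr - lsvr.predict(X_tr)
gpr = GaussianProcessRegressor(kernel=Matern(nu=0.5), alpha=1e-2,
                               normalize_y=True).fit(X_tr, resid)
# Final hybrid prediction = LSVR prediction + predicted residual.
y_hat = lsvr.predict(X_te) + gpr.predict(X_te)

rmse = np.sqrt(np.mean((y_te - y_hat) ** 2))
pct_err = 100 * np.mean(np.abs(y_te - y_hat) / y_te)
print(rmse, pct_err)
```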
The final models are therefore called hybrid model A and hybrid model B. It is worth noting that this design is theoretically equivalent to setting the LSVR model as the mean function of the GPR model. In the feature-selection section above, an unsupervised strategy was used to remove redundant features. In this section, the filter feature selection method was used to select the most relevant features. The filter-based feature selection method is a supervised method which uses statistical techniques to assess the relevance of features to the target variable outside of the predictive models. [39] The absolute-valued Pearson correlation coefficient, the most commonly used ranking criterion in filter methods, was employed to select the features most strongly correlated with the target values. It quantifies the linear relationship between a feature $x$ and the target $y$ as

$$r = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2}\ \sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^2}},$$

where $x_i$ and $y_i$ denote the i-th sample of the feature $x$ and the target $y$, and $\bar{x}$ and $\bar{y}$ are the independent and dependent sample means, respectively. Figure 4 shows the computed Pearson coefficients between the remaining features and the cycle-life value. A threshold of 0.5 was utilized to filter the relevant features to be used as input variables in the LSVR model, leading to the final choice of x var, x mean dVdQ, x minT, x mean dQdV, as well as x IR2. Learning the parameters of a prediction function and testing it on the same data set is a fundamental error that can result in over-fitting. In machine learning applications, the common practice is to divide the entire data set into three parts, i.e. training, cross-validation, and testing, e.g. 60 : 20 : 20. The basic idea of cross-validation is to split the training set into two disjoint sets, one which is actually used for training, and the other, the validation set, which is used to monitor the performance of the trained model. The answer to the question of the optimal number of folds is based more on experimental than on theoretical studies. One approach would be to choose so-called leave-one-out cross-validation (LOO-CV), i.e. the extreme case of k-fold cross-validation obtained for k = n, the number of training cases. This approach can be computationally heavy, however, so typical values for k are often in the range 3 to 10. In this work, an 80/20 training/test split of the data set was used. Furthermore, the training set was split into 5 smaller subsets, meaning that 5-fold cross-validation was performed. Figure 5 depicts the procedure for k-fold cross-validation, in which a model is trained using k-1 of the folds as training data and the resulting model is validated on the remaining data. After fitting the model using the training data and thereafter cross-validating it, the model was evaluated using the test set. We evaluated cross-validation with different numbers of folds (k = 1, 2, ..., 5), with the results showing that our choice of 5-fold cross-validation had the lowest error; a minimal sketch of this filtering and validation loop is given below.

Model A

It is worth noting that the covariance function must be carefully chosen or built, since it determines the GPR's functionality.
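Before turning to the kernel choice, here is the promised sketch of the filter-based selection and 5-fold cross-validation (synthetic data; the real feature names are those of Table 1):

```python
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.svm import LinearSVR

rng = np.random.default_rng(2)
X = rng.normal(size=(124, 20))                       # placeholder features
y = 300 * X[:, 0] + rng.normal(scale=50, size=124) + 800

# Filter selection: keep features with |Pearson r| to the target >= 0.5.
r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
X_sel = X[:, np.abs(r) >= 0.5]

# 5-fold cross-validation of the LSVR on the selected features.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LinearSVR(max_iter=10000), X_sel, y,
                         cv=cv, scoring="neg_root_mean_squared_error")
print(-scores.mean())   # average validation RMSE across the 5 folds
```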
As discussed earlier, the covariance function determines how responses at one site $x_i$ are influenced by responses at other sites $x_j$, $x_i \ne x_j$, $i = 1, 2, \ldots, n$. In model A, relevant features for the cycle-life residual were first filtered using the Pearson correlation coefficient. The Pearson coefficients vary from 0.0079 to 0.43, as shown in Figure 6. As a result, a 0.25 threshold was set to filter the relevant features, and five features were chosen to be used in model A. Then, five different isotropic kernel functions, i.e. kernels with a single length-scale hyperparameter shared across all features (see the GPR section above), were used in the GPR model. The isotropic squared exponential (radial basis function, RBF) kernel is one of the most commonly used covariance functions and is defined as

$$k(x_i, x_j) = \sigma_f^2 \exp\!\left( -\frac{\lVert x_i - x_j \rVert^2}{2 \sigma_l^2} \right),$$

where $\sigma_l$ is the characteristic length scale and $\sigma_f$ is the signal standard deviation. The isotropic Matern 3/2 kernel is defined, with $r = \lVert x_i - x_j \rVert$, by

$$k(x_i, x_j) = \sigma_f^2 \left( 1 + \frac{\sqrt{3}\, r}{\sigma_l} \right) \exp\!\left( -\frac{\sqrt{3}\, r}{\sigma_l} \right).$$

Results and Discussion

Section 3.4 covered the design of the developed hybrid data-driven models. The major point of interest in this study has been to improve the accuracy of the predicted remaining useful life of the studied batteries. Different statistical and data-driven models were examined as described in section 3. The GPR model was used to forecast the cycle-life residuals obtained after subtracting the cycle life predicted by the LSVR model from the observed cycle-life values. The hybrid models were developed in two forms: hybrid model A and hybrid model B. The key differences between them are the method of input feature selection and the type of kernels used in the covariance matrix in each case. Figure 7 shows the distribution of the cycle-life residuals across all battery samples. The goal here is to use the GPR model to estimate the cycle-life residual for each of the samples. To this end, a GPR model with alternative kernel functions was examined, as described in section 3.4. Although the squared exponential (SE) kernel function is powerful for machine learning applications, one drawback is the smoothness of the predicted model, which can exclude specific behaviors in the studied data. Here, the Matern class of covariance functions, with or without ARD (Automatic Relevance Determination), can be of use. This class of kernel functions uses Bessel functions and an additional positive hyperparameter. This smoothness parameter is chosen such that, for an infinitely large value, the kernel converges to the ordinary SE covariance function. Thus, there is a trade-off between smoothness and the required roughness when choosing the right value: low values (e.g. ν = 1/2) would be too rough, whereas high values (e.g. ν = 7/2) would be too smooth. The results provided in Table 2 clearly indicate this fact. Table 2 lists the prediction accuracy of hybrid model A using the RBF, Matern 3/2, Matern 5/2, rational quadratic, and exponential kernels. Despite the fact that the exponential kernel had the highest RMSE for the training set among all the kernels, it was chosen to represent model A since it had the lowest RMSE and %err for the test set. Hybrid model A has the advantage of keeping the %err for both the training and test sets below 10 %, regardless of the kernel function used in the GPR model. Similarly, Table 3 lists the performance of hybrid model B with five different ARD kernels. Using all ARD kernels in the GPR model, hybrid model B, like hybrid model A, is capable of keeping the %err below 10 %.
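The kernel comparison of Tables 2 and 3 can be reproduced in spirit with scikit-learn's built-in kernels; note that the exponential kernel corresponds to a Matern kernel with ν = 1/2. The data below are placeholders, and %err is computed here simply as the mean relative error (the paper's exact metric definitions may differ):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern, RationalQuadratic

rng = np.random.default_rng(6)
X = rng.normal(size=(124, 5))
y = 800 + 300 * np.tanh(X[:, 0]) + rng.normal(scale=30, size=124)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

kernels = {
    "RBF": RBF(),
    "Matern 3/2": Matern(nu=1.5),
    "Matern 5/2": Matern(nu=2.5),
    "Rational quadratic": RationalQuadratic(),
    "Exponential": Matern(nu=0.5),   # exponential = Matern with nu = 1/2
}
for name, kern in kernels.items():
    gpr = GaussianProcessRegressor(kernel=kern, alpha=1.0, normalize_y=True)
    gpr.fit(X_tr, y_tr)
    pred = gpr.predict(X_te)
    rmse = np.sqrt(np.mean((y_te - pred) ** 2))
    pct = 100 * np.mean(np.abs(y_te - pred) / y_te)
    print(f"{name:>20s}: RMSE = {rmse:6.1f}, %err = {pct:4.1f}")
```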
Among these, the model using the exponential kernel has the best performance, with RMSE values of 16.6 and 152, and %err values of 1.4 and 8.2, for the training and test sets, respectively. The final forms of hybrid models A and B are thus those with the exponential kernel in the GPR model. The predicted versus real cycle lives for the LSVR model, hybrid model A, and hybrid model B are depicted in Figure 8, with the blue points representing training samples and the red points representing test points. The more linear the distribution, the higher the prediction performance. The hybrid models are clearly more linearly distributed, implying that their predicted cycle lives are closer to the real values. The prediction performance of the LSVR model, hybrid model A, and hybrid model B was thereafter evaluated. The models were tested using five different kernels, and the best results were chosen and compared with those of Severson et al. [32] Two metrics, the RMSE and %err, were used to evaluate the prediction performance of the models. Table 4 benchmarks the current work against the linear model developed by Severson et al., [32] who developed three separate models, the "Variance", the "Discharge", and the "Full" model, based on feature types selected from different subgroups, and predicted and classified cells by cycle life. They reported their results in two ways (including and excluding an outlier sample that reached the end of life before cycle 100) for two test sets: test 1 and test 2. They obtained high error values for the entire training, test 1, and test 2 sets using the "Variance" model, with RMSE values greater than 100 and correspondingly high %err values. The results of the present models are listed in Table 4, both with and without the added input features. Without the added input features, the LSVR model shows comparable %err values for both the training (12.2 %) and test (12.6 %) sets. However, when comparing the LSVR model to the hybrid models A and B, the latter perform better, especially on the training data. With the new input features added in this study, hybrid model A outperforms all other models in terms of the RMSE (13.8) and %err (1.1 %) for the training set, while hybrid model B, with an RMSE and %err of 152 and 8.2, showed the best performance for the test data. Both models offer two key advantages over the other models: the first is that they keep the %err below 10 % for both the training and test sets, and the second is that the metrics of the training and test sets are not drastically different. All of the computations were done on a personal computer (Intel(R) Core(TM) i9-10885H CPU @ 2.40 GHz). It is worth mentioning that loading the data takes the longest time: the LSVR model takes 0.29 seconds, while the hybrid models A and B with exponential kernels take 8 and 11 seconds to run, respectively.

Conclusion and Future Work

Battery lifetime prediction at an early stage of cycling is critical for safe operation, considering the rapid technology development and the need for accurate state of health (SOH) monitoring in EV applications. Most data-driven models described in the literature need data covering at least 25 % of the aging process in order to properly predict battery lifetime. In this paper, a hybrid data-driven model combining the LSVR and GPR is proposed to effectively predict battery cycle life using data from only the first 100 cycles.
Although the presented approach has shown the inherent potential of data-driven methods for describing and predicting complex physical processes such as the estimation of Li-ion battery cycle life, the data-hungry nature of these methods still calls for further research in the field. A smart combination of a physical reduced-order model (ROM), with fewer parameters to identify, together with real as well as synthetic data would be one possible track for future work.
Protection against UVB-Induced Photoaging by Nypa fruticans via Inhibition of MAPK/AP-1/MMP-1 Signaling

Ultraviolet B (UVB) irradiation is a major causative factor in skin aging. The aim of the present study was to investigate the protective effect of a 50% ethanol extract of Nypa fruticans (NF50E) against UVB-induced skin aging. The results indicated that NF50E exerted potent antioxidant activity (IC50 = 17.55 ± 1.63 and 10.78 ± 0.63 μg/mL for DPPH and ABTS radical scavenging activity, respectively) in a dose-dependent manner. High-performance liquid chromatography revealed that pengxianencin A, protocatechuic acid, catechin, chlorogenic acid, epicatechin, and kaempferol were components of the extract. In addition, the extract exhibited elastase inhibitory activity (IC50 = 17.96 ± 0.39 μg/mL). NF50E protected against UVB-induced HaCaT cell death and strongly suppressed UVB-stimulated cellular reactive oxygen species generation without cellular toxicity. Moreover, topical application of NF50E mitigated UVB-induced photoaging lesions, including skin erythema and increased skin thickness, in BALB/c mice. NF50E treatment inhibited UVB-induced collagen degradation as well as MMP-1 and IL-1β expression and significantly stimulated SIRT1 expression. Furthermore, the extract treatment markedly suppressed the activation of NF-κB and AP-1 (p-c-Jun) by deactivating the p38 and JNK proteins. Taken together, the current data suggest that NF50E exhibits potent antioxidant potential and protects against photoaging by attenuating MMP-1 activity and collagen degradation, possibly through the downregulation of MAPK/NF-κB/AP-1 signaling and SIRT1 activation.

Introduction

The skin protects against pathogens and external damage and acts as a crucial barrier between the internal and external environments of the body. Exposure to chronic ultraviolet (UV) irradiation can lead to adverse pathological effects including skin damage [1]. UV-induced photoaging, which is characterized by modifications of the dermal extracellular matrix (ECM), leads to the development of wrinkles, fragility, laxity, coarseness, impaired wound healing, and increased epidermal thickness [2]. Furthermore, excessive ultraviolet B (UVB) irradiation causes the generation of intracellular reactive oxygen species (ROS). This results in oxidative stress and skin inflammation through the activation of mitogen-activated protein kinase (MAPK) and the upregulation of transcription factors such as activator protein 1 (AP-1) and nuclear factor kappa B (NF-κB) [3,4]. In addition, UVB-stimulated ROS can enhance the expression of matrix metalloproteinase-1 (MMP-1) in fibroblasts, promoting skin photoaging [5]. MMP-1 degrades collagen type 1, a major ECM component that provides structural support to the skin, and leads to the decomposition of the dermis and skin aging [6]. Therefore, the development of antiaging agents that inhibit UVB-induced ROS generation is essential for suppressing the photoaging process. SIRT1, a NAD-dependent class III histone deacetylase, plays a vital role in lifespan extension and aging suppression and is regarded as a "longevity protein" [7]. Recent studies have demonstrated that an age-related reduction in SIRT1 levels may be associated with aging biomarkers found in dermal fibroblast cells, which are required for the production of ECM in the skin [8]. A recent study found that SIRT1 could decrease β-galactosidase and senescence biomarkers and attenuate the aging of skin lesions [9]. Nypa fruticans Wurmb.
belongs to the family Arecaceae and is regarded as an "underutilized" plant [10]. Nypa fruticans (NF) is predominantly distributed throughout India, Malaysia, Indonesia, and the Philippines and has been traditionally used for the medicinal treatment of conditions such as asthma, leprosy, rheumatism, and pain [10]. NF has been reported to exert various biological activities, including antihyperglycemic, antinociceptive, antidiabetic, and antioxidant effects [11,12]. However, there are no reports regarding the protective effect of NF against photoaging. Accordingly, based on the known effects of NF, this study is aimed at investigating the potential protective effects of NF against UVB-induced skin aging in vitro and in vivo to develop novel, naturally sourced antiphotoaging agents.

Preparation of Plant Extract. NF was obtained from an online market specializing in agricultural and marine products. NF was dried at 37°C using a dryer (Sanyo convection oven, Osaka, Japan) and ground into a fine powder (Figures 1(a) and 1(b)). Then, a 10-fold volume of ethanol (50%, v/v) was added to the sample, which was placed in a shaking incubator for 24 h at 60°C. The 50% ethanolic extract of NF (NF50E) was filtered (Whatman No. 1; Schleicher & Schuell, Keene, NH, USA) and concentrated using a vacuum rotary evaporator (Tokyo Rikakikai Co. Ltd., Tokyo, Japan). Subsequently, the sample was lyophilized using a freeze dryer (Il-shin Biobase, Goyang, Korea) and stored at 4°C. The extract was dissolved in dimethyl sulfoxide (DMSO) or distilled water for experimental use.

High-Performance Liquid Chromatography (HPLC) Analysis and Mass Spectrometry. The phytochemical characteristics of NF50E were identified by HPLC using a Shimadzu Prominence Auto Sampler (SIL-20A) HPLC system (Shimadzu, Kyoto, Japan) equipped with an SPD-M20A diode array detector (PDA) and LC solution 1.22 SP1 software. Protocatechuic acid, chlorogenic acid, catechin, epicatechin, and kaempferol were used as standard compounds. Reverse-phase chromatographic analysis was performed using a Phenomenex C18 column (4.6 mm × 250 mm) packed with 5 μm diameter particles. A stepwise gradient of solvent A to solvent B was used (A: 2% acetic acid; B: 50% acetonitrile (ACN) in 0.5% acetic acid). The flow rate was 0.8 mL/min, and the injection volume was 10 μL.

Figure 1(d): Protocatechuic acid (peak 1), catechin (peak 2), chlorogenic acid (peak 3), epicatechin (peak 4), and kaempferol (peak 5) were detected as major components of the 50% ethanolic fraction of Nypa fruticans (NF50E) by high-performance liquid chromatography (HPLC). The HPLC chromatogram was recorded at 280 nm along with the standard compounds. The dotted box denotes the major peak, which was identified as pengxianencin A by mass spectrometry (molecular structure of pengxianencin A).

A Q-Exactive™ Quadrupole-Orbitrap™ mass spectrometer (Thermo Fisher Scientific Inc., Rockford, IL, USA) was used to perform the mass experiments. The settings of the IT mass spectrometer were as follows: ESI voltage +4 kV, nebulization with N2 at 1.7 bar, dry gas flow 7 L/min, gas temperature 310°C, skimmer 1 voltage +12.4 V, collision energy set to 1 V and ramped within 40%-200% of this value. The number of ions accumulated within the trap was set to 10,000, and the maximum accumulation time was 200 ms. To determine the key diagnostic product ions over the full range, the product ion spectrum was recorded in targeted mode for the mass range m/z 50-1500. 2.3. Antioxidant Assays.
NF50E was selected for measurement of total phenolic content (TPC) and total flavonoid content (TFC). The analysis of TPC was performed using the Folin-Ciocalteu reagent [13]. Folin-Ciocalteu reagent was added to a distilled-water-diluted sample at a 1 : 10 ratio, and 11 mL of the resulting solution was stored at 25°C. Then 2 mL of 20% Na2CO3 solution was added and incubated for 1 h, and the absorbance of the mixture was measured at 595 nm. The TFC was measured according to a previously reported method [14]. Potassium acetate solution (0.1 mL of 0.1% (v/v)) and 0.1 mL of 10% (w/v) AlCl3 were mixed with 2.8 mL of distilled water; a 0.5 mL sample was diluted with 1.5 mL of methanol, and the two solutions were combined. The mixtures were kept at room temperature for 30 min, and the absorbance was measured at 405 nm. The TPC and TFC results were expressed as mg of gallic acid equivalents (GAE) or catechin equivalents (CAE) per 100 mg of extract, respectively, as described elsewhere [13]. 2,2-Diphenyl-1-picrylhydrazyl (DPPH) and 2,2′-azinobis(3-ethylbenzothiazoline-6-sulphonic acid) (ABTS) radical scavenging assays, the ferric reducing antioxidant power (FRAP) assay, and the cupric reducing antioxidant capacity (CUPRAC) assay were conducted to evaluate the hydrogen- and electron-donating capacity of NF50E. We also confirmed the cell-free antioxidant activity of NF50E as described previously [13].

Cell Culture and Cell Viability Assay. HaCaT immortalized human keratinocytes were purchased from AddexBio Technologies (San Diego, CA, USA). The cells were maintained in DMEM supplemented with 10% fetal bovine serum and 1% penicillin-streptomycin at 37°C in a 5% CO2 humidified atmosphere. A 3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-2H-tetrazolium bromide (MTT) assay was performed to assess cell viability according to a previously described method [15]. Briefly, HaCaT cells were cultured at a density of 1 × 10⁵ cells/mL in 96-well plates and incubated at 37°C for 24 h in a CO2 incubator. When the cells were 80-90% confluent, various concentrations of NF50E (1, 3, 10, 30, and 100 μg/mL) were added, followed by further incubation for 24 h. The medium was replaced with MTT solution (5 mg/mL in phosphate-buffered saline; PBS), and the cells were incubated for an additional 1 h. Subsequently, the MTT solution was removed and 100 μL of DMSO was added to dissolve the formazan crystals. The optical density was measured using a microplate reader at 595 nm (Victor3; PerkinElmer, Waltham, MA, USA). For UVB irradiation experiments, HaCaT cells (1 × 10⁵ cells/mL) were seeded in 96-well plates and incubated at 37°C for 24 h in a CO2 incubator. The cells were then treated with different concentrations of NF50E for an additional 24 h. The media were discarded, 100 μL of PBS was added, and the cells were exposed to UVB (30 mJ/cm²) radiation using a UV lamp (Bio-Link Crosslinker; Vilber Lourmat, Cedex, France). The overall average dose of UVB radiation exposure was set at 8.01 mJ/cm²/d according to a previous report [16]. In this experiment, we used UVB radiation at 30 mJ/cm², which is equivalent to about 4 days of sun exposure. The medium was then replaced with fresh medium, and predetermined concentrations of NF50E (1-100 μg/mL) were added. After 24 h of further incubation, cell viability was measured by MTT assay. 2.6. Measurement of Intracellular ROS. The redox-sensitive dye H2DCFDA was used to measure the production of intracellular ROS. HaCaT cells were seeded (1 × 10⁵ cells/mL) in 96-well black clear-bottom plates for 24 h.
Different concentrations of NF50E were added to the cells along with 25 μM DCFH-DA for 1 h. After washing with 100 μL of PBS, the cells were exposed to UVB (30 mJ/cm²) radiation. After 30 min, the fluorescence intensity was measured using a microplate reader (Victor3; PerkinElmer, Waltham, MA, USA) at excitation and emission wavelengths of 485 and 535 nm, respectively.

UVB-Induced Experimental Mouse Model. BALB/c mice (20-22 g) aged 7 weeks were obtained from Samtako Korea (Osan, Korea). The mice were housed in a temperature- and humidity-controlled room (22 ± 1°C, 55 ± 1%) under a 12 h dark/light cycle with free access to a commercial diet and water. The study was approved by the Committee on Laboratory Animal Ethics (KNU 2017-0029), Kyungpook National University (Daegu, Korea). The mice were divided into five groups of five mice each as follows: UV(−)+Vehicle (G1), UV(+)+Vehicle (G2), UV(+)+EGCG (10 mg/mL) (G3), UV(+)+NF50E at a concentration of 10 mg/mL (G4), and UV(+)+NF50E at a concentration of 50 mg/mL (G5). The dorsal skin of each mouse was shaved using a hair trimmer and hair removal cream. Each mouse was treated with 150 μL of sample solution, followed by exposure to UVB radiation. The UV intensity was gradually increased from 1 MED to 4 MED using the experimental schedule described in Supplementary Figure S1. In a preliminary experiment, UVB radiation of 75 to 300 mJ/cm² for 25 days was sufficient to induce acute skin inflammation in the mice. Saline and 1,3-butylene glycol at a 3 : 7 volume ratio were used as vehicles. Skin appearance was evaluated by visual observation, and photographs were taken using a Nikon camera (D5100; Nikon, Tokyo, Japan). The skin thickness and level of erythema were measured using a Digimatic thickness gauge (Code No. 547-315; Mitutoyo, Kanagawa, Japan) and a colorimeter (CR-400; Minolta, Tokyo, Japan), respectively, the latter providing the Δa* value and skin erythema index [17].

Histochemical and Immunohistochemical Analyses. After the sacrifice of all mice, tissue samples were obtained from the dorsal skin. The dorsal skin tissues were fixed in 10% formaldehyde solution in PBS for 24 h and embedded in paraffin. Slices were cut at 5 μm thickness, and the sections were deparaffinized prior to soaking in acetone and washing with PBS. The slides were treated with 3% hydrogen peroxide in methanol to block peroxidase activity, and epitope retrieval was conducted. Subsequently, the samples were incubated with 10% normal goat serum for 1 h. SIRT1 (ab166821; Abcam), MMP-1 (ab137332; Abcam), and IL-1β (ab9722; Abcam) antibodies were used as primary antibodies and incubated with the sections overnight. Hematoxylin and eosin (H&E) staining and Masson's trichrome staining were performed to examine the skin thickness and the collagen content in the dermis, respectively. Stained slides were visualized by microscopy (ECLIPSE TE2000-U; Nikon, Tokyo, Japan).

RNA Isolation and Reverse Transcription-Polymerase Chain Reaction (RT-PCR). Total RNA from the mouse dorsal skin samples was isolated using TRIzol reagent (Life Technologies; Carlsbad, CA, USA) according to the protocol described elsewhere [13]. Equal amounts of RNA (2 μg) were used as templates for the synthesis of cDNA using the RT-&GO Master Mix (MP Biomedicals, Santa Ana, CA, USA). The amplified products were electrophoresed on 1% agarose gels, stained with ethidium bromide, and visualized using Image Lab software (ChemiDoc). GAPDH was used for normalization. 2.11.
Statistical Analysis. The results are presented as the mean ± standard deviation (SD) of triplicate values. Statistical differences between the mean values were determined by one-way ANOVA with Tukey's post hoc test using IBM SPSS Statistics software (Armonk, NY, USA). Differences were considered significant at p < 0.05.

3.2. Effect of NF50E on Antioxidant Activity. In order to investigate the antioxidant capacity of NF50E, various in vitro assays, such as the DPPH and ABTS radical scavenging assays, the FRAP assay, and the CUPRAC assay, were performed. In the DPPH and ABTS radical scavenging assays, NF50E exhibited significant concentration-dependent radical scavenging activity with IC50 values of 17.99 ± 1.63 and 10.78 ± 0.63 μg/mL, respectively (Figures 2(a) and 2(b)). In addition, ascorbic acid, the positive control, showed more potent DPPH and ABTS radical scavenging activity, with IC50 values of 4.79 ± 0.52 and 4.35 ± 0.16 μg/mL, respectively. The FRAP and CUPRAC values obtained for NF50E increased with increasing concentration (Figure 2(c)). These results demonstrate the strong antioxidant activity of NF50E.

3.4. Effect of NF50E on HaCaT Cell Viability. Before the start of the cell experiments, an MTT assay was performed to assess the toxicity of the extract. To assess the toxic effect of NF50E on HaCaT cells, various concentrations of NF50E (1, 3, 10, 30, and 100 μg/mL) were evaluated in an MTT assay. As shown in Figure 3(a), NF50E had no cytotoxic effects up to 30 μg/mL. Thus, 1, 3, 10, and 30 μg/mL of NF50E were used for further studies. Next, to evaluate whether NF50E could protect against cell death from UVB irradiation, an MTT assay was performed. As shown in Figure 3(b), exposure to UVB (30 mJ/cm²) for 24 h induced cell death (23.76 ± 7.71%) compared with the nonirradiated group. Interestingly, NF50E treatment protected against UVB-induced cell death in a concentration-dependent manner, by up to 1.4-fold.

Effect of NF50E on UVB-Induced ROS Production. Increasing evidence indicates that ROS are one of the major causes of UVB-stimulated cellular senescence, damaging DNA strands and/or altering DNA bases [20]. To examine whether NF50E treatment could suppress UVB-induced cellular ROS production, a DCFDA ROS detection assay was performed. As expected, UVB irradiation (30 mJ/cm²) significantly increased cellular ROS production compared with the nonirradiated group (Figure 3(c)). However, NF50E treatment suppressed UVB-stimulated cellular ROS formation in a dose-dependent manner.

3.6. Effect of NF50E on Cutaneous Changes in a UVB-Induced Mouse Model. To investigate the antiphotoaging potential of NF50E in vivo, the dorsal skin of mice was exposed to UVB as described in Materials and Methods (Supplementary Figure S1). As shown in Figure 4(a), the dorsal skin of the UVB-irradiated group was wrinkled, rough, dry, flaky, and reddish compared with that of the nonirradiated group; however, topical application of NF50E protected against UVB-induced lesions. Moreover, skin erythema was induced on the dorsal skin of the UVB-irradiated group (2nd images in Figures 4(a) and 4(b)) compared with the nonirradiated vehicle-treated group (1st images in Figures 4(a) and 4(b)), and NF50E treatment mitigated UVB-stimulated skin erythema (4th and 5th images in Figures 4(a) and 4(b), and Figures 4(c)-4(f)).
As shown in Figure 4(e), H&E staining demonstrated that UVB irradiation led to an increase in epidermal skin thickness compared with the nonirradiated group, whereas NF50E treatment reduced UVB-induced epidermal thickening. As expected, compared with nonirradiated mice, UVB-irradiated mice exhibited thicker dorsal skin, and NF50E treatment significantly restored the skin thickness to near-normal levels (Figure 4(f)).

Effect of NF50E on SIRT1 Expression in the UVB-Induced Mouse Model. As shown in Figure 5, immunohistochemical analysis revealed that UVB exposure decreased SIRT1 expression in the dermis (brown color) along with destruction of the skin layer. NF50E treatment reversed this trend, although low expression of SIRT1 in the EGCG treatment group was evident (Figure 5(a)). Furthermore, RT-PCR and immunoblotting analyses revealed similar results (Figures 5(b) and 5(c)), demonstrating that NF50E stimulated SIRT1 expression, thereby protecting the skin from the photoaging process.

Effect of NF50E on Skin Aging Biomarker Expression in a UVB-Induced Mouse Model. Matrix metalloproteinases (MMPs) can degrade various ECM proteins, including collagen, fibronectin, elastin, and proteoglycans, and contribute to photoaging [21]. In this study, RT-PCR revealed that the expression of MMP-1, MMP-8, and MMP-13 was significantly upregulated in the UVB-irradiated group compared with the nonirradiated group, and NF50E and EGCG treatment prevented this effect (Figure 6(b) and Supplementary Figure S8). Furthermore, both immunohistochemistry and immunoblot analyses revealed that UVB exposure upregulated MMP-1 expression in the epidermis (brown color), and NF50E and EGCG treatment reversed this trend (Figures 6(a) and 6(c)). Masson's trichrome staining demonstrated that the collagen content in the dermis of the UVB-irradiated group was reduced (blue stain) compared with that of the nonirradiated group; however, treatment with NF50E abrogated the UVB-induced reduction of collagen content in the dermis (Figure 6(d)). As expected, the mRNA expression of COL1A1 was also decreased in the UVB-irradiated group, and NF50E treatment reversed this effect (Supplementary Figure S8). Active interleukin-1 (IL-1) is found in epidermal keratinocytes, and its expression is enhanced by UVB irradiation, resulting in inflammation [22]. In this study, immunohistochemical assays revealed that UVB irradiation enhanced IL-1β expression in the epidermis (brown color), which was suppressed by treatment with NF50E and EGCG (Figure 6(e)). Notably, transcription factors including NF-κB and AP-1 play a crucial role not only in regulating MMPs and IL-1β but also in maintaining the ECM composition [23]. Since NF50E modulated the expression of MMPs and IL-1β, we next investigated whether NF50E could regulate NF-κB and AP-1. Immunoblotting assays demonstrated that both NF-κB (p65) and AP-1 (p-c-Jun) were markedly increased by UVB exposure (Figure 7(a); upper and lower panels, respectively); however, NF50E treatment considerably reduced this increase (Figures 7(a) and 7(b)).

Effects of NF50E on the Phosphorylation of MAPK Proteins. We investigated the pathway through which NF50E exerts its antiphotoaging effects. Generally, UVB-augmented ROS production leads to the activation of MAPK proteins, including ERK, p38, and JNK. MAPKs induce NF-κB and AP-1, consequently enhancing the expression of MMPs and leading to a decrease in collagen and other ECM components in aged skin tissues [23].
To investigate the effects of NF50E on UVB-induced photoaging, the phosphorylation of MAPKs was assessed. The phosphorylation of p38 and JNK was significantly increased in UVB-irradiated cells compared with nonirradiated cells. Treatment with NF50E inhibited the phosphorylation of p38 and JNK (Figure 7(c)), but NF50E did not inhibit the phosphorylation of ERK1/2 (Supplementary Figure S9). These results indicate that the suppression of UVB-stimulated p38 and JNK phosphorylation by NF50E may be required for the attenuation of NF-κB and AP-1 in HaCaT cells.

Discussion

In this study, we investigated the mechanisms of the antiphotoaging effect of a Nypa fruticans extract. In the course of screening for potent antiaging biomolecules from food sources, we found that a 50% EtOH extract of Nypa fruticans (NF50E) contained various polyphenolics, including protocatechuic acid, catechin, chlorogenic acid, epicatechin, and kaempferol, as well as a cucurbitane triterpenoid known as pengxianencin A (Figure 1(d)). Among them, protocatechuic acid has exhibited not only anti-skin-aging effects, through collagen synthesis and MMP-1 inhibition in vitro, but also antiwrinkle effects in vivo [24]. Protocatechuic acid may be found in various natural sources. In this study, we are the first to identify protocatechuic acid in a Nypa fruticans extract, suggesting that the plant extract may possess unique antiaging compounds, as predicted based on a literature search. These polyphenolics may induce the biosynthesis of elastin, collagen, and other skin matrix proteins, suggesting that they are deeply associated with the inhibition of certain enzymes or with the induction of MMPs during the aging/antiaging process in the skin [25,26]. Using mass spectrometry, we identified the main peak of the extract, a member of the cucurbitane triterpenoid family known as pengxianencin A (MW = 578.37). This substance was originally discovered in Hemsleya penxianensis tubers, and its function is assumed to involve self-defense against environmental insects and pathogens [27]. Based on these data, we confirmed that this substance is the main component underlying the antiaging activity by evaluating the activity in vitro and in vivo. As shown in Figure 2(d), we found that NF50E exhibited a potent elastase inhibitory effect. In addition, HaCaT keratinocytes were used to explore the relationship between skin cell senescence and the protective effects of NF50E against UV exposure. There was no toxicity up to a concentration of 30 μg/mL of NF50E, and the extract improved HaCaT cell viability, which was decreased by UVB irradiation (Figures 3(a) and 3(b)). We concluded that the polyphenolic compounds protected cell viability and exhibited an antiaging effect. Thus, as we predicted, the components of NF50E decreased UVB-induced ROS generation and photoaging effects by increasing antioxidant activity in vitro and in vivo. Skin represents a protective barrier between the internal organs and the environment, and the appearance of photoaged skin is characterized by wrinkles, sagging, erythema, and thickening due to the degradation of ECM proteins [28]. In this study, the topical application of NF50E mitigated these adverse effects on murine dorsal skin (Figures 4(a)-4(d)). Matsumura et al. reported that skin becomes thicker as protection from UV-induced damage when subjected to UV exposure [29].
We discovered from our animal data that skin thickening was attenuated in the NF50E-treated groups compared with the UV-irradiated group (Figures 4(e) and 4(f); compare the 2nd with the 1st and 5th images). At the histological level, it is known that chronically sun-exposed human skin suffers damage to the collagenous extracellular matrix that comprises the skin connective tissue, with reduced levels of collagen and elastin [30,31], as shown in the UV-irradiated G2 group. Because collagen and elastin contribute to the strength and resiliency of the skin, and their degradation through UV-induced aging can result in an aged appearance [32], it is prudent to (i) protect collagen and elastin integrity for skin matrix stability, (ii) promote matrix biosynthesis, such as that of collagen and elastin, in the skin, and (iii) inhibit degradation-related enzyme activities in the skin environment. To further evaluate the mechanism of NF50E, we monitored the expression of antiaging biomarkers after NF50E treatment of HaCaT cells. Because Masson's trichrome staining revealed that NF50E abrogated the UV-induced reduction of collagen density in the dermis (Figure 6(d)), these results strongly suggested that the extract exerts multiple functions against UVB-induced skin damage, counteracting ROS generation, ECM degradation, and the loss of collagen and elastin content. Therefore, to investigate the molecular signaling pathways attenuated by NF50E, we evaluated the MAPKs and transcription factors. Immunohistochemistry results showed that NF50E enhanced SIRT1 expression while suppressing MMP-1 and IL-1β expression. MMPs are calcium-dependent, zinc-containing endopeptidases that regulate various physiological processes, including apoptosis, inflammation, wound healing, and aging [33,34]. Activated MMPs lead to the degradation, and the inhibition of synthesis, of the ECM and collagen in connective tissues, thereby triggering photoaging [35]. Among the 28 different MMP family members, MMP-1, MMP-8, and MMP-13, known as collagenases, recognize substrates through a hemopexin-like domain and can degrade fibrillar collagen [21]. UVB significantly enhanced the mRNA expression levels of MMP-1, MMP-8, and MMP-13, whereas NF50E decreased the UVB-stimulated expression of these genes in a dose-dependent manner (Figure 6(b) and Supplementary Figure S8). SIRT1, a longevity protein with type III histone deacetylase activity, is a member of the sirtuin family and has an important role in cell survival and longevity during cellular senescence [36]. Thus, modulating SIRT1 pathways represents a strategy for suppressing cellular senescence and skin aging [32]. In this study, we provide evidence suggesting that SIRT1 plays a protective role in a UV-induced mouse model. The reduction of SIRT1 and COL1A1 gene expression following UVB irradiation was prevented by NF50E treatment (Figure 5 and Supplementary Figure S8). It is now well documented that MMP-1 is a key enzyme that degrades connective tissues, resulting in photoaging [37]. We have already confirmed that the protein level of SIRT1 was increased, whereas that of MMP-1 was decreased, by NF50E (Figures 5(c) and 6(c)). There are no decisive reports, however, on the relationship between these two proteins, which may be closely regulated by downstream signaling pathways and transcription factors [38]. A major effector of the MAP kinase pathway is the transcription factor AP-1, which consists of the Jun and Fos family proteins.
In addition, nuclear factor kappa B (NF-κB) is known to be activated by UV irradiation in skin keratinocytes and to increase the expression of MMP-1 in the dermis. Thus, the regulation of NF-κB signaling represents a method of preventing UV-mediated cutaneous alterations or skin photoaging [29,39]. As expected, after exposure to UVB, the phosphorylation of MAPKs (p38 and JNK) was induced. However, treatment with NF50E markedly suppressed the activation of MAPKs along with the downregulation of NF-κB and AP-1 signaling (Figure 7). Possible mechanisms for the effects of NF50E against skin photoaging are presented in Figure 8. The results of this study demonstrate that NF50E exerts a protective effect against UVB-induced skin aging through the inhibition of MAPKs in vitro and in vivo. It has been documented that SIRT1 closely interacts with c-Jun [40]. Therefore, SIRT1 may control MMP-1 transcription through downregulation of the AP-1 and NF-κB transcription factors, the resulting decrease in MMP-1 and increase in SIRT1 protein expression ultimately reducing wrinkling. Nipa (Nypa fruticans) was originally cultivated near seashores and swamp areas. Therefore, we surmised that the plant's growth environment may not be easy to replicate in other areas, which may affect the optimal conditions required for producing the active compounds needed for investigation [41]. Small amounts of salt such as sodium chloride are used in the development of skin washes and for removing debris from products, so the active compounds of the plant should be characterized so that the ingredients can be developed for anti-skin-aging purposes [42]. Presently, we cannot determine exactly which compounds (ingredients) exert activity in the skin. Interestingly, we identified pengxianencin A as a component of the extract, and elucidating its precise antiaging mechanism and the crosstalk between MMPs and SIRT1 proteins will provide insight into its role as an active ingredient of the extract. Conclusions In conclusion, the present study showed that NF50E could effectively protect the skin from UVB-induced photoaging. NF50E protected HaCaT cells against UVB radiation by suppressing UVB-induced cellular ROS generation. In in vivo assays, photoaged skin lesions such as erythema and skin thickening were attenuated by NF50E. In addition, NF50E upregulated the expression of SIRT1, inhibited MMP-1 activity, and downregulated NF-κB and AP-1 signaling by suppressing the phosphorylation of p38 and JNK proteins. Collectively, these findings indicate that NF50E may be used as a natural biomolecule for the development of anti-photoaging foods or skincare products. Data Availability The data used to support the findings of this study are available from the corresponding author upon request. Figure 8: A proposed mechanism for the effects of NF50E on UVB-induced skin aging. NF50E protected against photoaging through MAPK-mediated NF-κB signaling, followed by MMP-1 downregulation and SIRT1 upregulation. Red dotted T bars: potential inhibition; straight arrows: activation; dotted arrows: potential activation. Supplementary Figure S1: experimental schedule of UV irradiation intensity and time; the UV irradiation ranged from 75 to 300 mJ/cm². Supplementary Figure S2: total phenolic and flavonoid contents of a DW extract from Nypa fruticans (NFD) and a 100% EtOH extract from Nypa fruticans (NFE). Supplementary Figure S3: DPPH-radical scavenging activity of NFD and NFE. Supplementary Figure S4: ABTS-radical inhibition activity of NFD and NFE.
Supplementary Figure
6,643
2020-06-22T00:00:00.000
[ "Biology", "Medicine" ]
A mild TCEP-based para-azidobenzyl cleavage strategy to transform reversible cysteine thiol labelling reagents into irreversible conjugates † Antoine Maruani, Shamim Alom, Pierre Canavelli, Maximillian T. W. Lee, Rachel E. Morgan, Vijay Chudasama* and Stephen Caddick. It has recently emerged that the succinimide linkage of a maleimide thiol addition product is fragile, which is a major issue in fields where thiol functionalisation needs to be robust. Herein we deliver a strategy that generates selective cysteine thiol labelling reagents, which are stable to hydrolysis and thiol exchange. Advances in protein modification by chemical means have led to the development of a range of protein bioconjugation methodologies. 1 These methodologies have been successfully applied to a number of fields, such as the fluorescent tagging of proteins 2 and the development of therapeutic protein conjugates 3,4 to treat indications such as HIV, 5 cancer 6 and malaria. 7 Chemically modified proteins are also utilised as diagnostics. 8 The use of synthetic methodology to modify proteins has to overcome many major obstacles, the most significant of which is the need for high selectivity, i.e. modifying only one amino acid type by discriminating against the other natural amino acids in a protein. 9 As free cysteines are extremely rare in proteins 10 and the thiol side chain has the highest nucleophilicity of all proteinogenic groups under physiological conditions, 11 it is a very popular target for the selective and site-specific modification of proteins. 12 Moreover, with the possibility of facile cysteine introduction by site-directed mutagenesis, cysteine modification is a leading approach. The most popular strategy for labelling the thiol moiety of cysteine residues is alkylation with maleimides to form thioether-succinimides. 12,13 However, it has recently come to light that such an appendage is sub-optimal owing to issues of hydrolysis and thiol exchange with reactive thiols in the blood (e.g. albumin). 14 This has major implications for biologics that employ a maleimide motif to functionalise a protein thiol for in vivo applications. For example, in antibody-drug conjugates (ADCs), where an antibody delivers a toxic payload selectively to cancerous tissue, the use of maleimides to attach cytotoxic drugs to an antibody is not ideal, as thiol exchange onto human serum albumin in the blood results in off-site toxicity. 14 Although recent advances have been made in this area through the use of hydrolysed maleimides and succinimides, 14,15 a strong drive remains to develop novel reagents for reliable, chemoselective, stable and irreversible thiol labelling, particularly for the construction of ADCs. 16 Recently, we have described a novel, reversible approach to cysteine bioconjugation through the use of bromomaleimides and bromopyridazinediones. 17 To date, our approach has provided access to complex bioconjugates in high yields, without prior activation of reagents, with reliable, reversible conjugation.
Owing to the demand for hydrolytically stable and thiol-irreversible bioconjugates that react in a chemoselective manner, we naturally sought to explore the use of reagents that would meet these criteria. During the course of developing bromopyridazinediones for reversible cysteine bioconjugation, we became intrigued by the prospect of pyridazinediones (PDs) as irreversible cysteine functionalisation reagents. Previously we have shown that if one of the nitrogen atoms on the PD core is unsubstituted, the molecule does not react with thiols at physiological pH or higher. 17a We postulate that this is a consequence of such a structure existing as its enol tautomer, which is likely to be significantly deprotonated at physiological pH (or higher), based on the reported pKa of 1-methyl-3,6-(1H,2H)-pyridazinedione being ≈5.7 in H2O and the calculated pKa of its thioether analogue, ‡ 1-methyl-4-(methylthio)-3,6-(1H,2H)-pyridazinedione, being ≈5.9. 18 Thiol reactivity will therefore be greatly reduced, as the electrophilicity of the resulting PD-core moiety will be tuned down considerably. As such, we set about developing a strategy whereby we could generate a mono-alkylated PD species post-bioconjugation to a cysteine thiol, to afford a thiol-stable construct (see Fig. 1). Our study began with the reaction of model protein GFP-S147C 1 with pyridazinediones 2 and 3 to confirm our previous observations made with protein Grb2-L111C (see Fig. 2). 17a These results were consistent with our previous work and confirmed that a mono-alkylated PD is unreactive towards thiols (or other nucleophilic functional groups on amino acid side chains). These initial studies paved the way for us to appraise a novel strategy for developing thiol-stable pyridazinedione bioconjugates (see Fig. 1). To do so, we needed to develop a selective method for cleavage of R2 from the PD core. There are many strategies that could be applied; however, at this juncture we took the opportunity to develop a novel, mild and simple method based on an azide trigger. Our desire to use an azide-based cleavable handle originates from the bioorthogonality of the azide functional group. Taking inspiration from the well-documented work on p-aminobenzyloxycarbonyl (PABC) linkers, 19 we set about using a p-azidobenzyl cleavage strategy (see Fig. 3). We initially evaluated our p-azidobenzyl cleavage strategy in a small molecule study through the use of cysteine derivative 6, formed by reaction of N-(tert-butoxycarbonyl)-L-cysteine methyl ester and mono-bromo PD 5 (see ESI † for details on synthesis). The use of an alkyne handle, which would conceptually be retained post p-azidobenzyl cleavage, would allow the resulting construct to be readily functionalised by a Cu(I)-catalysed azide-alkyne cycloaddition (CuAAC). To our delight, treatment of derivative 6 with TCEP led to clean conversion to derivative 7, thus providing proof of concept for our novel cleavage strategy. Moreover, incubation of derivatives 6 and 7 with 15 equivalents of 1-hexanethiol in THF/PBS buffer (pH 7.4) only led to thiol exchange in the case of derivative 6. This provided encouragement for our hypothesis of a mono-alkylated PD being thiol-unreactive at physiological pH or higher (Fig. 4). Following these encouraging results in a small molecule study, we appraised our strategies on a model protein with a single cysteine mutation, GFP-S147C 1. Initially, GFP-S147C 1 was incubated with mono-bromo PD 5 in sodium phosphate buffer (pH 8.0) for 1 h at 37 °C.
As expected, this proceeded with complete conversion and afforded GFP derivative 8. We next applied our TCEP cleavage strategy by incubating this derivative with 10 equivalents of TCEP in phosphate buffer at pH 8.0. Satisfyingly, clean conversion to bioconjugate 9 was observed, which is consistent with our small molecule study. It is also noteworthy that no hydrolysis occurred under these conditions, which is consistent with our previous observations on the PD core being hydrolytically stable. 17a Having established, using mass spectrometry, that our cleavage strategy is applicable to a protein, we next compared the thiol stability of 8 and 9 by incubation with glutathione (0.5 mM) for 72 h at pH 7.4 and 37 °C. Gratifyingly, GFP derivative 9 was completely stable under the reaction conditions, whereas derivative 8 showed complete thiol exchange with glutathione. This established proof of concept for both our strategies on a model protein scaffold. Moreover, this work also highlights the versatility of the PD platform, with a facile shift from reversible to irreversible constructs achieved under mild conditions (Fig. 5). Following our work on developing a novel p-azidobenzyl cleavage strategy and obtaining a thiol-stable construct, we set about functionalising protein scaffold 9 by the use of 'click' chemistry. If successful, this would provide a facile method for functionalising the thiol-stable bioconjugate. A number of 'click' conditions were trialled using benzyl azide as our model azide. The most promising conditions were the use of Cu(I)Br as the copper source and THPTA as the ligand. These conditions gave complete conversion of starting material alkyne 9 to triazole bioconjugate 10a. Moreover, these conditions also allowed for clean reaction of the alkyne derivative with a dansyl azide and a sulfo-cyanine5 azide to afford 10b and 10c, respectively (Fig. 6). In conclusion, we have developed, via a novel p-azidobenzyl cleavage strategy, a route to thiol-stable cysteine bioconjugates that has a clear advantage over conventional maleimide chemistry. The strategy has been demonstrated on both a small molecule system and a model protein, GFP-S147C. Owing to the plethora of fields where thiol functionalisation needs to be robust and irreversible, e.g. in antibody-drug conjugates (ADCs), imaging and theranostics, we believe this work will find use in a variety of domains. We hope to deliver on the application of our platform in a range of contexts, including ADCs, in the near future. The authors gratefully acknowledge the EPSRC, Ramsay Memorial Trust and UCL for support of our programme.
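The pKa argument above lends itself to a quick back-of-the-envelope check. The following sketch applies the Henderson-Hasselbalch relation to the two pKa values quoted in the text to estimate how much of each N-H pyridazinedione is deprotonated at physiological and bioconjugation pH; it is an illustrative calculation, not code from the original study.

```python
# Henderson-Hasselbalch estimate of the deprotonated (enolate) fraction
# of an N-H pyridazinedione, using the pKa values quoted in the text.
# Illustrative sketch only; not code from the original study.

def deprotonated_fraction(pka: float, ph: float) -> float:
    """Fraction of the acid present as its conjugate base at a given pH."""
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

for name, pka in [("1-methyl-3,6-(1H,2H)-pyridazinedione", 5.7),
                  ("1-methyl-4-(methylthio) analogue", 5.9)]:
    for ph in (7.4, 8.0):
        frac = deprotonated_fraction(pka, ph)
        print(f"{name}: pH {ph} -> {100 * frac:.1f}% deprotonated")
```

Both species come out roughly 97-98% deprotonated at pH 7.4 (and more at pH 8.0), consistent with the strongly attenuated electrophilicity invoked above.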
2,055.2
2015-03-12T00:00:00.000
[ "Biology", "Chemistry", "Computer Science" ]
Using active learning in hybrid learning environments In this paper, an innovative pedagogical approach relying on the flipped classroom and offered in a hybrid learning environment combining on-site and off-site attendees is proposed. The set-up is furthermore tested on two short courses offered at Chalmers University of Technology and analyzed using student course evaluation questionnaires. Several elements constitute the backbone of the courses. Such elements are offered either in an asynchronous fashion or in a synchronous fashion. The asynchronous elements are made of textbooks specifically written for the respective courses, pre-recorded short webcasts explaining the key concepts of the textbooks, and on-line quizzes giving formative feedback to the students. Such elements should thus be studied by the students before attending the synchronous sessions. Because of the preparatory work done by the students, the synchronous sessions can focus on much more active forms of learning under the teacher's supervision. The success of the pedagogical approach entirely depends on the contents of the synchronous sessions, which need to be carefully planned and designed so that they promote student learning. Although the hybrid learning environment gives rise to some additional challenges from a teacher's perspective, it also gives much more flexibility in attracting students from remote locations, without compromising the learning experience. INTRODUCTION With overall declining student enrolments in nuclear engineering programs in Europe, keeping highly specialized courses alive has become a challenge. As a possible remedy to this situation, efforts have been pursued at Chalmers University of Technology, Gothenburg, Sweden, to offer short courses in hybrid learning environments. Such environments make it possible to combine on-site and off-site attendees while preserving full interaction possibilities between both audiences and between each audience and the teacher. For that purpose, a special interactive teaching room was developed that allows both audiences to share audio, video and digital contents. Beyond the design of the room, special attention was put on implementing new pedagogical methods favoring student learning and interactions with the teacher. This paper reports on two short courses arranged in this hybrid format: a course titled "Fundamentals of reactor kinetics and theory of small space-time dependent fluctuations in nuclear reactors" offered as part of the European Horizon 2020 CORTEX project (CORe monitoring Techniques and EXperimental validation and demonstration) [1], and a course titled "Deterministic modelling of nuclear systems" offered as part of the European Horizon 2020 ESFR-SMART project (European Sodium Fast Reactor Safety Measures Assessment and Research Tools) [2]. Both courses are given as "flipped" classes, i.e.
the actual lectures are delivered asynchronously as short recorded lectures or webcasts. The students have to follow these webcasts prior to attending the synchronous sessions, which are arranged either face-to-face for the on-site audience or remotely on the web for the off-site audience. Moreover, web-based quizzes associated with each webcast give the students the opportunity to comprehend and further reflect on the topics presented in the webcasts. The main incentive in flipped classrooms is to get the students prepared for the synchronous sessions with the teachers, during which activities promoting higher-order thinking skills are arranged. Designing, implementing, and carrying out such sessions in hybrid environments is particularly strenuous. Nevertheless, student learning highly relies on the success of embedding active learning elements in the synchronous sessions. Beyond describing in more detail the set-up of both courses from a pedagogical perspective, the paper also focuses on the synchronous activities given in both courses. For the CORTEX course, those included short summarizing lectures or wrap-ups, discussions on the quizzes, and exercises requiring theoretical derivations led by the teacher. For the ESFR-SMART course, wrap-ups and discussions on the quizzes were also part of the synchronous sessions. In addition, the core of the active learning sessions was set up around the writing of various programming assignments in MATLAB Grader under the guidance of the teacher. The paper is structured as follows. The two short courses are first described, focusing mostly on the pedagogical approach used and, when relevant, the required IT infrastructure. Thereafter, the synchronous sessions offered in both courses are analyzed from both the students' and the teacher's perspectives. Finally, the results of the course evaluations are analyzed. The paper thereafter draws conclusions on the applicability of the proposed course set-up and makes some recommendations. DESCRIPTIONS OF THE COURSES The course titled "Fundamentals of reactor kinetics and theory of small space-time dependent fluctuations in nuclear reactors" was given on June 18-21, 2018 and had 16 on-site and 26 off-site registered participants, whereas the course titled "Deterministic modelling of nuclear systems" was given on September 9-13, 2019 and had 22 on-site and 39 off-site registered participants. As is customary when offering courses free of charge, not all registered participants actually came on site or participated in the courses remotely. In terms of on-site attendance, the first course attracted 14 students, while the second course attracted 11 students. All the on-site attendees successfully completed all the in-class assignments and obtained a course certificate. Since the remote students had the possibility to either work on the in-class assignments and correspondingly obtain a course certificate or only get hold of the course materials, no strict control of the participation of the remote attendees was carried out. Providing the actual number of remote attendees following all course components is thus not possible. Nevertheless, a careful check of the completion of all in-class assignments by the remote attendees wishing to obtain a course certificate was carried out. The remote attendees who obtained a course certificate amounted to 10 students for the first course and 16 students for the second course.
In both courses, the audience was mixed: MSc students with a solid background in nuclear engineering, PhD students and post-doctoral researchers in nuclear-related subjects, nuclear engineers, and research scientists. Both courses were worth 1.5 ECTS (European Credit Transfer System) credits. Contents The first course covered the fundamentals of nuclear reactor kinetics, with emphasis on one- and two-group diffusion theory, and provided a solid and rigorous theoretical background in reactor dynamics. The course also presented a special case of reactor kinetics, i.e. small space-time stationary fluctuations in nuclear reactors, also referred to as neutron noise or power reactor noise. Attention was put in the course on the derivation of the governing equations and on how to solve such equations. The course was designed so that the course attendees, after completing the course, were able to:
• Know the governing equations describing reactor kinetics in diffusion theory.
• Know the governing equations describing power reactor noise in diffusion theory.
• Know how to solve such equations, either analytically for homogeneous or piece-wise homogeneous systems, or numerically for heterogeneous systems.
The course was structured around two main chapters. The first chapter dealt with space-time dependent reactor kinetics in diffusion theory and covered the following topics:
• Static neutron transport (derivation of the static space-dependent neutron balance equations in diffusion theory, case of steady-state one-group diffusion theory, case of steady-state two-group diffusion theory).
• Dynamic neutron transport (derivation of the dynamic space-dependent neutron balance equations in diffusion theory, case of dynamic one-group diffusion theory, case of dynamic two-group diffusion theory).
• Resolution of the space- and time-dependence of the neutron flux in nuclear reactors (general discretization methods in space and time in diffusion theory, Reduced Order Modelling (ROM) in diffusion theory, flux factorization methods in diffusion theory).
The second chapter covered small space-time dependent fluctuations (power reactor noise) and the following specific topics:
• Theory of first-order neutron noise (general principles, derivation of the first-order neutron noise in one-group diffusion theory, derivation of the first-order neutron noise in two-group diffusion theory).
• Theory of first-order neutron noise in its factorized form (general principles, determination of the fluctuations of the amplitude factor, determination of the fluctuations of the shape function).
• General solution of the neutron noise in one-group diffusion theory.
• General solution of the neutron noise in two-group diffusion theory.
• Validity of the point-kinetic approximation (case of critical systems, case of subcritical systems with an external neutron source).
• Spatial discretization methods for resolving the neutron noise in nuclear reactors.
The second course covered the deterministic modelling of nuclear systems, with emphasis on neutron transport, fluid dynamics and heat transfer. This course aimed at presenting the main algorithms in the computer codes used by industry and in academia for the macroscopic modelling of nuclear systems. The underlying methods used in such codes, together with their assumptions and limitations, were thoroughly presented, so that the codes could be used with confidence. The course was designed so that the course attendees, after completing the course, were able to:
• Know the governing equations describing neutron transport, flow transport, and heat transfer in nuclear reactors.
• Know the modelling strategies used for neutron transport, flow transport, and heat transfer in nuclear reactors, and for their coupling.
• Understand the limitations of the different modelling strategies.
The course was organized in six chapters. In the first, introductory chapter, the governing equations for neutron transport, fluid transport, and heat transfer were derived, so that students not familiar with any of these fields could comprehend the course without difficulty. The peculiarities of nuclear reactor systems, i.e. their multi-physics and multi-scale aspects, were dealt with. An overview of the modelling strategies was thereafter given, with particular emphasis on deterministic methods, which represented the focus area of the course. In the second chapter, the computational methods for neutron transport at both the pin cell and fuel assembly levels were presented. The chapter aimed at following the solution procedure in fuel pin/lattice codes as closely as possible. This included resonance calculations of the cross-sections, the determination of the micro-region micro-fluxes and of the macro-region macro-fluxes, and finally spectrum correction. The chapter ended with the preparation of the macroscopic cross-sections for subsequent core calculations, where the effect of burnup was also detailed. In the third chapter, the computational methods used for core calculations were presented. In the first part of this chapter, the treatment of the angular dependence of the neutron flux was described. In the second part, the treatment of the spatial dependence of the neutron flux was outlined. Thereafter, the solution procedure for estimating the core-wise position- (and possibly direction-) dependent multigroup neutron flux was described. Finally, the methodology used for determining the core-wise space- and time-dependent neutron flux in case of transient calculations was derived. The fourth chapter of the course focused on the computational methods used for one-/two-phase flow transport and heat transfer. From the local governing equations of fluid flow and heat transfer, macroscopic governing equations were derived, and the underlying assumptions clearly emphasized. The different flow models commonly used in nuclear engineering were introduced, models having various levels of sophistication: the two-fluid model, the mixture models with thermal equilibrium and specified drift, and the Homogeneous Equilibrium Model. The temporal and spatial discretization of the flow and heat transfer models were given special attention, with emphasis on their stability, consistency, and convergence. The fifth chapter tackled the coupling between neutronics and thermal-hydraulics at the core level. Various aspects of multi-physics coupling were highlighted: segregated versus monolithic approaches, coupling terms and non-linearities, information
transfer, preparation of the macroscopic material data (cross-sections, diffusion coefficients, and discontinuity factors) as functions of the thermal-hydraulic variables, and spatial coupling. The numerical techniques that could be used to solve multi-physics temporal coupling either in a segregated or in a monolithic manner were also discussed in detail. The sixth and last chapter summarized, in a nutshell, the macroscopic modelling techniques and presented a quick overview of the current efforts in high-fidelity reactor modelling. Pedagogical approach Both courses relied on an innovative pedagogical approach building upon the concept of active learning and a flipped classroom set-up. According to Bloom's revised taxonomy for the cognitive domain, which is illustrated in Fig. 1, students go through various thinking skills while learning [3]. This process starts from low-order thinking skills, such as remembering and understanding the course concepts, and progresses to high-order thinking skills, such as applying, analyzing, and evaluating the course concepts, and creating. In a more traditional format, engineering students are exposed to new concepts for the first time in class. As a result, only low-order thinking skills are triggered in the classroom. If the amount of new information provided to the students is too large, they will also have difficulty in processing the contents and will often become passive. This leads to the necessity for the students to go through the same contents as in the classroom, but on their own and after the in-class sessions. Moreover, the students will have to deal with the course concepts at higher thinking orders mostly outside of the classroom, and again on their own, unless dedicated in-class sessions are planned for such a purpose in the course curriculum. In the flipped classroom model, students are asked to do some preparatory work before attending the in-class sessions. In this asynchronous learning phase, the students can choose when and at what pace to study the preparatory course material. In contrast to the traditional teaching format, low-order thinking skills are practiced during this asynchronous phase, before the students meet the teachers and other students for synchronous interactions. As a consequence, the time spent with the teachers can be used more effectively to engage students in high-order thinking, clarify difficult concepts and provide individual support. Since the students attend the synchronous sessions much better prepared than in a traditional teaching set-up, flipped classrooms were demonstrated to lead to much better learning outcomes and to contribute to a deeper approach to learning compared to traditional teaching [5,6]. The key aspect of flipped classrooms is freeing time in the classroom in order to organize engaging activities with the students under the teacher's supervision, thus favoring more active forms of learning. The active learning elements used in the two courses are thoroughly described in Section 3. Another novelty of the course set-up was to offer the courses in a hybrid learning environment, i.e.
the students could follow the courses either on-site or remotely on the web. From both a pedagogical and an implementation perspective, following the courses and training sessions remotely creates some additional challenges. A dedicated interactive teaching room was therefore designed [7], so that the synchronous in-class active learning sessions could be broadcast on the internet while preserving full interaction possibilities between the on-site audience, the teacher and the off-site audience. This room is furnished with movable chairs, tables and whiteboards, enabling the use of a more student-centered pedagogical approach. In addition, the room is equipped with audio and video hardware and software (2 cameras, 4 ceiling microphones, 6 ceiling loudspeakers, and 1 portable microphone, all combined using an AV Bridge™ Matrix Pro). The core of the system is driven by a high-end tablet PC running web-based conferencing tools and connected to the Bridge. An additional screen aimed at handling the communication with the remote participants is connected to the tablet, and a video projector mirrors the screen of the tablet. This set-up allows the tablet screen to be shared with the on-site attendees (via the projector) and with the off-site attendees (via the web-based conferencing tool). Because of the nature of the tablet, the teacher has the possibility to show slides, annotate them, and write on the screen, all of this being visible to the on-site and off-site students. Moreover, the audio/video equipment allows synchronous interactions between the on-site and off-site participants in the form of digital content sharing, audio interactions, and video communication. A picture of the room set-up is shown in Fig. 2. Such a teaching room makes it possible to offer the course to remote students in a pure web-based environment without any need to travel. The interactive teaching room guarantees the availability of active learning during the synchronous sessions for both the on-site and off-site attendees, who can furthermore interact with each other. The entire pedagogical approach used in the two courses is summarized in Fig. 3. The students had first to study the textbooks specifically written for both courses. Short lectures or webcasts associated with each of the sections of each chapter were also recorded and made available to the students. Those lectures aimed at extracting the most important features presented in the respective textbooks in order to help the students construct a hierarchical and conceptual understanding of those features, with the details presented in the textbook and left for self-study. The lectures were recordings of lecture slides accompanied by the oral narrative of the teacher and with on-screen annotations made by the teacher. On-line quizzes were also associated with each of the webcasts, in order to provide formative feedback to the students on their learning. The quizzes were designed in such a way that high-order thinking skills in Bloom's revised taxonomy were solicited. The webcasts and on-line quizzes were made available online using a platform called Chalmers Play, itself based on the Kaltura platform. Those first three components, i.e.
study of the textbook, attendance of the webcasts, and training on the on-line quizzes, represent the preparatory work the students had to complete prior to attending the in-class sessions (either on-site or off-site). Only asynchronous interactions between the teacher and the students were possible during those first three components. Synchronous interactions were only possible during the in-class sessions, using remote conferencing/webinar software (Adobe Connect for the first course and Zoom for the second course). The synchronous sessions and the corresponding active learning elements are further described in the next section. Active learning techniques in the synchronous sessions The synchronous sessions were made of two distinct elements: wrap-ups and activities involving the students. The wrap-ups were short lectures prepared in advance by the teacher and aimed at extracting from the various chapters the salient features of the concepts presented in the textbooks and in the webcasts. Those wrap-ups were specifically designed to help the students get a bird's-eye view of the entire course and its main concepts, thus further helping the students establish the inter-relations existing between the various topics covered. Furthermore, in case the students had not studied a given part of the textbook and the corresponding webcasts, the wrap-ups constituted a last opportunity for those students to catch up and comprehend most of the session following the wrap-up. After the actual lectures, discussions were initiated with the students, either in an open "Question and Answer" session aimed at answering the questions raised by the students or in a more structured manner building upon the on-line quizzes the students were supposed to complete before the sessions. Most of the time in the synchronous sessions was spent on more active forms of learning, during which the students were actively engaged in activities carefully prepared by the teacher. When using active learning techniques, students learn much more efficiently, since they are in control of their learning in the classroom with support from the teacher [8]. In the course "Fundamentals of reactor kinetics and theory of small space-time dependent fluctuations in nuclear reactors", the active learning technique that was used was group problem solving, a subcategory of collaborative learning. The students were put in groups of three or four, and they were assigned a task, question, or problem to solve together. Eight problems had been prepared on the first chapter of the course, and 12 problems had been prepared on the second chapter. Due to time constraints, the students only had time to go through half of the prepared exercises. The problems were of the "pen and paper" type, i.e. the students were asked to write down some theoretical derivations to find the answers to the problems. After being provided with instructions from the teacher, the students had to solve each of the problems. This was done in a collaborative manner between the students, as well as with the teacher, i.e. the teacher provided additional explanations and theoretical considerations when needed. The exercises were solved one after the other, i.e.
the students were asked to complete each assignment at a pace dictated by the teacher. This allowed the teacher to also build upon each assignment, provide complementary information and, most importantly, relate the theoretical derivations to practical applications of reactor kinetics. Discussing the outcomes of each assignment was fundamental in capitalizing on the gained knowledge and soliciting high-order thinking skills among students. In the course "Deterministic modelling of nuclear systems", the active learning technique that was used was also based on group problem solving, although in a completely different set-up. Namely, seven coding assignments had been prepared by the teacher. Each coding assignment focused on a specific part of the textbook, and the students were asked to apply the numerical techniques and algorithms to a practical case, which was a one-dimensional heterogeneous sodium-cooled reactor in steady-state conditions. After developing a cross-section model in eight energy groups, the students had to develop a diffusion-based neutron transport solver, a fluid dynamics solver for the liquid sodium, and a heat transfer model resolving the axial and radial distribution of the temperature in the fuel rods. Finally, the students had to solve the entire coupled problem in a tightly coupled manner using the Jacobian-Free Newton-Krylov method, for which they first had to implement the algorithm. In total, the students had to go through seven coding assignments, the ultimate assignment allowing the students to compute the coupled neutronic/thermal-hydraulic solution to the considered sodium-cooled system. All coding assignments were carried out in MATLAB Grader, a web-based platform allowing the students to complete some MATLAB codes, test them, and submit their solutions when all tests were successfully performed. Because of its web-based nature, MATLAB Grader provided the exact same coding environment to both the on-site and off-site attendees. A 30-day free trial version of the full desktop version of MATLAB was also provided to all students, in case they did not already have access to MATLAB. The full desktop version of MATLAB gave much more flexibility compared to MATLAB Grader in case the students wanted to further test their codes. In both courses, the students were deeply engaged in solving the various assignments, discussing them with their peers and with the teacher. For the teacher, it is furthermore extremely rewarding to be able to help the students when they most need help. Solving the assignments also triggered numerous questions from the students. Although some of the questions were directly related to the assignments, some others were of a much more general nature. Beyond the level of student engagement, the level of the questions also demonstrated that the students learned much better in this teaching format, with a deeper learning of the subject, and utilized higher-order thinking skills in Bloom's revised taxonomy. ANALYSIS OF THE COURSE EVALUATIONS For both courses, an identical course evaluation questionnaire was used at the end of each course. For the first course, 23 persons responded to the course evaluation, out of which 52.2% were on-site participants. For the second course, 25 persons responded to the course evaluation, out of which 40% were on-site participants.
Fig. 4 represents the respondents' overall impression of the courses, where one notices that all respondents considered the course to be either good or, to an overwhelming fraction, very good. No respondent was dissatisfied with the courses. The respondents were also asked to determine whether they learned better in the flipped classroom format or in a more traditional teaching format. As Fig. 5 demonstrates, a large majority of the respondents (73.9% for the first course and 68% for the second course) believed that they learned better or much better in the flipped classroom format than in the traditional format. Moreover, as Fig. 6 reveals, the quality of the pedagogical approach followed in the courses was considered to be either good or very good (to an overwhelming fraction for the first course). Finally, the students were asked to determine whether the on-line quizzes contributed positively or negatively to their learning (see Fig. 7) and whether they found the synchronous sessions engaging (see Fig. 8). The students overwhelmingly considered that the on-line quizzes favored their learning and that the synchronous sessions were engaging (with a vast majority in the second course considering that the sessions were very engaging). A closer examination of the additional comments provided by the students demonstrated that, for the first course, dialogue with the teacher was somewhat limited. This was explained by the fact that handling the questions from both the remote and on-site attendees represents a very challenging situation for the teacher, especially when the questions from the remote attendees are numerous and come from several sources (audio communication, chat room, Q&A). To circumvent this difficulty, help from a teaching assistant was obtained in the second course. The main responsibility of the teaching assistant was to handle the communication with the remote students and help those students when needed. Having an additional resource in the second course might explain the increase in student engagement. Fig. 6 - Quality of the pedagogical approach followed in the courses. (Figs. 4-8 each contain one panel per course: "Fundamentals of reactor kinetics and theory of small space-time dependent fluctuations in nuclear reactors" and "Deterministic modelling of nuclear systems".) CONCLUSIONS AND RECOMMENDATIONS As demonstrated in this paper, student-centered teaching approaches favor a deeper understanding of the presented topics, thanks to the flipped nature of the set-up and the active learning techniques used in the synchronous sessions. Compared to a traditional teaching format, the proposed set-up leads to many more interactions between all parties involved, even if the courses are partially offered on-line.
The flexibility offered by the hybrid format and by self-paced learning thanks to the flipped classroom makes the course offering particularly attractive to students who do not have the possibility to travel and to staff members in the nuclear sector who cannot always come on-site to follow a course. In addition, the 24/7 availability of the recorded lectures and electronic resources makes this teaching format particularly well suited for continuous education of staff members and life-long learning. It should nevertheless be mentioned that the development of such a hybrid course with such an innovative pedagogical concept requires careful preparation and planning and, foremost, dedication from the teacher undertaking such a radical transformation. In addition to the necessary time and required efforts, the technical and administrative frameworks in place at the respective university might not be adapted to the teaching format. Moreover, many IT resources are required, such as a streaming platform for the webcasts, a platform for the quizzes, a platform for the remote synchronous sessions, and a platform for, e.g., the coding assignments. Learning all those resources and obtaining the necessary help from the competent/responsible persons might also represent additional complications and create additional delays when setting up all those electronic resources. Furthermore, because of the asynchronous nature of most of the resources being made available to the students, those resources need to be ready well ahead of the synchronous sessions. Thorough testing of those resources is also necessary before they are made available to the students. Despite the challenges of using a hybrid learning environment, this innovative concept might represent a viable alternative to either fully on-site or fully web-based courses. This is particularly interesting considering the decreasing number of students enrolled in nuclear engineering programs at European universities. Thanks to the web-based character of the course, it is possible to attract a sufficient number of students by combining the on-site and off-site attendees. ACKNOWLEDGMENTS The research conducted was made possible through funding from the Euratom research and training programme 2014-2018 under grant agreement No 754316 (CORTEX project) and under grant agreement No 754501 (ESFR-SMART project). Fig. 1 - Illustration of Bloom's revised taxonomy for the cognitive domain, with higher-order thinking skills at the top of the diagram (from [4]). Fig. 2 - Picture of the interactive teaching room developed at Chalmers University of Technology (©Anna Wallin).
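To give a flavor of the kind of coding assignment described above, the sketch below solves a drastically reduced version of the neutron transport task: a one-group, homogeneous-slab diffusion eigenvalue problem solved by power iteration. The course assignments used MATLAB Grader and an eight-group model of a heterogeneous sodium-cooled core; this Python version with made-up cross sections is only meant to illustrate the structure of such an assignment.

```python
# Minimal 1-D, one-group neutron diffusion eigenvalue solver (power iteration),
# in the spirit of the MATLAB Grader assignments described above. The course
# used an eight-group model of a heterogeneous sodium-cooled core; this reduced
# homogeneous-slab version uses hypothetical cross sections for illustration.
import numpy as np

N, L = 200, 100.0                            # mesh cells, slab width (cm)
dx = L / N
D, sigma_a, nu_sigma_f = 1.3, 0.010, 0.012   # assumed constants (1/cm)

# Finite-difference discretization of -D phi'' + sigma_a phi = (1/k) nu_sigma_f phi
# with (approximately) zero flux at both slab boundaries.
A = np.zeros((N, N))
for i in range(N):
    A[i, i] = 2 * D / dx**2 + sigma_a
    if i > 0:
        A[i, i - 1] = -D / dx**2
    if i < N - 1:
        A[i, i + 1] = -D / dx**2

phi, k = np.ones(N), 1.0
for _ in range(500):                      # power iteration
    source = nu_sigma_f * phi / k         # fission source at current estimate
    phi_new = np.linalg.solve(A, source)  # flux solve
    k *= phi_new.sum() / phi.sum()        # eigenvalue update from source ratio
    phi = phi_new / np.linalg.norm(phi_new)

print(f"k_eff = {k:.5f}")
```

The same solve-normalize-update loop generalizes to the multigroup, heterogeneous case the students actually treated, with the thermal-hydraulic feedback then handled by the Jacobian-Free Newton-Krylov method mentioned above.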
6,423.2
2021-01-01T00:00:00.000
[ "Education", "Computer Science" ]
Selectivity of the nucleon-induced deuteron breakup and relativistic effects Theoretical predictions for the nucleon-induced deuteron breakup process based on solutions of the three-nucleon Faddeev equation including such relativistic features as the relativistic kinematics and boost effects are presented. Large changes of the breakup cross section in some complete configurations are found at higher energies. The predicted relativistic effects, which are mostly of dynamical origin, seem to be supported by existing data. Recent studies of elastic nucleon-deuteron (Nd) scattering and nucleon-induced deuteron breakup revealed a number of cases where the nonrelativistic description based only on pairwise nucleon-nucleon (NN) forces is insufficient to explain the three-nucleon (3N) data. This happens in spite of the fact that these high-precision NN potentials describe very well the NN data set up to about 350 MeV laboratory energy. Those findings extended the exploration of the properties of three-nucleon forces (3NFs) to reactions in the 3N continuum. Such forces appear for the first time in the 3N system, where they provide an additional contribution to the predominantly pairwise potential energy of three nucleons. Generally speaking, the studied discrepancies between a theory based only on NN potentials and experiment become larger with increasing energy of the 3N system. Adding a 3N force to the pairwise interactions leads in some cases to a better description of the data. The best-studied example is the discrepancy for the elastic angular distribution in the region of its minimum and at backward angles [1][2][3]. This clear discrepancy can be removed at energies below ≈100 MeV by adding 3NFs to the nuclear Hamiltonian. Such a 3NF, mostly of 2π-exchange character, must be adjusted individually with each NN potential to the experimental binding energies of ³H and ³He. At energies higher than ≈100 MeV, current 3NFs improve only partially the description of cross section data, and the remaining discrepancies, which increase with energy, indicate the possibility of relativistic effects. The need for a relativistic description of 3N scattering was also raised when precise measurements of the total cross section for the neutron-deuteron (nd) interaction [4] were analyzed within the framework of nonrelativistic Faddeev calculations [5]. Also there, the NN forces alone were insufficient to describe the data above ≈100 MeV. The effects due to relativistic kinematics considered in Ref. [5] were comparable at higher energies to the effects due to 3NFs. This demonstrates the importance of a study taking relativistic effects in the 3N continuum into account. Investigation of relativistic effects was focused up to now only on the bound state of three nucleons. However, even the sign of the relativistic contribution to the 3N binding energy is uncertain (see [13] and references therein). Recently, first results of relativistic 3N Faddeev calculations for elastic Nd scattering have become available [6]. The relativistic formulation applied was the instant form of relativistic dynamics [7]. A starting point of this formulation for 3N scattering is the Lorentz-boosted NN potential $V(\vec k, \vec k\,'; \vec P)$, which generates the two-nucleon (2N) boosted t matrix $t(\vec k, \vec k\,'; \vec P)$ in a moving frame. The NN potential in an arbitrary moving frame, $V(\vec P)$, is obtained from the interaction v defined in the two-nucleon c.m. system by [8] $V(\vec P) = \sqrt{(2\omega + v)^2 + \vec P^{\,2}} - \sqrt{(2\omega)^2 + \vec P^{\,2}}$ (2). The relativistic kinetic energy of three equal-mass (m) nucleons in their c.m.
system can be expressed by the relative momentum $\vec k$ in one of the two-body subsystems and the momentum of the third nucleon, $\vec q$ (the total momentum of the two-body subsystem is then $\vec P = -\vec q$), as $H_0 = \sqrt{(2\omega(\vec k))^2 + \vec q^{\,2}} + \sqrt{m^2 + \vec q^{\,2}} - 3m$, where $2\omega(\vec k) \equiv 2\sqrt{m^2 + \vec k^{\,2}}$ is the momentum-dependent 2N mass operator. The Nd scattering with neutrons and protons interacting through a NN potential V alone is described in terms of a breakup operator T satisfying the Faddeev-type integral equation [9,10] $T|\phi\rangle = tP|\phi\rangle + tPG_0T|\phi\rangle$ (4). The permutation operator $P = P_{12}P_{23} + P_{13}P_{23}$ is given in terms of the transpositions $P_{ij}$, which interchange nucleons i and j. The incoming state $|\phi\rangle \equiv |\vec q_0\rangle|\phi_d\rangle$ describes the free nucleon-deuteron motion with the relative momentum $\vec q_0$ and the deuteron wave function $|\phi_d\rangle$. Here $G_0 \equiv \frac{1}{E + i\epsilon - H_0}$ is the free 3N propagator, with the total 3N c.m. energy E expressed in terms of the initial neutron momentum $\vec q_0$ relative to the deuteron. The transition operators for elastic scattering, U, and breakup, $U_0$, are given in terms of T by [9,10] $U = PG_0^{-1} + PT$ and $U_0 = (1 + P)T$. The state $U_0|\phi\rangle$ is projected onto the state $|\phi_0\rangle$, which describes the free motion of the three outgoing nucleons in the 3N c.m. system in terms of the relative momentum $\vec k$ of the 2N subsystem, defined in the 3N c.m. frame, and the momentum $\vec q$ of the spectator nucleon defined above. This leads to the breakup transition amplitude $\langle\phi_0|U_0|\phi\rangle$. The choice of the relative momentum $\vec k$ in the NN c.m. subsystem and the momentum $\vec q$ of the spectator nucleon in the 3N c.m. system to describe the configuration of three nucleons is the most convenient in the relativistic case. In the nonrelativistic limit the momentum $\vec k$ reduces to the standard Jacobi momentum $\vec p$ [10]. To solve Eq. (4) numerically, a partial wave decomposition is still required. The standard partial wave states $|pq\alpha\rangle \equiv |pq(ls)j(\lambda\tfrac{1}{2})I(jI)J(t\tfrac{1}{2})T\rangle$ [10], however, are generalized in the relativistic case due to the choice of the NN-subsystem momentum $\vec k$ and the total spin s, both defined in the NN c.m. system. This leads to Wigner spin rotations when boosting to the 3N c.m. system [6,7], resulting in a more complex form for the permutation matrix element [6] than used in the nonrelativistic case [10]. A restricted relativistic calculation with $j < 2$ partial wave states showed that Wigner spin rotations have only negligible effects [6]. Due to this, we neglected the Wigner rotations completely in the present study. To achieve converged results at energies up to ≈250 MeV, all partial wave states with total angular momenta of the 2N subsystem up to $j \le 5$ have to be used and all total angular momenta of the 3N system up to $J = 25/2$ taken into account. This leads to a system of up to 143 coupled integral equations in two continuous variables for a given total angular momentum J and total parity $\pi = (-)^{l+\lambda}$ of the 3N system. For details of our relativistic formulation and of the numerical performance in the relativistic and nonrelativistic cases we refer to Refs. [6,9,10]. In the present study we applied as dynamical input a relativistic interaction v generated from the nonrelativistic NN potential CDBonn [11] according to the analytical prescription of Ref. [12]. This analytical transformation allows one to obtain a relativistic potential v that is exactly on-shell equivalent to CDBonn and provides the corresponding relativistic t matrix. The boosted potential was not treated in all its complexity as given in Ref.
[13], but a restriction to the leading-order term in a $P/\omega$ and $v/\omega$ expansion was made. The quality of this approximation has been checked by calculating the wave function $\phi_d(\vec k)$ of the deuteron moving with momentum $\vec P$ for a number of momenta corresponding to an incoming nucleon lab energy of 250 MeV. The resulting deuteron binding energies and deuteron D-state probabilities for the deuteron in motion are close to the values for the deuteron at rest. In Fig. 1 we show the nucleon angular distribution for elastic nucleon-deuteron scattering at $E_N^{lab} = 250$ MeV. It is seen that, as in the study of Nd elastic scattering of Ref. [6], where the AV18 [14] NN potential was used instead of CDBonn, relativistic effects for the cross section are restricted to backward angles, where relativity increases the nonrelativistic cross section. At other angles the effects are small. In spite of the fact that the relativistic phase-space factor increases with energy faster than the nonrelativistic one (at 250 MeV their ratio amounts to 1.175), the relativistic nuclear matrix element outweighs this increase and leads, for the cross section in a wide angular range, to a relatively small relativistic effect. The breakup reaction with three free outgoing nucleons in the final state provides a unique possibility to access the matrix elements of the breakup operator T with specific values of the momenta $|\vec k|$ and $|\vec q|$ in a pointwise manner. Each exclusive breakup configuration, specified completely by the 3N c.m. momenta $\vec k_i$ of the outgoing nucleons, requires three matrix elements $\langle \vec k(\vec k_j, \vec k_k),\, \vec q = \vec k_i \,|\, T \,|\phi\rangle$ with $(i,j,k) = (1,2,3)$ and cyclic permutations, with $\vec k$ and $\vec q$ providing the total 3N c.m. energy. This is entirely different from elastic scattering where, due to the continuous momentum distribution of nucleons inside the deuteron, a broad range of $|\vec k|$ and $|\vec q|$ values contributes to the elastic scattering transition matrix element. That particular selectivity singles out the breakup reaction as a tool to look for localized effects which, when averaged, are difficult to see in elastic scattering. This selectivity of breakup helps to reveal relativistic effects in the 3N continuum. Even at the relatively low incoming nucleon energy $E_N^{lab} = 65$ MeV they can be clearly seen in the cross sections of some exclusive breakup configurations, as exemplified in Figs. 2 and 3. For the configuration of Fig. 2, the angles of the two outgoing protons detected in coincidence were chosen in such a way that for the arc length S ≈ 30 MeV all three nucleons have equal momenta, which in the 3N c.m. system lie in the plane perpendicular to the beam direction (symmetrical space star (SSS) condition). For the configuration of Fig. 3, at the value S ≈ 46 MeV the third, unobserved nucleon is at rest in the lab system (quasi-free scattering (QFS) geometry). In these two breakup configurations the inclusion of relativity lowers the cross section: by ≈8% in the case of SSS and by ≈10% in the case of QFS. In the lower parts of Figs. 2 and 3, the contributions to this effect due to kinematics and dynamics are shown. The five-fold differential cross section can be written as $\frac{d^5\sigma}{d\Omega_1\, d\Omega_2\, dS} = \rho_{kin}\, \overline{|\langle\phi_0|U_0|\phi\rangle|^2}$, with the kinematical factor $\rho_{kin}$ containing the phase-space factor and the initial flux. The transition probability for breakup, $\overline{|\langle\phi_0|U_0|\phi\rangle|^2}$, averaged over the initial ($m_{in}$) and summed over the final ($m_{out}$) sets of particle spin projections, forms the dynamical part of the cross section.
In the lower parts of the figures, the ratio of the relativistic to the nonrelativistic kinematical factor, $\rho_{kin}^{rel}/\rho_{kin}^{nrel}$, as a function of S is shown by the dashed line. The corresponding ratio for the dynamical parts of the cross section is shown by the solid line. As seen in Fig. 3, for the QFS configuration the whole effect is due to a dynamical change of the transition matrix element. For this configuration the nonrelativistic and relativistic kinematical factors are practically equal over a large region of S values (see Fig. 3). For the SSS, about 30% of the total effect is due to a decrease of the relativistic kinematical factor with respect to the nonrelativistic one (see Fig. 2). The cross sections in these particular configurations are rather stable with respect to the exchange of modern NN forces, whether or not combined with three-nucleon forces [15]. Due to that, relativistic effects seem to explain the small and up to now puzzling overestimation of the 65 MeV SSS cross section data [16] by modern nuclear forces and can account for the experimental width of this QFS peak [17]. At higher energies the selectivity of breakup allows us to find configurations with significantly larger relativistic effects. In Fig. 4 this is exemplified at $E_N^{lab} = 200$ MeV, and the predicted effects of up to ≈60%, which are mostly of dynamical origin, seem to be supported by the data of Ref. [18]. The selectivity of complete breakup is gradually lost when incomplete reactions are considered. In the total nd breakup cross section the effects disappear. Integrating over all available complete breakup configurations provides nearly equal relativistic (90.25 mb at 65 MeV and 43.37 mb at 250 MeV) and nonrelativistic (91.12 mb and 45.41 mb) total breakup cross sections. Also the integrated elastic scattering angular distribution (71.25 mb and 9.33 mb relativistic, and 71.40 mb and 9.57 mb nonrelativistic) and the total cross section for the nd interaction do not reveal significant relativistic effects. This shows that the discrepancies between theory and data found in previous studies at higher energies for the total cross section and elastic scattering angular distributions, which remain even after combining NN potentials with 3NFs, have to result from additional contributions to the 3N force of a character different from the 2π exchange. Summarizing, we showed that the selectivity of the complete breakup reaction enables us to reveal clear signals of relativistic effects in the 3N continuum. Existing breakup data seem to support the predicted effects when relativity is included in the instant form of relativistic dynamics. Precise complete breakup data at energies around 200 MeV are welcome to further test these predictions. The QFS breakup configurations, due to their large cross sections and insensitivity to the details of nuclear forces, are favored for this purpose.
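The size of the kinematic ingredient of these effects can be illustrated numerically. The sketch below evaluates the free 3N c.m. kinetic energy built from the 2N mass operator $2\omega(\vec k) = 2\sqrt{m^2 + \vec k^{\,2}}$ defined above and compares it with its nonrelativistic limit; the summed square-root form of the kinetic energy is the standard instant-form expression and is assumed here, as is the nucleon mass value.

```python
# Relativistic vs nonrelativistic free 3N c.m. kinetic energies, using the
# 2N mass operator 2*omega(k) = 2*sqrt(m^2 + k^2) from the text. The summed
# square-root form of the kinetic energy is the standard instant-form
# expression and is assumed here; illustrative sketch only.
import math

M_N = 938.918  # assumed average nucleon mass (MeV)

def omega(k, m=M_N):
    return math.sqrt(m * m + k * k)

def t_rel(k, q, m=M_N):
    """Relativistic 3N c.m. kinetic energy for 2N-subsystem relative momentum
    k and spectator momentum q (subsystem total momentum P = -q)."""
    return math.sqrt((2 * omega(k, m)) ** 2 + q * q) + math.sqrt(m * m + q * q) - 3 * m

def t_nonrel(k, q, m=M_N):
    """Nonrelativistic limit: k^2/m + 3*q^2/(4*m)."""
    return k * k / m + 3 * q * q / (4 * m)

for k, q in [(100.0, 100.0), (300.0, 300.0), (500.0, 400.0)]:  # MeV/c
    print(f"k={k:5.0f}, q={q:5.0f}: T_rel={t_rel(k, q):7.1f} MeV, "
          f"T_nonrel={t_nonrel(k, q):7.1f} MeV")
```

Even at momenta of a few hundred MeV/c the two kinetic energies differ only at the few-percent level, consistent with the observation above that the predicted effects are mostly of dynamical rather than kinematical origin.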
3,134.6
2005-09-30T00:00:00.000
[ "Physics" ]
Blue Energy and Desalination with Nanoporous Carbon Electrodes: Capacitance from Molecular Simulations to Continuous Models

Capacitive mixing (CapMix) and capacitive deionization (CDI) are currently developed as alternatives to membrane-based processes to harvest blue energy—from salinity gradients between river and sea water—and to desalinate water—using charge-discharge cycles of capacitors. Nanoporous electrodes increase the contact area with the electrolyte and hence, in principle, also the performance of the process. However, models to design and optimize devices should be used with caution when the size of the pores becomes comparable to that of ions and water molecules. Here, we address this issue by simulating realistic capacitors based on aqueous electrolytes and nanoporous carbide-derived carbon (CDC) electrodes, accounting for both their complex structure and their polarization by the electrolyte under applied voltage. We compute the capacitance for two salt concentrations and validate our simulations by comparison with cyclic voltammetry experiments. We discuss the predictions of Debye-Hückel and Poisson-Boltzmann theories, as well as modified Donnan models, and we show that the latter can be parametrized using the molecular simulation results at high concentration. This then allows us to extrapolate the capacitance and salt adsorption capacity at lower concentrations, which cannot be simulated, finding a reasonable agreement with the experimental capacitance. We analyze the solvation of ions and their confinement within the electrodes—microscopic properties that are much more difficult to obtain experimentally than the electrochemical response but very important to understand the mechanisms at play. We finally discuss the implications of our findings for CapMix and CDI, both from the modeling point of view and from the use of CDCs in these contexts.

I. INTRODUCTION

Electric power production from salinity gradients, by harvesting the free energy lost during the mixing of river with sea water in estuaries, has in principle the potential of becoming a significant source of electricity on the global scale [1][2][3][4][5]. The main technologies developed for that purpose to date, namely, pressure-retarded osmosis and reverse electrodialysis, exploit the osmotic pressure difference using hydrostatic pressure or electric potential differences applied across membranes [6]. Despite the promises of these approaches (in particular, thanks to the control of flow through single nanotubes for the design of improved membranes [7]), a completely different strategy is also under consideration, to avoid the efficiency loss induced by membrane fouling. In 2009, Brogioli demonstrated the feasibility of capacitive mixing (CapMix) from cycling charge-discharge of a capacitor at high-low salinity [8]. Since then, both the fundamental understanding and practical improvement of this idea have been remarkable [9][10][11]. In the reverse process, capacitive deionization (CDI) offers an alternative to membrane-based desalination techniques [12,13].
For both CapMix and CDI, the use of porous carbon electrodes allows for an increase of the contact area with the electrolyte, thereby increasing the specific capacitance. As in the context of energy storage in supercapacitors, also known as electric double layer capacitors (EDLC), this has naturally turned the attention of the community to nanoporous carbons. In particular, an unexpected increase in the capacitance of EDLCs using ionic liquids and organic electrolytes with carbide-derived carbon (CDC) electrodes was observed as the pore size decreased down to the size of the electrolyte ions [14][15][16][17]. Such materials have already been considered for CDI [18,19], as "the pore volume associated with micropores is particularly attractive for CDI" [12].

A fundamental understanding of the cation and anion adsorption inside the electrodes is essential to predict the capacitance and salt retention and their dependence on the salt concentration in the electrolyte, which are the key factors governing the efficiency of both CapMix and CDI processes. While in situ x-ray and neutron experiments now provide information at various scales on the localization of ions inside the electrodes [20,21], quantitative predictions of the ionic concentrations, or equivalently the capacitance and salt adsorption, essentially rely on models of the electric double layer (EDL). The most commonly used models in these contexts are Debye-Hückel (DH) and Poisson-Boltzmann (PB) theories (possibly including excluded volume effects) [22][23][24][25][26][27][28][29], as well as modified Donnan (mD) models [30]. Recently, a better description of steric effects and electrostatic correlations has also been introduced in this context using classical density functional theory (DFT) [31][32][33].

However, these continuum-based models may fail under extreme confinement down to the nanometer scale, where the discreteness of ions and water and interactions with the carbon surface on the molecular scale play an important role. Previous work on CDC electrodes with ionic liquids and organic electrolytes for EDLC applications has demonstrated that molecular simulation is a powerful tool to investigate charge storage and transport in this limit [34][35][36] and that it can be used as a starting point for a multiscale description of these systems [37]. Such simulations have also emphasized the role of ion solvation at the interface and under confinement [38][39][40][41].
Aqueous electrolytes and model carbon-based materials have already been investigated by molecular simulation in the context of desalination by reverse osmosis [42][43][44] or for nanofluidic osmotic diodes [45]. Molecular simulation provided insights into the structure and dynamics of water and aqueous electrolytes in carbon nanotubes and nanopores [46][47][48]. Striolo and co-workers also simulated such electrolytes confined between charged carbon walls as model electrochemical cells for desalination [49,50] and reviewed the modeling challenges and opportunities of carbon-water interfaces for the water-energy nexus [51]. Michaelides also emphasized the challenges associated with the description of the interactions between water and carbon surfaces [52], as well as the peculiar properties of water on graphene [53] and metals in general [54]. However, our previous work demonstrated the importance of accounting for the polarization of carbon electrodes in contact with ionic liquids by using a method in which the potential between the electrodes is held fixed (i.e., constant-potential molecular dynamics, MD) [55]. In the case of aqueous systems, this approach has been applied to pure water or ion pairs in order to understand the water-platinum interface [56][57][58][59].

We report here a constant-potential molecular dynamics study of realistic electrochemical cells based on an aqueous electrolyte and nanoporous carbon electrodes. In addition to the polarization, under applied voltage, by the electrolyte, the complex structure of the CDC (pore-size distribution, relatively disordered structure) is also taken into account. We compute the capacitance for two salt concentrations and validate our simulations by comparison with cyclic voltammetry experiments. We then discuss the predictions of Debye-Hückel and Poisson-Boltzmann theories, as well as modified Donnan models, which are commonly used to predict the capacitance and salt adsorption for blue energy harvesting by capacitive mixing and for desalination by capacitive deionization. We show that it is possible to use the molecular simulation results to parametrize a modified Donnan model, which we then use to extrapolate the capacitance and salt adsorption capacity at lower concentrations relevant for CapMix and CDI, for which molecular simulations are not possible. A reasonable agreement is obtained with the experimental capacitance. We analyze the solvation of ions and their confinement within the electrodes, microscopic properties that are much more difficult to obtain experimentally than the electrochemical response but are very important to understand the mechanisms at play. We finally discuss the implications of our findings for CapMix and CDI. Methods are described in Sec. II, while results are presented and discussed in Sec. III.

A. Molecular dynamics simulations

The simulated system consists of two nanoporous carbon electrodes and an aqueous NaCl solution as an electrolyte (see Fig. 1).
The carbon structure for the porous electrodes was obtained by quenched molecular dynamics [60], and it corresponds to the structure of a CDC synthesized at 800 °C. The geometrical analysis of the porous structure, which has a mass density ρ_solid = 0.939 g cm⁻³, is performed with the ZEO++ software [61] using a probe radius r_probe = 1.3 Å, which corresponds approximately to the radius of a water molecule, resulting in a porosity of Φ = 23.3% and a specific surface area of S = 1934 ± 2 m² g⁻¹ (for a probe size of 1.7 Å, corresponding to an argon probe, the values are 18% and 1553 ± 2 m² g⁻¹, respectively). Two systems are simulated, corresponding approximately to average salt concentrations of 0.5 and 1.0 M, respectively.

The force field consists of pairwise additive Coulomb and Lennard-Jones interactions, with Lorentz-Berthelot mixing rules for the Lennard-Jones parameters. We use the SPC/E model of water [62], whereas the parameters for carbon and for the ions are taken from Refs. [63,64]. MD simulations are performed in the NVT ensemble using a time step of 1 fs. The temperature of the fluid is maintained at T = 298 K using the Nosé-Hoover thermostat with a time constant of 1 ps, while the electrode atoms are kept fixed. Two-dimensional periodic boundary conditions are used (there is no periodicity in the direction z perpendicular to the electrodes), and the Ewald summation used to compute electrostatic interactions is adapted to this geometry [65,66]. The water molecules are kept rigid with the SHAKE algorithm [67,68].

A voltage of Δψ = 1 V is maintained between the electrodes by treating them as perfect conductors using the method of Refs. [65,69], in which the charge of each electrode atom is recomputed at each step of the simulation in order to satisfy the constraint of a fixed potential. As a result, the total charge of the electrode fluctuates in response to the instantaneous microscopic configuration of the electrolyte (see Fig. 1, where the central panels illustrate the heterogeneous charge distribution within the electrodes for a given configuration of the electrolyte, as shown in the bottom panels). This is necessary for a realistic description of the electrode-electrolyte interface [55], and it provides quantitative information on the capacitance and on the interfacial properties [70]. We have shown previously that this method is suitable for the simulation of ionic liquids and organic electrolytes in nanoporous carbon electrodes [34][35][36][38].

Each electrode contains 3821 carbon atoms, and repulsive walls are placed on each side of the nonperiodic dimension of the simulation cell in order to prevent the molecules from exiting. The system corresponding to an average concentration of 0.5 M (resp. 1 M) contains 7700 (resp. 7615) water molecules and 70 (resp. 139) NaCl pairs. The box dimensions are 43.3 × 43.3 × 183.0 Å³. Together with the above number of atoms, this results in the correct density in the bulk region (see the definition of the regions in Fig. 1 and the density in Sec. III).
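As an illustration of the force-field construction described above, the following sketch (ours) applies the Lorentz-Berthelot mixing rules; the SPC/E oxygen parameters are standard, while the ion and carbon values are placeholders and not necessarily those of Refs. [63,64].

```python
import math

# Lennard-Jones parameters: (sigma in angstrom, epsilon in kJ/mol).
# SPC/E oxygen is standard; the Na+ and C values are illustrative only.
lj_params = {
    "O_water": (3.166, 0.650),   # SPC/E oxygen
    "Na":      (2.584, 0.418),   # placeholder, not from Refs. [63,64]
    "C":       (3.370, 0.230),   # placeholder carbon parameters
}

def lorentz_berthelot(a, b):
    """Return (sigma_ij, eps_ij) from the Lorentz-Berthelot mixing rules:
    arithmetic mean for sigma, geometric mean for epsilon."""
    sa, ea = lj_params[a]
    sb, eb = lj_params[b]
    return 0.5 * (sa + sb), math.sqrt(ea * eb)

for pair in [("Na", "O_water"), ("Na", "C"), ("O_water", "C")]:
    sigma, eps = lorentz_berthelot(*pair)
    print(f"{pair[0]}-{pair[1]}: sigma = {sigma:.3f} A, eps = {eps:.3f} kJ/mol")
```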
Despite being the state of the art for molecular simulations of such systems, the relatively small size does not allow the simulation of dilute electrolytes because of the small number of ions involved, which would require simulations that are too computationally expensive to sample the equilibrium properties. The systems are first pre-equilibrated by fixing the electrode atom charges to zero for about 600 ps. Then, a voltage Δψ = 1.0 V is applied, and the system is allowed to evolve until a steady state is reached (several nanoseconds). Finally, the charge of the electrode and the positions of the species at the steady state are averaged over about 3.2 ns to compute the capacitance, the density profiles, and the solvation and confinement properties.

B. Electrochemistry experiments

CDC powder (Carbon-Ukrain) is prepared by chlorination of TiC powder at 800 °C, as reported elsewhere [14,15]. The material is annealed for 2 h at 600 °C under H₂ to remove traces of chlorine and other surface groups [71]. Electrochemical tests are performed using a two-electrode Swagelok cell. Active films are made by mixing 95 wt % CDC with 5 wt % polytetrafluoroethylene (PTFE, from Dupont) binder. Once calendered, 11-mm-diameter electrodes are cut. The active film thickness is around 300 μm, with a weight loading of 15 mg cm⁻². Platinum disks are used as current collectors, and two layers of 25-μm-thick porous cellulose (from Nippon Kodoschi Corporation, NKK) are used as a separator. Cyclic voltammetry experiments are carried out with a multichannel potentiostat (VMP3, Biologic) for several NaCl concentrations (0.05, 0.1, 0.5 and 1 M) at a scan rate of 1 mV s⁻¹. Two series of measurements per system for two ranges, between 0.0 and 0.6 V and between 0.0 and 0.7 V, are performed, leading to four estimates of the capacitance for each of them. The values and uncertainties reported here are the corresponding averages and standard deviations. Such voltages are sufficiently low to avoid faradic (redox) processes linked with water decomposition on the high-surface-area carbon electrodes.

FIG. 1. The simulated system (top panel) consists of two nanoporous carbon electrodes, with a structure corresponding to carbide-derived carbons synthesized at 800 °C (cyan lines), and an aqueous NaCl solution as an electrolyte (here at 1 M; sodium is shown in blue, chloride in orange, oxygen in red, and hydrogen in white). A potential difference of 1 V is applied between the electrodes, and the charge q of each electrode atom fluctuates in response to the instantaneous configuration of the electrolyte (see the color scale in the central panels, where water is not shown). The bottom parts illustrate the electrolyte confined in the electrodes.
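To make the capacitance extraction of Sec. II B concrete, here is a small sketch (ours, run on synthetic data; experimentally the charge is obtained by integrating the measured discharge current) that integrates a current trace and estimates C = Q/ΔV at the 1 mV s⁻¹ scan rate.

```python
import numpy as np

scan_rate = 1e-3                 # V/s, as in the experiments

# Synthetic discharge current trace (A) vs time (s): a purely capacitive
# response gives I = C * scan_rate = const; add a little noise for realism.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 600.0, 601)              # 0.6 V swept at 1 mV/s
C_true = 1.5                                   # F, assumed for the fake data
current = C_true * scan_rate + 1e-5 * rng.standard_normal(t.size)

charge = np.trapz(current, t)                  # Q = integral of I dt, in C
delta_V = scan_rate * (t[-1] - t[0])           # potential swept on discharge
print(f"estimated capacitance: {charge / delta_V:.2f} F (true {C_true} F)")
```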
A. Capacitance

Figure 2 shows the cyclic voltammograms (CV) for different salt concentrations for potentials varying between 0.0 and 0.6 V. The electrochemical response is indeed capacitive in the considered range of concentrations and voltages. The experimental capacitance is then calculated from the derivative of the electrode charge with respect to the potential, the charge being obtained by integrating the electric current during the discharge of the electrochemical cell. In molecular simulations, we compute the integral cell capacitance from the average charge ⟨Q⟩ of the electrodes as C_cell = ⟨Q⟩/Δψ, which is related to the capacitances of the two electrodes as 1/C_cell = 1/C₊ + 1/C₋. The corresponding electrode capacitance is then obtained by assuming that the electrodes behave symmetrically (C₊ = C₋). Experiments performed using a three-electrode cell confirm that this is a reasonable assumption, with C₋ being only slightly larger than C₊ (less than 10%).

The capacitances from molecular simulations and experiments are summarized in Table I. We first note that the results are in remarkable agreement for both salt concentrations. Such a quantitative agreement is in fact better than that previously obtained in similar simulations of CDC with ionic liquids and organic electrolytes (e.g., about 20% in Ref. [72]). In the latter cases, discrepancies are mainly due to the Ohmic drop in the experiments with no or little solvent. We now compare these results with the predictions of three theories that are commonly used in the contexts of CapMix and CDI, namely, DH and PB theories and the mD model.

The simplest description of the EDL capacitance follows from DH theory, which treats the electrolyte as point ions in a continuous medium of relative permittivity ε_r, which interact only via mean-field electrostatics (as in Poisson-Boltzmann theory) in the limit where these interactions are weak, i.e., eψ/(k_B T) ≪ 1, with e the elementary charge, ψ the potential (with respect to the bulk value), k_B Boltzmann's constant, and T the temperature. The corresponding capacitance per unit area is C_DH = ε₀ε_r/λ_D, with ε₀ the vacuum permittivity and λ_D the Debye screening length in the bulk electrolyte, λ_D = [ε₀ε_r k_B T/(e² Σ_i z_i² c_i)]^{1/2}, where the sum runs over the concentrations of ions, c_i, with valencies z_i (in the present case of a 1-1 electrolyte with salt concentration c_salt, this sum is simply 2c_salt).
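The DH estimate is simple enough to evaluate directly; the sketch below (ours) computes λ_D and the specific capacitance for the parameters of this system, illustrating the order-of-magnitude overestimate discussed next.

```python
import math

# Physical constants (SI)
e = 1.602176634e-19        # C
kB = 1.380649e-23          # J/K
eps0 = 8.8541878128e-12    # F/m
NA = 6.02214076e23         # 1/mol

T = 298.0                  # K
eps_r = 78.0               # bulk water; try 7.8 for the "reduced" case
c_salt = 1.0e3             # mol/m^3, i.e. 1.0 M
S = 1934.0                 # m^2/g, specific surface area from ZEO++

# Debye length for a 1:1 electrolyte (the sum over ions gives 2*c_salt)
ionic = 2.0 * c_salt * NA * e**2
lam_D = math.sqrt(eps0 * eps_r * kB * T / ionic)

C_DH_area = eps0 * eps_r / lam_D      # F/m^2
C_DH_mass = C_DH_area * S             # F/g

print(f"Debye length: {lam_D*1e9:.3f} nm")
print(f"C_DH = {C_DH_area*100:.1f} uF/cm^2 = {C_DH_mass:.0f} F/g")
```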
In order to improve the predictions, it is not sufficient to simply solve the nonlinear PB equation numerically without introducing other physical effects. Indeed, in that case, the predicted capacitance would be even larger than within DH. This is because the nonlinearity of the PB equation increases the potential drop across the EDL and results in concentrations so large that interactions between ions beyond mean-field electrostatics (in particular, the effect of excluded volume due to the finite size of the ions) cannot be neglected. Therefore, we only discuss the extension of PB which captures packing effects (if not electrostatic correlations) [73,74]. Following Freise's approach for electrolytes [75], which was also successfully applied by Kornyshev in the context of ionic liquids [76], we introduce a maximum salt concentration c_max, which captures the saturation of the EDL due to the finite volume of the ions. The differential capacitance per unit area is then given by the lattice-saturation result C_PB = (ε₀ε_r/λ_D) cosh(u/2) [1 + 2γ sinh²(u/2)]⁻¹ {2γ sinh²(u/2)/ln[1 + 2γ sinh²(u/2)]}^{1/2}, with u = eφ/(k_B T), γ = 2c_salt/c_max the lattice saturation parameter, and φ the potential drop across the EDL (φ = Δψ/2 for a symmetric capacitor).

In macroporous or mesoporous materials, the next natural step to improve these models of the EDL is to include a Stern layer of condensed ions [22], with an associated capacitance per unit surface C_St = ε₀ε_St/λ_St (with ε_St and λ_St the effective permittivity and width of the Stern layer, respectively), in series with that for the diffuse layer C_DL = C_DH or C_PB; the total capacitance of the interface is then obtained as 1/C = 1/C_St + 1/C_DL. However, in the present case (CDC material), where the pore size is comparable to that of the ions and solvent molecules, the distinction between Stern and diffuse layers is certainly ambiguous and the quantification of ε_St and λ_St is somewhat arbitrary. Therefore, while it would be possible to use C_St as a fitting parameter, we will not follow this approach here. When the screening length in the electrolyte becomes comparable to the electrode pore size, the overlap between EDLs within an electrode renders the description more difficult. However, in the limit where the EDL is larger than the pore size, it is possible to obtain a simplified description in which the potential inside the micropore is uniform, with a potential difference, called the Donnan potential Δψ_D, between the micropore and the bulk electrolyte. The ionic concentrations inside the micropores are then related to that in the bulk as c_mi,± = c_bulk exp(∓eΔψ_D/k_B T + μ_att/k_B T), where μ_att is an attractive excess chemical potential that results in a larger salt concentration inside the micropores even in the absence of the Donnan potential. This parameter is usually kept at a fixed value (typically 2-3 k_B T), but a self-consistent determination has also been suggested by introducing another relation, μ_att = E/c_mi,ions, with c_mi,ions = c_mi,+ + c_mi,− = 2c_bulk e^{μ_att/k_B T} cosh(eΔψ_D/k_B T) the salt concentration in the micropores and E a parameter arising from the polarizability of the electrode [77]. The charge density per unit volume of micropore, σ_mi F = (c_mi,+ − c_mi,−)F = −2F c_bulk e^{μ_att/k_B T} sinh(eΔψ_D/k_B T), with F Faraday's constant, is then written as σ_mi F = −C_St,vol Δψ_St, with Δψ_St the Stern potential difference and C_St,vol a capacitance per unit volume of micropore, which is usually parametrized as C_St,vol = C_St,vol,0 + α σ_mi². For the symmetric electrochemical cell considered here, with identical electrodes, the cell voltage under equilibrium conditions, i.e., vanishing electric current, is related to the Donnan and Stern potentials as Δψ = 2(Δψ_D + Δψ_St). For comparison with molecular simulations and experiments, we finally compute the specific capacitance (per unit mass of the electrode) from the capacitance per unit area of the DH and PB models by multiplying by the specific surface area S, and from the
electrode capacitance per unit volume of micropore of the modified Donnan model (−σ_mi F/(Δψ/2), the factor of 2 arising from converting the cell capacitance to electrode capacitance) by multiplying by the porosity Φ and dividing by the mass density ρ_solid of the electrode.

The predictions of these three models, using reasonable assumptions for the corresponding parameters, are summarized in Table II. In particular, for the DH and PB models, we consider, for the dielectric constant of the solvent, both the bulk value for water and a value (arbitrarily) reduced by an order of magnitude. This allows us to account for, in a simple manner, the change in the dielectric response of water at an electrified interface and under confinement, even though such a response is more complex because of, in particular, the symmetry breaking induced by the walls [78][79][80][81][82], and may result in an unexpected enhancement of the permittivity in specific geometries [83].

We first observe that DH overestimates the capacitance, compared to the molecular simulations and experiments of Table I, by more than 1 order of magnitude, even when a reduced permittivity is introduced. The increase in capacitance with salt concentration observed in Table I is captured by DH theory, even though the scaling as the square root of the concentration overestimates this increase. The order of magnitude of the capacitance predicted by PB theory can be made comparable to the experiments if one uses the reduced permittivity and a maximum salt concentration inside the EDL of c_max = 1 M, even though such a value is small (see, in particular, the discussion of salt concentrations inside the micropores below). A larger value, c_max = 5 M (still below the solubility of NaCl in water at room temperature), results in an overestimate of the capacitance. In addition, the PB theory predicts that the considered conditions fall in the saturation regime, where the capacitance decreases slightly when the salt concentration increases, in contradiction with the experimental and molecular simulation results.
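Before turning to the modified Donnan numbers, here is a minimal numerical sketch of the mD equations defined above (ours; the parameter values follow the Table II footnote attributed to Ref. [18], the bisection is just one convenient solver choice, and the output need not match Table II exactly).

```python
import math

F = 96485.0          # C/mol, Faraday constant
kT_e = 0.025693      # k_B*T/e in volts at 298 K
c_bulk = 1000.0      # mol/m^3 (1.0 M)
mu_att = 3.0         # attraction term in units of k_B*T (Ref. [18] value)
C_st0 = 200e6        # F/m^3, C_St,vol,0 (Ref. [18] value)
alpha = 30.0         # F m^3 mol^-2 (Ref. [18] value)
dpsi = 1.0           # V, cell voltage
phi_pore = 0.233     # porosity from the ZEO++ analysis
rho_solid = 0.939e6  # g/m^3, i.e. 0.939 g/cm^3

def sigma_mi(dpsi_D):
    """Ionic charge density (mol/m^3) in the positive electrode micropores."""
    return -2.0 * c_bulk * math.exp(mu_att) * math.sinh(dpsi_D / kT_e)

def residual(dpsi_D):
    """Voltage balance dpsi = 2*(dpsi_D + dpsi_St) for the symmetric cell,
    with sigma*F = -C_St,vol*dpsi_St and C_St,vol = C_st0 + alpha*sigma^2."""
    s = sigma_mi(dpsi_D)
    dpsi_St = -s * F / (C_st0 + alpha * s * s)
    return dpsi - 2.0 * (dpsi_D + dpsi_St)

# The residual may change sign more than once; the physical solution is the
# first root. Bracket it with a geometric scan, then refine by bisection.
lo, hi, d = 0.0, None, 1e-7
while d < 0.5 * dpsi:
    if residual(d) < 0.0:
        hi = d
        break
    lo, d = d, 1.5 * d
assert hi is not None, "no sign change found"
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if residual(mid) > 0.0 else (lo, mid)

s = abs(sigma_mi(0.5 * (lo + hi)))
C_mass = (s * F / (0.5 * dpsi)) * phi_pore / rho_solid   # F per gram
print(f"Donnan potential ~ {0.5*(lo+hi)*1e3:.2f} mV, C ~ {C_mass:.0f} F/g")
```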
Finally, the modified Donnan model, using typical values from the literature for the various parameters, underestimates the capacitance by a factor of about 2-3 when a fixed value of μ_att is used, but it roughly captures the slight increase in capacitance with salt concentration. Since the effect of the self-consistent scheme to determine μ_att is to reduce the capacitance [77], it does not improve the agreement with the experimental and molecular simulation results in the present case. Overall, none of these models is able to correctly capture the order of magnitude of the capacitance and its increase with the salt concentration. Nevertheless, one should distinguish between the DH and PB models, which apply a priori better to planar walls or large pores [84], and the modified Donnan model, which applies in the full double-layer overlap regime, even though it describes the interactions of the ions with the surrounding fluid and electrode in a simplified mean-field way, which does not properly account for interactions on the molecular scale. We have therefore attempted to parametrize such a modified Donnan model from our molecular simulation results for the capacitance, changing as few parameters as possible. As shown in Table II (case c), this can be achieved by increasing the value of C_St,vol,0 by a factor of about 2.25. Since the parameters of the modified Donnan model are sensitive to many factors such as surface area or porosity and therefore depend on the preparation process, it is not unexpected that the values from the literature (even though for similar materials) are not straightforwardly transferable to the present experimental results. However, this underlines the need for experimental data to fit the modified Donnan model, whereas the present molecular simulation approach only uses the experimental capacitances for validation purposes.

TABLE II. Single-electrode capacitance in F g⁻¹ from DH, PB, and mD theories. Here, c_max is the maximum concentration allowed in the PB theory, accounting for volume saturation in Ref. [76]. For the modified Donnan model, with fixed or self-consistent attraction parameter μ_att, we use values from the literature for similar materials for the Stern capacitance parameters C_St,vol,0 and α (see text), and the electrode capacitance is computed from the charge of a symmetric electrochemical cell under a voltage of 1.0 V. Uncertainties are based on that for the specific surface area.

B. Water density and ionic concentration

Molecular simulations further provide information on the fluid confined inside the electrodes. Figure 3 shows the density profiles of water and ions across the simulation cell in the two simulated systems (the local carbon density is also indicated), while Table III summarizes the corresponding average ion concentrations in the bulk and inside both electrodes (per unit pore volume) as well as the associated water density. The water density profile in the region far from the electrodes is flat, and the corresponding density is equal to that of bulk water. Some layering over 2-3 water layers is observed at the interface between the electrode and the bulk region, due to the discreteness of the fluid. The water density per unit length of the simulation box is smaller inside the electrodes than in the bulk because of the presence of the carbon matrix. However, the water density inside the pores (see Table III) is in fact larger than in the bulk. Such an increase may be due to several factors, including confinement, which perturbs the structure of the fluid, in particular the ability to form hydrogen bonds, or electrostriction in the presence of the local electric fields inside the electrode [85]. We also note that there is an asymmetry between the electrodes, with a slightly larger water density in the positive electrode correlated with a slightly smaller ion concentration.
Such an asymmetry is likely due to the different ionic radii and solvation properties, or to the effect of the surface charge distribution on the orientation of the water molecules [40,70,86], which are not simple dipoles (features that are not included in any of the DH, PB, or modified Donnan models), and it is consistent with the above-mentioned slight asymmetry observed in the experimental capacitances. An asymmetry could be introduced using additional parameters, e.g., different c_max or γ (resp. μ_att) for different ions in the PB (resp. modified Donnan) model, which would have to be parametrized accordingly.

The average cation and anion concentrations in the bulk are equal and slightly lower than the ones anticipated when designing the simulations with the average target concentration; correspondingly, the ionic concentration inside the pores is larger than in the bulk. This observation is consistent with the experimental observations that motivated the introduction of the attractive excess chemical potential in the modified Donnan model. The magnitude of this increase corresponds to μ_att ∼ 1-2 k_B T, i.e., slightly smaller than but comparable to the values used in the literature. The increase in ion concentration inside the electrodes may also contribute to the smaller permittivity inside the pores, even though in the bulk the decrease for such concentrations does not go beyond a factor of 2 [87].

The ionic concentration inside the electrodes obtained from MD simulations is compared to the predictions of the modified Donnan model in Table IV. The mass of adsorbed ions is computed as the total mass of ions inside the electrodes divided by the mass of both electrodes [12], m_salt = [M_Na(n_Na⁺ + n_Na⁻) + M_Cl(n_Cl⁺ + n_Cl⁻)]/(m⁺ + m⁻), where the subscripts refer to the ions and the superscripts to the electrodes, and M_Na and M_Cl are the molar masses of the ions. The mD model underestimates the ionic concentration by a factor of about 3-4 if values from the literature are used for the parameters μ_att and C_St,vol,0. As for the capacitance, using the self-consistent μ_att scheme of Ref. [77] results in a slight decrease of the ionic concentration inside the electrodes; i.e., it does not improve the prediction. Using the values of μ_att and C_St,vol,0 fitted to the MD simulation results for the capacitance at high concentration, which reasonably reproduce the experimental capacitance at lower concentrations, also improves the prediction for the adsorbed salt, even though the agreement with simulations is not quantitative.
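The adsorbed-salt metric just defined is straightforward to evaluate from simulation output; the sketch below (ours, with made-up ion counts purely for illustration) implements it for the 3821-atom electrodes of this work.

```python
# Molar masses in g/mol
M_Na, M_Cl = 22.99, 35.45
N_A = 6.02214076e23

# Hypothetical ion counts inside each electrode (a real analysis would
# count ions within the electrode regions of the trajectory)
n_Na = {"+": 20, "-": 45}     # Na+ in the positive / negative electrode
n_Cl = {"+": 40, "-": 15}     # Cl- in the positive / negative electrode

# Two electrodes of 3821 carbon atoms each, as in the simulations
m_electrodes = 2 * 3821 * 12.011 / N_A     # grams

m_ions = (M_Na * (n_Na["+"] + n_Na["-"]) +
          M_Cl * (n_Cl["+"] + n_Cl["-"])) / N_A

print(f"adsorbed salt: {1e3 * m_ions / m_electrodes:.1f} mg per g of electrodes")
```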
In addition, the ionic concentration is larger in the negative electrode, where cations are in excess compared to anions. Such an asymmetry, which cannot be predicted at the level of the DH, PB, or modified Donnan models, likely arises from the difference in size between the two ions, even though the asymmetry of the water molecule may also play a role (as it does in the solvation of ions in the bulk). This difference in the volume occupied by the ions is also anticorrelated with the difference in water density inside the electrodes. This is consistent with previous simulations of ionic liquids and organic electrolytes inside CDC electrodes, which indicated that the overall volume of the liquid inside the pores is more or less unchanged [34,38]. The asymmetric ion concentrations also seem to suggest that the electrodes do not carry the same charge. However, this is not the case, because the interfacial regions (where water is layered; see Fig. 1) also carry an excess ionic charge, which is larger on the negative electrode side. We finally note that this asymmetry also suggests that the capacitances of the positive and negative electrodes may differ slightly. However, going beyond this assumption to determine the electrode capacitances from the cell capacitance would require dedicated approaches that go beyond the scope of the present work [88].

C. Solvation and confinement inside the electrodes

Before examining the consequences of the above considerations for CapMix and CDI on the macroscopic scale, as will be done in the next section, here we provide some additional microscopic information on the solvation of ions and their confinement within the electrodes. Such information is indeed much more difficult to obtain experimentally than the electrochemical response, while it is very important to understand the mechanisms at play. The solvation number of each ion is the number of water molecules in its solvation shell, defined by a cutoff radius determined from the position of the first minimum of the radial distribution functions (3.3 and 3.9 Å for Na⁺ and Cl⁻, respectively). In addition, the degree of confinement (d.o.c.) of each ion can be computed as the fraction of the solid angle occupied by electrode atoms within the first coordination shell of the ion [38].

Figure 4 illustrates the solvation number distribution for Na⁺ and Cl⁻, in the bulk and inside the positive and negative electrodes for the 1.0 M system (similar results, not shown, are obtained for the 0.5 M system). While in the bulk the distribution for Na⁺ is narrow around 6 water molecules, under confinement the average solvation number decreases (to 5.4 and 5.7 in the positive and negative electrodes, respectively) and the distribution becomes broader. Similar behavior is observed for Cl⁻, with a decrease from 7.4 in the bulk to about 7 in both the positive and negative electrodes and a broadening of the distribution.
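As an illustration of the solvation-number descriptor, the following sketch (ours; the positions are random placeholders, and a real analysis would read the trajectory and handle periodic boundary conditions) counts water oxygens within the ion-specific cutoff.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder coordinates in angstrom (a real analysis would read a trajectory)
ion_pos = rng.uniform(0.0, 20.0, size=3)          # one Na+ ion
water_O = rng.uniform(0.0, 20.0, size=(500, 3))   # oxygen positions

CUTOFF = {"Na": 3.3, "Cl": 3.9}   # first RDF minimum, from the text (angstrom)

def solvation_number(ion, oxygens, species="Na"):
    """Count water oxygens within the species-specific cutoff of the ion."""
    d = np.linalg.norm(oxygens - ion, axis=1)
    return int(np.count_nonzero(d < CUTOFF[species]))

print("solvation number:", solvation_number(ion_pos, water_O, "Na"))
```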
While the decrease in solvation number under extreme confinement may seem rather limited compared to our previous studies on ionic liquids and organic electrolytes [34,38], it is worth noting that Na⁺ ions are "traditionally thought to have an almost unbreakable solvation shell," as discussed by Sayer et al. [89]. We have therefore further examined the link between the coordination number and the confinement of the ions. The broadening of the distribution points to the existence of several microscopic environments experienced by both ions inside the electrodes. We further investigate this issue by computing the joint distribution of solvation number and degree of confinement, illustrated for the 1.0 M system in Fig. 5. Despite the decrease in its average solvation number, most of the Na⁺ cations do not experience direct contact with the electrode: in the negative (resp. positive) electrode, more than 91% (resp. 96%) have a d.o.c. smaller than 2%, and no cations with a d.o.c. larger than 8% were observed, resulting in an average d.o.c. of only 0.4% (resp. 0.2%) in the negative (resp. positive) electrode. While the majority of Cl⁻ anions in the negative electrode (about 92%) have a d.o.c. smaller than 10%, about 20% of the anions in the positive electrode have a d.o.c. larger than 10% and a correspondingly larger decrease in the solvation number (6.5 or less). Highly confined Cl⁻ anions are also observed inside the positive electrode (about 3%), with a d.o.c. larger than 30% and a solvation number as small as 3. However, the average d.o.c. of Cl⁻ remains moderate: about 3% (resp. 7%) in the negative (resp. positive) electrode. The larger ability of Cl⁻ to desolvate compared to Na⁺ is consistent with their different hydration free energies.

More generally, these results show that charging the capacitor not only unbalances the ionic concentrations inside the electrode micropores but also depends on more complex molecular features. While a detailed study of such specific effects is clearly out of the scope of the present work, they are likely to play a role in the charge and discharge and are therefore important in practice for the applications. Molecular simulation provides an appropriate tool to investigate such effects without introducing them a priori in a model.

D. Implications for CapMix and CDI

Finally, we now discuss the implications of our findings for the harvest of blue energy by capacitive mixing and for CDI. The CapMix cycle is illustrated in the charge-voltage plane in Fig. 6. Using a symmetric electrochemical cell with a voltage supply of Δφ and two electrolytes with different concentrations (leading to cell capacitances C₁ > C₂ for sea and river water, respectively), the energy extracted per cycle, ΔE_cycle, is given by the area of the shaded trapezoidal region and can be expressed in terms of the effective capacitance C_eff = C₁C₂/(C₁ + C₂) of the two capacitors in series (note that here C₁ and C₂ refer to the full electrochemical cell and not to electrode capacitances). We now estimate ΔE_cycle for the CDC considered here by considering typical concentrations of river and sea water (20 and 500 mM, respectively) and a typical voltage supply Δφ = 300 mV, as done in previous studies [8][9][10]. Table V reports the capacitance and ion adsorption predicted by the mD model with parameters fitted to the simulation data at 0.5 and 1.0 M, as a function of salt concentration. While not perfect, the agreement with the available experimental capacitance (see Table I) seems sufficient to estimate the capacitance at an even lower concentration of 20 mM, with the result 74 F g⁻¹. In turn, Eq. (5) predicts a theoretical energy per cycle of ΔE_cycle ∼ 0.6 J g⁻¹. Such a value is smaller than the value anticipated for CDCs by Brogioli when introducing the idea of CapMix [8], namely, 1.6 J g⁻¹ with comparable salt concentrations (24 and 600 mM). However, that estimate was based on the assumption of a capacitance of 300 F g⁻¹, which is too large compared to the actual one (see Table I). The order of magnitude, however, remains comparable. In addition, this lower value remains about 3-4 times larger than the experimental data reported for the same voltage with a porous carbon with larger pores (density 0.58 g cm⁻³, porosity of 65%, SSA 1330 m² g⁻¹) [9]. This confirms the potential interest of CDCs for CapMix.
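To connect the cycle of Fig. 6 to numbers, the sketch below (ours) traces the four segments described in its caption in the charge-voltage plane for assumed cell capacitances C₁ and C₂ and evaluates the enclosed area with the shoelace formula; the capacitance values are placeholders, not the fitted ones of Table V.

```python
# CapMix cycle in the charge-voltage plane (segments A-D of Fig. 6).
# C1, C2: cell capacitances in sea/river water; dphi: supply voltage.
C1, C2 = 25.0, 18.0   # F per gram of cell (illustrative placeholders)
dphi = 0.3            # V

# Cycle vertices (Q, V): charge at dphi in sea water (A), swap electrolyte
# at constant charge (B), discharge to dphi in river water (C), swap back (D).
pts = [
    (C2 * dphi, C2 * dphi / C1),   # start of A, on the line V = Q/C1
    (C1 * dphi, dphi),             # end of A: fully charged at dphi
    (C1 * dphi, C1 * dphi / C2),   # end of B: voltage rise at constant Q
    (C2 * dphi, dphi),             # end of C: discharged to dphi on V = Q/C2
]

# Shoelace formula: the enclosed area is the energy extracted per cycle
area = 0.0
for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
    area += x1 * y2 - x2 * y1
E_cycle = abs(area) / 2.0

print(f"energy per cycle ~ {E_cycle:.3f} J per gram")
```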
Previous studies of CDC electrodes focused on their application to CDI rather than CapMix, so a direct comparison of the capacitance and adsorbed salt content is difficult. Indeed, in such cases, the experiments are performed at low salt concentration because, at high concentrations, CDI consumes more energy than other desalination processes such as reverse osmosis [12]. For example, Porada et al. reported capacitances of about 10-15 F g⁻¹ and salt adsorption capacities of 10-15 mg g⁻¹ for concentrations of 5 mM and various CDCs when working with voltages of about 1 V [18,19]. From the mD model with parameters fitted to reproduce our MD simulation results for the capacitance (at high concentration), we extrapolate the salt adsorption capacity to 10 mg g⁻¹. Keeping in mind that this model even underestimates the simulation results at high concentration, this confirms the potential of CDCs, in general, for CDI compared to other materials (with typical values mainly in the range 1-10 mg g⁻¹; see, e.g., Table 1 in Ref. [12]), and it simultaneously suggests that there is room for improvement to optimize CDCs for this application.

IV. CONCLUSION AND PERSPECTIVES

We have shown that molecular simulation provides a reliable tool to investigate aqueous electrolytes in realistic nanoporous carbon electrodes, for sufficiently large salt concentrations for which such simulations can be done in practice. The predicted capacitances are in excellent agreement with experiments. In contrast, Debye-Hückel and Poisson-Boltzmann theories cannot be applied under such extreme confinement, even by taking into account the decrease in permittivity induced by the latter or by introducing excluded volume following the approach that was successful with ionic liquids. These models should be used with caution for nanoporous carbons such as CDCs to estimate the capacitance or the extracted energy. In contrast, we have shown that the molecular simulation results at high concentrations can be used to parametrize a modified Donnan model, which then allows one to extrapolate the predictions to lower concentrations relevant for river water in CapMix and for CDI, finding reasonable agreement with the experimental capacitance. This approach is therefore fundamentally different from fitting the experimental data to a modified Donnan model, which is standard practice in CDI (such models are used much less in the CapMix community, where Poisson-Boltzmann theory is usually preferred).

While here we have not considered classical DFT, recent work capturing excluded volume and electrostatic correlations between ions suggests that these effects may increase the energy produced per unit area [31], therefore further overestimating this quantity. Nevertheless, it would be necessary to investigate the predictions of classical DFT under comparable conditions and, ideally, also in more realistic geometries. Molecular simulations could then provide reference data to validate DFT or even help build better functionals for that case [47]. Explicitly including the structure of the solvent [90][91][92] may also significantly improve the accuracy of the description of the confined fluid. In turn, classical DFT would also provide predictions at low salt concentrations, which are out of reach for molecular simulations. Meanwhile, we have shown that the modified Donnan model may be a reasonable alternative to make simple predictions, provided that the corresponding parameters are correctly adjusted.
Overall, the present results underline the potential of CDCs for both CapMix and CDI, thanks to their pore size comparable to that of the ions. Even though the associated computational cost, which in particular prevents us from reaching the low-concentration regime, does not position molecular simulation as an alternative for the daily prediction of material properties for applications, the present work clearly demonstrates its interest for investigating, in future work, the factors governing charge storage and salt adsorption in these materials, by quantifying, e.g., solvation numbers and degrees of confinement, as shown here, or diffusion coefficients of ions and water inside the pores. More generally, it will also help to understand, on the molecular scale, the effects of physicochemical factors such as the geometry of the electrodes (considering not only CDCs but also other nanoporous carbon materials), hydrophilicity [93], ion-specific effects, or the possible presence of chemical moieties such as carboxylic groups, and hence to guide the design of improved materials. Future work should also investigate the dynamics and the energy loss during charging and discharging, which may be larger than with more porous materials, even though previous work with ionic liquids and organic electrolytes demonstrated that the dynamics was not slowed down to a point preventing their use in supercapacitors (see, e.g., Ref. [36]). As in previous work in this latter context, molecular simulation can serve as the starting point for a multiscale description [37], which captures possible heterogeneities on larger scales, such as the finite size of carbon grains, on the scale of tens or hundreds of micrometers [35].

FIG. 2. Cyclic voltammograms of electrochemical cells based on CDC and aqueous solutions of sodium chloride at concentrations ranging from 0.05 to 1.0 M as electrolytes. The potential scan rate is 1 mV s⁻¹.

FIG. 3. Density profiles along the simulation cell, for an average salt concentration of 0.5 and 1.0 M (top and bottom, respectively). The negative (resp. positive) electrode is on the left (resp. right) side of the cell.

FIG. 4. Distribution of solvation number for Na⁺ (left panel) and Cl⁻ (right panel) ions in the bulk, negative and positive electrodes, for the 1.0 M system.

FIG. 5. Joint distribution of solvation number and d.o.c. of Na⁺ (left panel) and Cl⁻ (right panel) ions in positive (dark blue, red) and negative (light blue, orange) electrodes, for the 1.0 M system. The histograms correspond to the discrete values of the solvation number and to finite intervals of the continuous d.o.c. (of width 2% and 10% for Na⁺ and Cl⁻, respectively).

TABLE I. Electrode capacitance (in F g⁻¹) from molecular simulations and experiments. Note that the simulations cannot be performed for the lower concentrations.

Footnotes to Table II: (a) Ref. [18]: μ_att = 3 k_B T, C_St,vol,0 = 200 MF m⁻³, and α = 30 F m³ mol⁻². (b) Same C_St,vol,0 and α as above, but with μ_att = E/c_mi,ions, with c_mi,ions the salt concentration in the pores and E = 300 k_B T mol m⁻³; see Ref. [77].

TABLE III.
Cation and anion concentrations, as well as water density, in the bulk and in the electrode pores, from molecular simulation under a voltage of 1 V. The uncertainties are of order 0.1 M for the ion concentrations and 0.01 g cm⁻³ for the water densities.

TABLE IV. Ionic concentration inside the electrode micropores for a symmetric electrochemical cell under a voltage of 1.0 V, from molecular dynamics simulations and from the modified Donnan model with various assumptions. Results are given as the total mass of ions inside both electrodes, per unit mass of both electrodes (see the corresponding equation in the text).

FIG. 6. Capacitive mixing thermodynamic cycle, using two electrolytes with different salt concentrations. Segment A: the electrochemical cell is charged under a supply voltage Δφ in the presence of the more concentrated electrolyte (sea water), corresponding to a large cell capacitance C₁. Segment B: the voltage between the electrodes rises when the electrolyte is replaced by the more dilute one (river water, small cell capacitance C₂) under open-circuit conditions. Segment C: the electrochemical cell is then discharged down to the supply voltage Δφ, before being flushed with the concentrated electrolyte under open-circuit conditions (segment D). The energy extracted per cycle, ΔE_cycle, is equal to the area of the shaded region.
9,634.8
2018-04-26T00:00:00.000
[ "Materials Science" ]
Dynamic analysis of torus involute gear including transient elastohydrodynamic effects

The torus involute gear can compensate for large axial misalignments and may possess good meshing characteristics without lead correction. In order to study its dynamic characteristics and verify its feasibility for practical application, a new efficient rigid-elastic coupled multi-tooth dynamic model is established, which includes the effects of the lubrication oil film and tooth deformations directly in the contact simulation of the gears. In this model, each tooth is connected with the gearwheel by a rotatable spring-damper element whose stiffness is calculated through an analysis of the tooth deformation. The normal tooth contact force is determined via the Lankarani and Nikravesh model. The variations of the contact stiffness and of the rotatable spring stiffness with the contact point are both taken into account. Combined with tooth contact analysis, the computation of the friction coefficient is implemented with high efficiency by introducing the average lubrication oil film height. A three-dimensional multi-body model of a torus involute gear pair is employed and verified by an impact experiment. The simulated results provide useful information about tooth impacts, dynamic transmission error and lubrication conditions such as oil film heights and friction coefficients, and they show that this type of gear can work with good meshing characteristics. The contributions in this paper lay a theoretical basis for the application of the torus involute gear.

Introduction

The cylindrical involute gear has the advantages of line-contact meshing, constant working pressure angle, insensitivity to center distance variations and simplicity in use. However, in most cases, the transmission performance of line-contact conjugate surfaces is not satisfactory: high sensitivity to machining or mounting errors, and forced deformation, uneven load distribution and edge contact caused by variations of the tooth contact stiffness or bending stiffness. Topological modification technology [1,2] is usually utilized to produce mismatched meshing of line-contact engaged gears, but in practice it is difficult to design and optimize the mismatched meshing regions. Mitome [3] invented spherical gears, which geometrically have two types of tooth profiles, convex teeth and concave teeth. Insensitive to machining or mounting errors, the spherical gear set allows large axial misalignments without meshing interference. For a spherical gear set, if the spherical center of the tip surface or root surface deviates from the gear axis, the gear set can no longer be called a spherical gear set but a torus involute gear set [4]. This type of gear set, easily machined, can also compensate for large axial misalignments.
As shown in Fig. 1, the tip surface or root surface is respectively an outer or inner torus generated by a circle whose center is not on the gear axis. Each flank of a cross-section along the face width is still an involute profile generated from the same base circle. For the torus involute gear with convex teeth (hereinafter referred to as the convex gear), the tooth thickness on the reference circle gradually decreases from the middle towards both ends of the face width, whereas the tooth thickness on the tip circle increases. For the torus involute gear with concave teeth (hereinafter referred to as the concave gear), the tooth thickness on the reference circle gradually increases from the middle towards both ends, while the tooth thickness on the tip circle decreases. Although studies on mathematical models and tooth contact analysis have been carried out [5], further evidence, especially on dynamic performance, needs to be provided to verify that the torus involute gear (TIG) can work with good meshing characteristics.

As the main excitations of motions along the off-line-of-action (OLOA), friction forces usually couple with backlash and time-varying meshing stiffness to make gear pairs act as nonlinear and time-varying systems. Additionally, power dissipation due to viscous shear of the lubrication oil film along the tooth contact interfaces forms the main source of mesh viscous damping [6]. On the other hand, dynamic meshing forces and slipping velocity fluctuations affect the tribological behavior in terms of oil film height, lubrication viscosity and friction forces. Hence it is necessary to understand them together with the acting forces and torques, so as to ensure a sufficient transmission lifetime. Currently, gear dynamics is usually investigated based on vibration theory [7,8]. For simplicity, the contact of tooth surfaces is modeled through a torsional spring-damper element which acts tangentially to the base circle of each gear. The dynamic mesh impact cannot be calculated accurately in such a torsional vibration model. As a numerical approach, the multi-body dynamics method (MBM) can be used to efficiently model the contact of gear pairs with acceptable accuracy and considerably less computational effort compared with the finite element method (FEM) [9]. However, meshing friction is usually simplified without considering the effects of the lubrication oil film in most multi-body gear models, and the contact stiffness is time-independent. Fietkau [10] proposed an efficient transient elastohydrodynamic gear contact simulation method with the commercial code Simpack, which took into account oil films and elastic deformations directly in the multi-body simulation. However, the variation of the contact stiffness over the mesh cycle is not accounted for, and the Reynolds equation needs to be solved with the complicated multi-grid method.

In view of the complexity of the tooth shape of the TIG and its point-contact pattern, a new efficient multi-body model is required, which can account for the effects of the lubrication oil film, tooth deformation and the variation of the contact stiffness. Therefore, the main motivation of this paper is to build a new multi-tooth dynamics model including transient elastohydrodynamic effects with low computational effort, to understand the dynamic characteristics of the TIG comprehensively and to verify the feasibility of the TIG set, which lays the theoretical foundation for its practical application.
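Since every cross-section flank of a TIG is an ordinary involute of the shared base circle, a profile is easy to generate numerically. The sketch below (ours) uses the standard parametric involute equations; the base radius is an assumed value, not a parameter from Table 1.

```python
import math

def involute_profile(r_b, t_max=1.2, n=50):
    """Parametric involute of a base circle of radius r_b:
    x = r_b*(cos t + t*sin t), y = r_b*(sin t - t*cos t)."""
    pts = []
    for i in range(n + 1):
        t = t_max * i / n
        x = r_b * (math.cos(t) + t * math.sin(t))
        y = r_b * (math.sin(t) - t * math.cos(t))
        pts.append((x, y))
    return pts

# Example: base radius 27 mm (an assumption, not a parameter from Table 1)
for x, y in involute_profile(27.0, n=5):
    print(f"x = {x:7.3f} mm, y = {y:7.3f} mm")
```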
Multi-tooth dynamics model

An approach of Ebrahimi [9] with circumferentially movable teeth is adopted. Fig. 2 shows a model of the meshing multi-tooth system. Each rigid tooth has one degree of freedom (DOF): the rotation about the center of the rigid gearwheel. The gearwheel itself has one rotational DOF with respect to the ground. All teeth and their respective contacts are treated separately. In order to account for the elastic deflection of each tooth, every tooth is connected with the gearwheel by a rotatable spring-damper element. The forces resulting from the flank contact are calculated based on a contact force model including the effects of mixed elastohydrodynamic lubrication (EHL).

Calculation of rotatable spring element stiffness

In order to compute the stiffness of a rotatable spring-damper element, its relation with the tooth deformation needs to be determined. As shown in Fig. 3, for simplicity, the misalignment of the LOA induced by flank contact deformation is neglected, i.e., contact points only translate along the LOA over meshing. In terms of the geometric characteristics of involutes, the stiffness of a rotatable spring-damper element is calculated as given in Eq. (1). As is known, a gear with continuous linear shifting is called a beveloid gear or conical involute gear. In essence, the TIG is a special spur gear with continuous shifting of second order. To calculate the tooth deformation, each TIG is approximately viewed as a combination of two conical involute gears, as depicted in Fig. 4. In the middle cross-section of a TIG, a coordinate system is established as in Fig. 5. Then, based on elastic mechanics, the relations between the Hertzian contact, bending, shear, axial compressive and fillet foundation stiffnesses and the corresponding Hertzian, bending, shear, axial compressive and fillet foundation energies are obtained. The stiffness related to the elasticity of the root foundation follows [11]. The total tooth stiffness is then obtained by combining these contributions. Over a mesh cycle the stiffness of a single tooth can be calculated from the effective root to the addendum by treating the pressure angle of each contact point as a variable. Then Eq. (1) is employed to give the time-varying stiffness of a rotatable spring-damper element.

Normal contact modeling

Modeling collision and contact accurately is essential for simulating multi-body systems. The straightforward force-penetration relation proposed by Lankarani and Nikravesh [12], given in Eq. (5), is widely used for mechanical contacts because of its simplicity and ease of implementation in a computational program, and because it is the only model that accounts for the energy dissipation during the impact [13]. The contact stiffness depends on the material and geometry properties of the contacting surfaces. The comprehensive curvature radii for the meshing points are acquired from tooth contact analysis (TCA). Then the contact stiffness, which varies with the mesh point, is obtained over a mesh cycle.
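As a concrete illustration of the normal contact law named above, here is a minimal sketch (ours) of the commonly cited Lankarani-Nikravesh form F = Kδⁿ[1 + 3(1 − c_e²)δ̇/(4δ̇⁽⁻⁾)] with n = 1.5 for Hertzian contact; all numerical values are placeholders, not parameters of the studied gear pair.

```python
def lankarani_nikravesh(delta, delta_dot, K, v_impact, c_e, n=1.5):
    """Contact force with hysteresis damping:
    F = K*delta**n * (1 + 3*(1 - c_e**2)/4 * delta_dot / v_impact).
    delta: penetration (m), delta_dot: penetration rate (m/s),
    K: generalized contact stiffness, v_impact: initial impact velocity,
    c_e: restitution coefficient."""
    if delta <= 0.0:
        return 0.0  # bodies not in contact
    damping = 3.0 * (1.0 - c_e**2) / 4.0 * delta_dot / v_impact
    return K * delta**n * (1.0 + damping)

# Placeholder values, for illustration only
F = lankarani_nikravesh(delta=2e-6, delta_dot=0.05,
                        K=5e9, v_impact=0.5, c_e=0.9)
print(f"normal contact force ~ {F:.1f} N")
```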
Calculation of friction coefficient based on mixed EHL model

As most gear transmissions work in mixed EHL, the normal contact load is shared by the lubrication oil film and the asperity contacts. Similarly, the friction force includes two parts.

Calculation of friction force of lubrication oil film

The friction force of the lubrication oil film in mixed EHL is the shear force of the oil film acting in the tangential direction of the tooth surface, which includes a sliding friction force and a rolling friction force. Since the sliding friction force is usually much larger than the rolling one, only the sliding friction force is considered; it is the sum of the shear forces of the lubrication oil film acting on the tooth surfaces. To calculate it, the non-Newtonian Ree-Eyring model is employed. Neglecting the variation of temperature, the dynamic viscosity can be approximately represented by the Roelands-type pressure-viscosity relation η = η₀ exp{(ln η₀ + 9.67)[(1 + 5.1 × 10⁻⁹ p)^Z − 1]}, with η₀ the viscosity at ambient pressure, p the pressure and Z the viscosity-pressure index. On the grounds of Hertzian contact theory, when a load is exerted on two contacting bodies, the contact region is approximately regarded as an ellipse, and the average contact stress is calculated from the load and the area of this ellipse. According to the studies of Gao [14] and Greenwood [15], the relation between the contact area of the lubrication oil film and the film height ratio can be obtained by regression analysis.

Calculation of friction force in asperity contact

Theoretically, this friction force is the sum of the shear forces in the asperity contacts, and the friction coefficient of each asperity is approximately constant. It is generally accepted that asperity contact is in the state of boundary lubrication, with a boundary friction coefficient of 0.1-0.2 [16]; in this paper it is set to 0.13. How to evaluate the asperity load ratio has drawn much attention in studies of EHL. The method proposed by Jiang [17] is thought to be more applicable to industrial gear drives; based on numerical and experimental analysis, formulae for the asperity load ratio and the film height ratio were presented there. Finally, the total friction coefficient is obtained by combining the oil film and asperity contributions. As can be seen from the above analysis, several of these quantities need to be calculated through TCA, and the computation of the friction coefficient may be carried out as in the flow chart in Fig. 6.
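A compact numerical sketch of this mixed-lubrication evaluation (ours): the Roelands constants follow the relation reconstructed above, the load sharing between film and asperities is schematic, and every numerical value is a placeholder rather than a parameter from this paper.

```python
import math

def roelands_viscosity(eta0, p, Z):
    """Roelands pressure-viscosity relation (pressure p in Pa):
    eta = eta0 * exp{(ln(eta0) + 9.67) * [(1 + 5.1e-9*p)**Z - 1]}."""
    return eta0 * math.exp((math.log(eta0) + 9.67) *
                           ((1.0 + 5.1e-9 * p) ** Z - 1.0))

def total_friction(F_normal, f_asp, mu_boundary, F_film_shear):
    """Schematic blend of asperity and film contributions:
    mu = (mu_boundary * f_asp * F_normal + F_film_shear) / F_normal,
    with f_asp the asperity load ratio and F_film_shear the integrated
    oil-film shear force."""
    return (mu_boundary * f_asp * F_normal + F_film_shear) / F_normal

eta = roelands_viscosity(eta0=0.05, p=0.8e9, Z=0.6)   # Pa*s, placeholders
mu = total_friction(F_normal=350.0, f_asp=0.2,
                    mu_boundary=0.13, F_film_shear=10.0)
print(f"viscosity at 0.8 GPa: {eta:.1f} Pa*s, friction coefficient: {mu:.3f}")
```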
Model verification

As the measurement of contact forces is very difficult or even impossible for many technical problems, so far there is no direct way of measuring the dynamic contact forces of teeth. To validate the simulation model, a basic experimental impact investigation is performed using one gear and one simple impact body. For the simplicity of the experimental setup, the investigated gear is not rotating. A cuboid is used as the impact body. The recoil velocities of the cuboid are indirectly measured using an oscilloscope, and these velocities are compared with simulations. According to the impulse theorem, when there is no friction force, only the gravity force and the contact force contribute to the velocity change of the impacting cuboid. An assumption can therefore be made that if the recoil velocities of the cuboid agree, the contact forces must agree too. An air-supplied slide rail is used to guarantee zero friction. As illustrated in Fig. 7, the cuboid translates on the air-supplied slide rail in the direction of the LOA and collides with a tooth. The slide rail is mounted on a precision rotary stage that allows adjustment of one alignment angle, and the rotary stage itself is mounted on a frame that allows adjustment of the second alignment angle. Electrical signals can be acquired to determine the time interval between two successive collisions, so that the initial velocity for the subsequent collision is v₀ = 0.5 g Δt₀ sin γ and its recoil velocity is v₁ = 0.5 g Δt₁ sin γ, with Δt₀ and Δt₁ the corresponding measured time intervals, g the gravitational acceleration and γ the inclination angle of the rail. The investigated gears have the parameters shown in Table 1. Table 2 presents the measured and simulated results; from this table we note that the simulated results agree well with the measured ones.

Results and discussion

A numerical example of a TIG pair is provided to validate the above approach. From the parameters in Table 1, the tooth surfaces of the engaged gears are constructed from the mathematical models, and then 3-D models can be built. Fig. 8 shows the tooth surfaces of the engaged TIGs. Then TCA is conducted for each rotation angle of the convex gear (i.e., gear 1) based on the theory of differential geometry. As displayed in Table 3, the principal curvatures, the comprehensive curvature radius, and the semi-major and semi-minor axes are obtained for each contact ellipse. Using Fourier fitting, they can be represented as functions of the angular displacement of the gearwheel.

Through polynomial fitting, the spring-damper stiffness and the contact stiffness can be represented as functions of the pressure angle. Fig. 9 displays the spring-damper stiffness varying with the pressure angle at each contact point, and the contact stiffness of the TIG pair is shown in Fig. 10. From the above stiffnesses, the mesh stiffness can be calculated.

To verify the analytical result for the mesh stiffness, an 8-9-tooth finite element model is built as follows (see Fig. 11). The preprocessing of the finite element model is completed in Hypermesh; either of the engaged gears can only rotate about its own axis; the teeth are connected with a central node by massless rigid elements; the driving gear has a prescribed angular velocity, and a torque of 20 N·m is exerted on the other gear. The contact algorithm is explicit surface-to-surface contact. The dynamic finite element analysis (FEA) is performed in ABAQUS to obtain the comprehensive deformation of the TIGs. The mesh stiffness is then computed, and the comparison of the two methods is made in Fig. 12. On the basis of the above work, a 3-D multi-rigid-body dynamic model is built in ADAMS as follows: either of the gearwheels has one rotational DOF with respect to the ground; each tooth can rotate about its gearwheel center and is connected with the gearwheel by a rotatable spring-damper element; contact pairs are defined according to the mating relationship of the teeth; the pinion (gear 1) is driven at a constant angular velocity, and a constant torque T₂ = 20 N·m is simultaneously exerted on the gear (gear 2).

Dynamic transmission error

Fig. 13 displays the effect of the original viscosity on the dynamic transmission error (DTE). Although a variation of the original viscosity changes the friction coefficient, the DTE is observed to vary only slightly, which means that the friction force has little effect on the DTE. Besides, it can be seen from this figure that the transmission precision of the gear set is high, because the TIG originates from conventional involute gears.
14 shows the effect of the angular velocity on the DTE. It is noteworthy that the magnitude of the DTE does not always rise with increasing angular velocity; when the angular velocity exceeds 600 r/min, the magnitude of the DTE rises as the velocity increases. In general, the angular velocity has a greater effect on the DTE than the original viscosity.

Contact force

Fig. 15 shows the normal forces for three successive meshing tooth pairs, from which the transmission continuity of the gear set can be checked. The duration of double-pair meshing over one mesh cycle is Δt_d, and the mesh cycle period is Δt_m, so the contact ratio is Ω = 1 + Δt_d/Δt_m = 1 + 0.0188/0.044 = 1.4273, which shows that transmission continuity is achieved (a short numerical check is given at the end of this section). Moreover, it can also be seen from Fig. 15 that the normal contact force is nearly independent of the lubrication viscosity when the effects of EHL are considered. Fig. 16 shows the result obtained by superposing the normal contact forces of all tooth pairs, from which the magnitude of the normal contact forces can be verified preliminarily. Using the conventional gear design approach, the normal contact force is F_n = 2T₂/d₂ = 20000/57.56 = 347.49 N, while the average value from our model is 353.35 N. The increase in the average normal contact force is induced by the impacts during meshing-in and meshing-out as well as by the friction forces. Besides, Fig. 16 indicates that when the preceding tooth pair meshes out, at the pitch point, or when the subsequent pair meshes in, there is a major fluctuation of the normal contact force. As shown in Fig. 17, at the pitch point the relative slip velocity between the tooth flanks changes its sign, and therefore so does the friction force; as a result, the normal contact force tends to decline (see Fig. 16). Moreover, the original viscosity affects the friction force only slightly: the friction force rises as the original viscosity falls, because the asperity contact force rises. Fig. 18 displays the friction force of the 4th tooth pair along the LOA for different angular velocities. It can be seen from Fig. 18 that the angular velocity has a great effect on the friction forces. As the angular velocity rises, the relative slipping velocity and the entraining velocity of the engaged flanks increase, and consequently the fluctuation and the peak value of the friction force tend to increase; however, the general tendency of all plots is similar.

Frequency-domain analyses of the normal contact force and of the angular acceleration of the driven gear are shown in Fig. 19. In both spectra the primary resonance is found at 40.05 Hz, nearly equal to the gear mesh frequency (z₁n₁/60, where z₁ is the tooth number of gear 1 and n₁ is its rotational speed in r/min), which is associated with a mode combining transverse motion along the LOA and torsional motion of the wheels.

Lubrication oil film height

Fig. 20 displays the average lubrication oil film height of the 4th tooth pair for different original viscosities (those of the 3rd and 5th pairs are also plotted as thin black lines). As shown in Fig.
20, the original viscosity affects the lubrication oil film height markedly: with increasing original viscosity, the average film height increases. For a generic tooth pair, as it begins to mesh in, the normal contact force rises and the average lubrication oil film height gradually declines; when the subsequent tooth pair meshes in, the normal contact force decreases and the average film height rises again. Obviously, an increasing angular velocity causes the entraining velocity of the engaged flanks to increase; as a result, the average lubrication oil film height tends to rise and to fluctuate more clearly. All in all, both the angular velocity and the original viscosity affect the lubrication oil film height more strongly than they affect the contact forces.

Conclusions

A new, efficient model for gear contact simulation in mixed EHL is presented and integrated into a multi-body simulation environment. The lubrication oil film height and the tooth deformations are directly accounted for in this model, and both the time-varying contact stiffness and the rotatable spring stiffness are taken into account. Combined with TCA, the computation of the friction coefficient is implemented with high efficiency. Subroutines for the computation of the normal tooth contact forces and the friction coefficient are developed and called by the numerical integrator in the commercial code ADAMS. A 3-D multi-body model of a TIG set is employed and verified by an impact experiment. The results indicate that the continuity and precision of transmission can be guaranteed using this type of gear set. Important quantities such as the dynamic transmission error, the normal and tangential contact forces, and the average film heights are calculated with high efficiency for different conditions, which lays a foundation for the application of TIGs. Besides, it is feasible to analyze the tribo-dynamic behavior of other point-contact gears using the approach in this paper.

Fig. 8. Tooth surfaces of TIGs: a) convex gear; b) concave gear.
Table 2. Recoil velocity of the cuboid.
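For readers who want to reproduce the two verification numbers quoted above, the following is a minimal arithmetic sketch in Python. The time intervals (0.0188 and 0.044, in the same units), the quoted step 20000/57.56 for 2T₂/d₂, and the simulated mean of 353.35 N are taken directly from the text; nothing else is assumed.

    # Numeric check of the contact ratio and the nominal normal contact
    # force quoted in the Results section.

    def contact_ratio(dt_double: float, dt_mesh: float) -> float:
        # Contact ratio = 1 + (double-pair meshing duration / mesh cycle period)
        return 1.0 + dt_double / dt_mesh

    omega = contact_ratio(0.0188, 0.044)
    print(f"contact ratio = {omega:.4f}")          # 1.4273, as in the text

    # Conventional design estimate F_n = 2*T2/d2, quoted as 20000/57.56.
    f_n = 20000.0 / 57.56
    print(f"nominal normal force = {f_n:.2f} N")   # about 347.5 N (text: 347.49 N)
    print("simulated average    = 353.35 N")       # mean value from the model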
'We are all in the same boat': a qualitative cross-sectional analysis of COVID-19 pandemic imagery in scientific literature and its use for people working in the German healthcare sector

Background: The COVID-19 pandemic presented a significant challenge to professional responders in healthcare settings. This is reflected in the language used to describe the pandemic in the professional literature of the respective professions. The aim of this multidisciplinary study was to analyze the linguistic imagery in the relevant professional literature and to determine how far different professional groups identify with it and what emotional effects it has.

Method: A list of 14 typical, widespread and differing imageries for COVID-19 in the form of single sentences (e.g., "Until the pandemic is over, we can only run on sight.") was presented to 1,795 healthcare professionals in an online survey. The imageries had been extracted from a qualitative search of more than 3,500 international professional journals in medicine, psychology and theology. Ratings of agreement with these imageries and of the feelings they evoked were subjected to factor analysis.

Results: Based on the list of imageries presented, three factors could be identified for agreement and two factors for the induced feelings. Broad agreement emerged for imageries of "fight against the crisis" and "lessons learned from the crisis", while imageries of "acceptance of uncontrollability" tended to be rejected. Imageries of "challenges" tended to produce a sense of empowerment in participants, while imageries of "humility" tended to produce a sense of helplessness.

Conclusion: Based on the qualitative and subsequent quantitative analysis, several factors were identified for imageries of the COVID-19 pandemic that have been used in the literature. Agreement with the imageries is mixed, as is the assessment of how helpful they are.

Introduction

The COVID-19 pandemic constituted a significant challenge to the global healthcare system. For instance, employees in the healthcare system reported substantial psychological distress (1)(2)(3), which manifested in symptoms associated with depression, anxiety and post-traumatic stress disorder (4)(5)(6). However, the challenges differed across professional groups. While physicians and nurses faced particularly difficult working conditions, especially at the beginning of the pandemic, which made this group especially vulnerable to mental distress (7)(8)(9), other groups, such as psychologists, were able to reduce patient contact, at least in part, e.g., with the help of telemedicine (10). At the same time, the challenges faced by hospital spiritual/pastoral care workers remain largely unknown. This group plays an important role in the care of elderly and palliative patients (11,12) but was largely considered dispensable and therefore received little attention in the public discourse of the crisis (13).
Only few empirical studies have looked at the effects of imagery on the recipient during the COVID-19 pandemic, and it is therefore still an open question how imagery affects the perception of the pandemic. Pisano and colleagues (27) demonstrated that participants had formed new semantic associations (e.g., "trench"-"hospital") during the pandemic that were stronger and more readily available than classical associations (e.g., "trench"-"soldier"). Further research showed, by experimentally creating and comparing different news articles about the pandemic, that the inclusion of metaphors in the articles predicted greater self-efficacy in readers (28). This was particularly true for metaphors referring to the possibility of change, but was also found for war metaphors. In line with these findings, Naamati-Schneider and Gabay (16) found that metaphors that created a sense of mission and meaningfulness were helpful in coping with an extreme health crisis, while metaphors that generated a sense of isolation and sacrifice intensified helplessness and fear, thus undermining effective coping mechanisms. Past research has also shown that seriously ill patients found it helpful when healthcare professionals used metaphors in their conversations (29). For a general overview of the use of metaphors in the healthcare sector, see (17).

The aim of this multidisciplinary study was to analyze imagery of the COVID-19 pandemic in the medical, psychological, and theological professional literature, and to determine the feelings of different professional groups towards it. This helps to understand how different professional groups see themselves in the pandemic, but also gives insight into what language is helpful for these groups when talking about the pandemic. Additionally, we wanted to find out how identification with certain linguistic imagery was predictive of stressors of the pandemic, or of protective personality traits.

Data collection

The online survey was conducted in March and April 2022. This was at the end of the fifth COVID-19 wave, after two years of pandemic, when many public safety measures were starting to be relaxed in Germany (30). The participation link was distributed through online platforms and mailing lists of a large German university hospital and further general hospitals, as well as several medical professional associations. The study was approved by the ethics committee of the local medical faculty (reference number: 125_20). All participants provided their online informed consent prior to completing the survey.

The survey consisted of 137 items and took approximately 30 minutes to complete. The complete questionnaire, including all scales, was administered in German. It included questions on age, gender, living conditions, children, migration background, occupational characteristics, profession, years of professional experience, employment status, and a number of further questionnaires. In our analysis, we focused on age, gender, profession and the questionnaires presented in Measures.

Unipark (www.unipark.com) was used to program and host the survey. Inclusion criteria for participation were a minimum age of 18 years, working in the healthcare sector, residence/working place in Germany, and sufficient German language skills.
Sample characteristics

A total of 1,795 participants completed the questionnaire and were included in the analysis. The majority of the sample were women (n = 1,301), with 491 men and three people who identified as diverse. The gender distribution in our sample is representative of the overall gender distribution in the healthcare sector within the population we researched, and reflective of global trends in healthcare sector employment (31). The participants who identified as diverse were included in all analyses except those examining gender differences, because the sample size was too small for a meaningful analysis. Age was assessed in 5 groups, with the majority falling in the range of 51-60 (n = 504), followed by age 41-50 (n = 410), age 31-40 (n = 400), age 18-30 (n = 331), and age > 60 (n = 150). Participants were placed in 5 occupational groups based on their self-disclosure: physicians (n = 330), nurses (n = 508), psychologists (n = 55), spiritual care workers (n = 124) and others (n = 778). Spiritual care workers in this sample are primarily Protestant or Catholic theologians with additional training to offer comprehensive spiritual support in hospital settings. Their services include counseling, spiritual guidance, and emotional support for patients and their families, functioning as part of a multidisciplinary healthcare team. The "others" group consisted of a wide range of professions, including, e.g., students, administrative staff, physiotherapists, and social workers, and served as a general reference group.

Measures

Imageries

Approximately 3,500 articles from journals in the fields of (a) life sciences, medicine, and healthcare systems, psychology, psychiatry, and the wider mental health system, (b) theology (including Protestant, Catholic, and spiritual care), (c) social sciences and philosophy (including education), and (d) exemplary findings in political and social sciences were searched from 2020 and 2021. This search utilized the databases PubMed, KVK (encompassing all German catalogues, WorldCat, and the National Library of Medicine), and Index Theologicus. Search terms used were "COVID-19", "corona", and "SARS-CoV-2", common to both English and German, as well as "crisis" and its German counterparts "Krise" and "Krisenbewältigung". Each of these terms was paired with "resilience" or "Resilienz" in the respective language. The results were collected in Citavi (Version 6) and MAXQDA (Version 2020) where full-text versions were obtainable. The database search was initially conducted using the specified search terms, followed by a detailed review based on the titles, introductions, and conclusions of the articles. The English and German international material proved more heterogeneous than expected in terms of text genres, content, and methodologies; it included inter-, multi-, and transdisciplinary research by internationally assembled research teams. Consequently, we applied hermeneutic methods from the humanities (textual and linguistic analysis of active, passive, and mediopassive expressions of individual and collective agency, 1st- and 3rd-person perspectives, temporal dynamics, etc.) and discussed the results in a structured group process. A multidisciplinary consortium of 10 physicians, psychologists, and theologians identified 14 widely used linguistic imageries of the SARS-CoV-2 pandemic, intended to represent as broad a spectrum of the language used as possible (e.g., 'Until the pandemic is over, we can only run on sight').
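As a purely illustrative aside, the reported search-term pairing can be enumerated mechanically; the following tiny Python sketch is hypothetical (the study used database interfaces, not a script) and only shows how the quoted term pairs combine.

    # Enumerate the reported search-term pairs: each pandemic/crisis term
    # is paired with the resilience term of the matching language.
    from itertools import chain

    shared_terms = ["COVID-19", "corona", "SARS-CoV-2"]   # common to English and German
    crisis_terms = {"en": ["crisis"], "de": ["Krise", "Krisenbewältigung"]}
    resilience = {"en": "resilience", "de": "Resilienz"}

    for lang in ("en", "de"):
        for term in chain(shared_terms, crisis_terms[lang]):
            print(f'"{term}" AND "{resilience[lang]}"')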
Following this selection process, participants in the study were presented with the sentences and asked how much they agreed with them [from not agreeing at all (1) to completely agreeing (4)] and how they felt on reading them [from very helpless (1) to very enabled (5)]. These two aspects are labeled "agreement" and "induced resolve" in the remainder of the text. The background for the choice of these questions was the premise that an imagery will have more impact if, firstly, a person highly agrees or identifies with its meaning and if, secondly, it helps to mobilize feelings of resolve and control (16,28). The sentences presented are listed in English translation in Tables 1, 2; the original German wording of the items used in this study can be found in Supplementary A.

Transpersonal trust

The Transpersonal Trust scale (TPV) was used to assess religiosity and spirituality (32). The scale describes a person who recognizes the existence of a higher reality, trusts in it, and experiences a strong connection with it (e.g., "I feel connected to a higher reality/being/God. I can trust in this even in difficult times") and has previously been employed in studies with healthcare workers during the COVID-19 pandemic (33). It consists of 11 items rated on a four-point Likert scale ranging from 0 ("does not apply at all") to 3 ("applies completely"). In our sample, the TPV demonstrated high reliability, with Cronbach's α = .84.

Depressive and anxiety symptoms (PHQ-4)

Depressive and general anxiety symptoms over the last two weeks were assessed with the Patient Health Questionnaire PHQ-4 (34), which has been used in the studied sample before (35). The questionnaire consists of four items (e.g., "Feeling nervous, anxious or on edge" and "Feeling down, depressed or hopeless") answered on a Likert scale from 0 ("not at all") to 3 ("almost every day"). Cronbach's α in this sample is .83.

Impact of event scale (IES-6)

The IES-6 is a 6-item short version of the Impact of Event Scale-Revised (IES-R). It measures the principal components of PTSD on a four-point Likert scale from 0 ("not at all") to 3 ("often"). The instructions were tailored to the coronavirus, and questions included "I tried not to think about it" and "I felt watchful or on-guard" (36). This approach has previously been used for studying PTSD in healthcare workers during the COVID-19 pandemic (6). Internal consistency of the IES-6 is Cronbach's α = .73 in the present study.

Optimism

Optimism was assessed following Kemper et al. (37), using the item "How optimistic are you in general?", answered on a seven-point Likert scale from 1 ("not optimistic at all") to 7 ("very optimistic"). Higher values reflect a higher level of optimism. This question has been deployed before to study optimism in healthcare workers during the COVID-19 pandemic (38).

COVID-19-related variables

The questionnaire included a range of COVID-19-related variables. In this analysis, we focused on problems related to COVID-19, which were measured with 18 items on a scale from 0 ("strongly disagree") to 4 ("strongly agree"), based on Matsuishi et al.
(39). The items focused, among other things, on anxiety about infection, sleep problems, physical or mental exhaustion, smoking, and drinking alcohol during the COVID-19 pandemic over the past 2 weeks, and included statements such as "I was afraid to become infected" and "I felt physically or mentally exhausted". The items have been deployed before to measure COVID-19-related problems in this population (2). A mean score of all answers was calculated, with Cronbach's α = .75 in this study.

Statistical analysis

All statistical analyses were conducted using IBM SPSS Statistics (Version 26) and R (Version 4.1.1). To explore the factor structure of the imageries, all 14 sentences were treated as a scale, and a factor analysis with Varimax rotation was run on them. Internal consistencies were assessed with Cronbach's α. For descriptive and comparative statistics, analyses of variance (ANOVA) were performed, with effect sizes given as partial η²; in the case of multiple comparisons, Tukey post-hoc tests were conducted, with effect sizes given as Cohen's d.

Factor analysis of imageries

A factor analysis was conducted to explore the structure of the imageries for agreement and induced resolve. The objective of the factor analysis was to check whether the imageries could be placed in meaningful factor structures based on these two ratings and to use this as a basis for further analysis.

First, we obtained Kaiser-Meyer-Olkin (KMO) indices of .85 for agreement and .91 for induced resolve, with a highly significant Bartlett's sphericity test for both scales (p < .001). The Cattell (40) scree test (eigenvalues) suggested a three-factor solution for "agreement" and a two-factor solution for "induced resolve". The three factors explained 45.74% of the total variance for agreement and were named "fight against the crisis", "lessons from the crisis" and "acceptance of uncontrollability", based on the included items. The two factors of induced resolve explained 49.85% of the total variance and were named "challenges" and "humility" (see Tables 1, 2). Internal consistency was Cronbach's α = .79 for agreement and .87 for induced resolve. This supports the finding of larger interpersonal variance in agreement with the imageries and a three-factor (rather than a two-factor) solution for the agreement scale.

Comparison of sociodemographic characteristics

There was broad support for imageries about fight against the crisis (M = 3.12, SD = 0.59) and lessons from the crisis (M = 2.68, SD = 0.57), while imageries about acceptance of uncontrollability tended to be rejected (M = 2.06, SD = 0.67). Imageries of challenges tended to lead to a sense of empowerment among participants (M = 3.50, SD = 0.78), while imageries of humility tended to lead to a sense of helplessness (M = 2.70, SD = 0.79).

There were significant positive correlations between agreement and age for the factors lessons (r = .27, p < .001) and acceptance (r = .14, p < .001), but not fight (r = .03, p = .324). This means older people agreed more with imagery of lessons and acceptance, while there was no age difference in agreement with imagery of fight. Older participants also felt more enabled to deal with the pandemic by imageries of challenges (r = .17, p < .001) and humility (r = .22, p < .001).

Post-hoc analyses revealed that physicians (M = 3.03, SD = 0.61) and psychologists (M = 2.95, SD = 0.53) agreed significantly less than the other occupational groups (M = 3.19, SD = 0.61) with the item "The crisis has reminded many people that they too will die." (Table 1 fragment: loadings .402/.530; adjacent item: "The pandemic is the stress test for churches to prove that they recognize what people really need.")
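The exploratory factor analysis described above can be reproduced along the following lines. This is a minimal Python sketch using the factor_analyzer package (the study itself used SPSS and R); the file name and the assumption that the 14 item ratings sit in the columns of a DataFrame are hypothetical stand-ins.

    import pandas as pd
    from factor_analyzer import FactorAnalyzer
    from factor_analyzer.factor_analyzer import (
        calculate_kmo, calculate_bartlett_sphericity)

    # Hypothetical file: one column per imagery item, agreement ratings 1-4.
    ratings = pd.read_csv("imagery_agreement.csv")

    # Sampling adequacy and sphericity, as reported in the text.
    chi2, p = calculate_bartlett_sphericity(ratings)
    kmo_per_item, kmo_total = calculate_kmo(ratings)
    print(f"Bartlett chi2 = {chi2:.1f}, p = {p:.4g}; KMO = {kmo_total:.2f}")

    # Eigenvalues for the scree criterion (Cattell).
    fa = FactorAnalyzer(rotation=None)
    fa.fit(ratings)
    eigenvalues, _ = fa.get_eigenvalues()
    print("eigenvalues:", eigenvalues.round(2))

    # Three-factor solution with Varimax rotation for the agreement scale.
    fa3 = FactorAnalyzer(n_factors=3, rotation="varimax")
    fa3.fit(ratings)
    print(pd.DataFrame(fa3.loadings_, index=ratings.columns).round(2))

    # Cronbach's alpha for internal consistency, computed directly.
    def cronbach_alpha(df: pd.DataFrame) -> float:
        k = df.shape[1]
        item_vars = df.var(axis=0, ddof=1).sum()
        total_var = df.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)

    print(f"alpha = {cronbach_alpha(ratings):.2f}")

The same steps, with n_factors=2, correspond to the two-factor solution reported for the induced-resolve ratings.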
Association of imageries with further parameters

To find out to what extent the imageries are related to protective and vulnerability variables, we computed a linear hierarchical regression model for each factor found in the factor analysis (a minimal sketch of such a model is given below, after the opening of the Discussion). We included the TPV, IES and optimism as protective variables, and problems with COVID-19 and the PHQ-4 as vulnerability variables.

The relations between the predictors and the dependent variables appear to be complex. We found that agreement with fight imageries is strongly related to the recent experience of trauma (B = .13, p < .001). Agreement with imageries of lessons from the crisis is additionally predicted by optimism (B = .04, p = .004) and transpersonal trust (B = .12, p < .001). Traumatic experience (B = .11, p = .006) and transpersonal trust (B = .07, p < .001) are also good predictors of agreement with imageries of acceptance. High transpersonal trust is associated with a feeling of being enabled through imageries of challenges (B = .06, p = .004) and humility (B = .07, p = .003), while a high number of COVID-19-related problems had a negative association (challenges: B = -.17, p < .001; humility: B = -.18, p < .001). Interestingly, anxiety and depression (PHQ-4) predicted neither agreement nor induced resolve through the imageries.

Discussion

The aim of this study was to analyze imageries of the COVID-19 pandemic in the professional literature and determine their usefulness for different professional groups. Based on the responses of a large sample of different groups of professionals in the healthcare sector, we measured the degree of personal agreement with a set of imageries in relation to COVID-19, and whether these imageries could induce a personal resolve to deal with the crisis.

(Figure caption: Mean answers with SEM on the factors of agreement and induced resolve for different occupational groups. Significant between-group differences are marked with * and +; if no marking is given, the group does not differ significantly from any group. For example, for the agreement scale on the factor fight against the crisis, physicians* and psychologists* differ significantly from spiritual care workers+ and others+, while nurses do not differ significantly from any group.)

Using a factor analysis based on the degree of agreement, we could assign the different imageries to three factors, which we named "fight against the crisis", "lessons from the crisis" and "acceptance of uncontrollability". When looking at feelings of empowerment or helplessness associated with the imageries, we found a two-factor structure: imageries belonging to the first factor have in common that they can be described as "challenges", whereas imageries of the second factor can be described as expressions of "humility".

Our findings are in line with previous research that demonstrated a substantial use of metaphors of war, fighting and struggle in communication about the COVID-19 pandemic (14,18). We also found that imageries of induced resolve tended to fall into two broader categories, i.e., overcoming obstacles and learning from them versus individual powerlessness in the face of such an immense event. This also expands on previous findings showing that, while metaphors of war and fighting are the most prevalent (23), they are not necessarily the most helpful, particularly when the fighting in the metaphors is associated with helplessness and uncertainty instead of meaning and a sense of mission (16).
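Returning briefly to the hierarchical regression reported in the Results, one such model can be sketched as follows. This is a minimal statsmodels illustration, not the study's actual R/SPSS code; the file name and column names (fight, tpv, ies, optimism, covid_problems, phq4) are hypothetical stand-ins for the scales described in Measures.

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("survey_scores.csv")  # hypothetical file: one row per participant

    # Step 1: protective variables only.
    m1 = smf.ols("fight ~ tpv + ies + optimism", data=df).fit()
    # Step 2: add the vulnerability variables.
    m2 = smf.ols("fight ~ tpv + ies + optimism + covid_problems + phq4", data=df).fit()

    print(m2.params.round(3))     # unstandardized B coefficients, as reported
    print(m2.pvalues.round(4))
    print(f"delta R^2 = {m2.rsquared - m1.rsquared:.3f}")  # increment from step 2

    # Repeat with the other dependent variables (lessons, acceptance,
    # challenges, humility) for the remaining four factor scores.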
In our sample, participants overall agreed more with imageries of fighting and learning, and often disagreed with imageries of acceptance. They also found sentences that represented the crisis as a challenge more helpful than those that conveyed humility. Age correlated positively with agreement on all factors but fighting, and with how helpful the imageries were found to be. It makes sense that life experience comes with a different perspective on such an event, as more crises may already have been mastered in the past. This might lead to a shift away from a heroic perspective of facing a crisis head-on towards a perspective of the inevitability of certain consequences, independent of how much one fights them, and of the chance to grow and learn from difficult situations.

Two professional groups stood out in particular in the group comparison, namely physicians and spiritual care workers. On the one hand, physicians reported the lowest agreement with imageries of acceptance of all groups. This could be attributed to their professional identity, which typically involves a proactive stance against disease. Physicians are trained not to accept diseases such as infections as something unchangeable over which one has no control; rather, they learn early in their training to take responsibility for patients and to regard death as a kind of defeat or failure. Spiritual care workers, on the other hand, stood out in that they reported the highest scores on the scales challenges and humility, suggesting that they found these imageries particularly helpful in comparison. They were also the only group who rated humility as either neutral or positive, while all other groups found images of humility unhelpful. A possible explanation for this finding may lie in the professional self-understanding of spiritual care workers: while many people consider images that remind them of their own limitations frightening and disempowering, it is precisely this experience of facing a seemingly insurmountable challenge with humility and, at the same time, hope that is part of Christian theology (41). Additionally, spiritual care workers probably work with imagery more on a daily basis and therefore have better access to it.
Last, we calculated a regression analysis to understand the relationship between the imagery and stress (PHQ-4, IES, problems), transpersonal trust, and optimism. For agreement with the imageries, the subjective burden in terms of trauma-related psychological symptoms, as measured by the IES, stood out: it was positively related to agreement in all categories. This means that agreement with the imageries was particularly high among those feeling currently stressed. It might be that participants who feel vulnerable and stressed by the pandemic can relate more to pandemic-associated imagery and are more touched by it. In contrast, regarding the question of how helpful the imageries could be for the personal resolve to master the crisis, problems with COVID-19 were negatively related to how helpful the images were found to be. The more problems one had with the crisis, the more helpless one felt because of the imageries. This makes sense when considering that the imageries of the crisis are ultimately a confrontation with the very thing people are struggling with. Interestingly, the PHQ-4, as a general measure of stress, showed no correlation with the imagery, in contrast to the IES and problems related to COVID-19 as more specific markers of pandemic-related stress. This supports the notion that imagery specifically affects people who are emotionally affected by the stressor involved, which in this case is the pandemic. This is the first study to empirically measure the reaction of healthcare workers from different occupational groups to imageries of COVID-19 excerpted from the professional literature. This enabled us to directly compare the impact of the imageries between these groups and to map their perception of the language in terms of agreement with the imageries and induced resolve. We also demonstrated that participants who suffered from higher directly COVID-related stress (but not more general depression or anxiety) tended to agree more with the imageries but also felt more helpless because of them.

Limitations

The study is limited in that we only used imageries from the scientific literature and surveyed only healthcare professionals. Additionally, the gender imbalance in our sample, with a majority of female participants, further limits the generalizability of our results to broader populations. This gender distribution, while reflective of the workforce in healthcare settings, may not accurately represent other demographic contexts. We also acknowledge that the majority of the selected literature and imagery comes from Western sources, potentially limiting the applicability of our findings to non-Western contexts and perspectives. This Western focus reflects the current distribution of published research in this area and underscores the need for more culturally diverse research on the topic. We also had to select a limited number of imageries from a very large body of literature, which is inherently limiting (42).
Conclusions

Verbal imageries are powerful tools in critical situations. Our study demonstrates that imageries used for the COVID-19 pandemic had differential effects on different professional groups in healthcare, both in terms of agreement with the imagery used and in terms of whether it was experienced as enabling for coping with the crisis. On the one hand, this calls for a careful use of imageries when speaking of a crisis. On the other hand, it supports the importance of interprofessional collaboration in healthcare, as the diversity of perspectives (e.g., adding acceptance to a combative spirit) can help in coping with challenges such as experiences of trauma and loss. Furthermore, our study shows how interdisciplinary cooperation between the humanities (excerpting the imageries) and quantitative psychological research (conducting and evaluating the survey) can be a genuine enrichment for research. Further studies could explore how and why certain imageries are particularly helpful for certain groups, and how an interdisciplinary approach could help in a change of perspective and ultimately make a team more resilient.

Table 1. Three-factor solution for the scale "agreement".
Table 2. Two-factor solution for the scale "induced resolve".
Table 3. Linear hierarchical regression for the factors agreement and induced resolve.
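The Statistical analysis section above mentions one-way ANOVAs with partial η² and Tukey post-hoc comparisons for the group contrasts. A minimal sketch of that kind of analysis with statsmodels follows; the file name and the columns 'group' (occupational group) and 'fight' (a factor score) are hypothetical stand-ins.

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    df = pd.read_csv("factor_scores.csv")  # hypothetical file

    # One-way ANOVA of a factor score across occupational groups.
    model = smf.ols("fight ~ C(group)", data=df).fit()
    aov = sm.stats.anova_lm(model, typ=2)

    # Partial eta^2 for the group effect: SS_effect / (SS_effect + SS_error).
    ss_eff = aov.loc["C(group)", "sum_sq"]
    ss_err = aov.loc["Residual", "sum_sq"]
    print(aov)
    print(f"partial eta^2 = {ss_eff / (ss_eff + ss_err):.3f}")

    # Tukey HSD pairwise comparisons between the occupational groups.
    print(pairwise_tukeyhsd(df["fight"], df["group"]))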
Unraveling the Thread of Aphasia Rehabilitation: A Translational Cognitive Perspective

Translational neuroscience is a multidisciplinary field that aims to bridge the gap between basic science and clinical practice. Regarding aphasia rehabilitation, there are still several unresolved issues related to the neural mechanisms that optimize language treatment. Although there are studies providing indications toward a translational approach to the remediation of acquired language disorders, the incorporation of fundamental neuroplasticity principles into this field is still in progress. From that aspect, in this narrative review, we discuss some key neuroplasticity principles, which have been elucidated through animal studies and which could eventually be applied in the context of aphasia treatment. This translational approach could be further strengthened by the implementation of intervention strategies that incorporate the idea that language is supported by domain-general mechanisms, which highlights the impact of non-linguistic factors in post-stroke language recovery. Here, we highlight that translational research in aphasia has the potential to advance our knowledge of brain-language relationships. We further argue that advances in this field could lead to improvement in the remediation of acquired language disturbances by remodeling the rationale of aphasia-therapy approaches. Arguably, the complex anatomy and phenomenology of aphasia dictate the need for a multidisciplinary approach, with one of its main pillars being translational research.

Introduction

The principles that govern language rehabilitation remain a perpetual topic of interest in the field of aphasia [1]. In the short history of language treatment, there have been several approaches to the study of aphasia rehabilitation. Most of them usually focus on language per se, whether it is the exact aphasic profile, the type and/or severity of the observed language disturbances, the underlying (and supposedly impaired) language mechanisms, or techniques to enhance verbal behavior and overall communication ability (for a review of these approaches, see [2]). This probably derives from the fact that, for more than a century, the Wernicke-Lichtheim model defined not only the neural and functional substrate of language [3] but also the ideas and strategies concerning aphasia rehabilitation [1].

In recent years, an emerging alternative perspective, based on comparative anatomy, neuroimaging, and lesion studies, has contradicted this functional-organization dogma. This theoretical perspective states that the so-called "language network" may have evolved, before the emergence of language, as the neural substrate of a domain-general processing mechanism [3]. Thus, the language faculty could be viewed as the product of natural selection building on physiological and cognitive pre-adaptations, such as perisylvian networks or white-matter pathways that are also present in animals [4], which may appear, prima facie, to be specialized for discrete functions such as syntax but actually support other, more fundamental cognitive domains, such as working memory (for a relevant short discussion, see [5]).
For example, it has been argued that Broca's area, a traditionally labeled "language" region primarily associated with speech production, is a "supra-modal hierarchical processor", even in non-verbal tasks (see [6]). It has been demonstrated that Broca's area is engaged in abstract sequencing [7] as well as in the processing of other types of information related to complex motor sequences, music, or mathematics [8] (for a review of the involvement of Broca's area in several non-language processes, see [9]). There is also evidence, derived from healthy brain functioning, of the involvement of perisylvian "language" regions in a broad spectrum of executive functions. Brodmann area 45 has been shown to be involved in selective retrieval [10], while Brodmann area 46 and the inferior parietal cortices have been associated with monitoring within working memory and with manipulation, respectively [11,12].

In a similar context, there have been studies showing that the dorsal and ventral streams associated with language are critical in other, more "basic" cognitive functions. For example, the third branch of the superior longitudinal fasciculus (SLF III), which connects prefrontal, premotor and parietal areas, is involved in phonological processing, but it is also assumed to control orofacial action, even in non-verbal tasks [13]. On the other hand, the extreme capsule fasciculus, which is also present in the macaque monkey [14], has been associated with semantic language processing, while there are studies that support its role in the actively controlled retrieval of information [15].

Lesion studies on aphasia have also shown that patients with acquired language disturbances commonly face difficulties in other cognitive domains, such as short-term memory, working memory [16,17], or other executive functions (for a review, see [18]). Overall, an aphasia-producing lesion will inevitably result in deficits in cognitive domains other than language, and these deficits have been shown to be related to the severity of the language impairment. This notion is further supported by lesion studies that do not focus on aphasia per se but investigate lesion loci that affect language-related areas. It should be noted that the latter term is not used in a strict sense here and is thus not limited to the traditional regions identified as "Broca's" and "Wernicke's" areas, but rather extends to a quite broad perisylvian region that includes cortices and even white matter pathways that have been associated with various aspects of language processing, as indicated by brain-imaging studies (for a review, see [19]). In this context, there have been studies showing that such perisylvian lesion sites may affect several cognitive skills. For example, Baldo and Dronkers [20] showed that damage to the inferior parietal cortex and the inferior frontal cortex may differentially affect different components of working memory tasks. Leff et al.
[21] argued that the superior temporal gyrus in the left hemisphere is a shared neural substrate for both auditory comprehension and short-term memory. Furthermore, Chapados and Petrides [22] highlighted the importance of the ventrolateral prefrontal cortex for selective retrieval. This notion was further supported by a recent study which showed that a lesion specifically affecting fundamental components of the ventral "language" stream, including the pars triangularis and the temporo-frontal extreme capsule fasciculus, has detrimental effects on lexico-semantic processing and active selective controlled retrieval [23].

In light of these advances, which delve ever more deeply into the neurobiology of language and ultimately raise doubts about the traditional dogma of the neural organization of language, there has been an ongoing debate in recent years about how (or even whether) findings from basic neuroscience studies can be exploited to optimize language treatment [24]. In this vein, neuroscience research has revealed a universal characteristic of the human and animal brain, neuroplasticity, which potentially serves as a bridge between basic research and clinical practice [25,26]. The emerging field of cognitive neurorehabilitation is founded on a set of specific neural principles that could probably be translated and applied to human recovery from language and cognitive deficits [27]. This translational approach to rehabilitation inevitably leads to two major questions. The first is whether clinicians specialized in the rehabilitation of cognitive disorders, and particularly aphasia, can manipulate the principles of neuroplasticity in order to maximize language treatment, based on findings from animal research. The second is broadly related to the possible links between language and other cognitive domains. Animal studies usually examine sensory and motor functions, but there are also sparse data on cognitive functions such as object recognition or spatial memory [28]. From that perspective, it is essential to take into account the idea that the grounding evolutionary foundations for language to take root were probably other domain-general cognitive mechanisms [3]. Consequently, the second question can be formulated as follows: are there studies with stroke-induced aphasia patients which demonstrate the significance of non-linguistic functions in language rehabilitation? In the following sections of this paper, we attempt to describe a potential translational framework for aphasia rehabilitation (see Figure 1).
Neuroplasticity in Animals and Aphasia Research

Several animal studies in the broader field of evolutionary biology confirm that mammalian species demonstrate differences but also substantial similarities in cerebral organization and function [29]. Based on this line of research, a fundamental attribute of the brain has emerged, i.e., neuroplasticity. This term refers to the neurons' intrinsic capacity to reorganize their structure and function in response to environmental stimuli and injuries [30]. It is well documented that humans have a larger cortical surface area than other animals; however, this is not the primary impetus of brain plasticity [29]. In their seminal paper, Rockel et al.
[31] compared specific properties of cortical neurons, such as number and density, in the cat, macaque, rat and human. They concluded that the core difference across these species was not the distribution of neurons in each section but rather the pattern of synaptic connections among brain areas. Based on that notion, it has been theorized that the ability to 'sculpt' these connections is the cornerstone of neuroplasticity and, more interestingly, that the underlying mechanisms of this neural modification are parallel between humans and animals [32]. This hypothesis has formed the basis for translating results from animal research to humans [25]. In general, neuroplasticity is a dynamic process underlying normal development and learning, and it includes various atrophic and trophic processes, such as neurogenesis, synaptogenesis, and the removal of unused synapses [33]. In this context, neuroscientific research has suggested that the refinement and alteration of behavior via neuroplasticity is primarily influenced by a wide variety of stimuli and experiences [34]. Similar studies have indicated structural alterations in brain areas following cognitive training in animals and humans [35,36]. As Turkstra and colleagues [26] have highlighted, 'there is an ongoing process of modification in both directions: experience to brain and brain to experience' (p. 604). On the grounds of this interaction, it has been argued that the structural mechanisms underlying experience-dependent plasticity in the cortex, such as axonal sprouting or the growth of new dendritic spines, could be harnessed for the reorganization of cognitive functions and language following stroke [36]. Thus, the study of the principal rules governing neuroplasticity in the intact or injured brain of both animals and humans could provide valuable guidelines for understanding how neural circuits are remodeled following stroke, either during the course of spontaneous recovery or in the context of rehabilitation.

In the case of aphasia, there is accumulating evidence suggesting that spontaneous neuroplastic brain changes following stroke can result in language reorganization [36]. In general, neuroimaging studies indicate that the compensation for impaired language functions relies on increased activation of residual undamaged left-hemispheric areas or on the recruitment of homologous right-hemispheric areas [37]. For instance, Fridriksson [38] showed a correlation between improved naming performance and increased cortical activation in undamaged left-hemisphere areas in untreated post-stroke aphasia. On the other hand, patients with aphasia (PWAs) have been shown to exhibit a right-lateralized activation pattern during a silent word-generation task, a pattern similar to that of the left-hemispheric regions of healthy right-handed individuals [39]. It should, however, be noted that right-hemisphere changes have also been reported to be maladaptive, and increased activation in those areas can be associated with impaired performance [40].

The involvement of neuroplasticity in language reorganization has been addressed not only as an important aspect of spontaneous recovery but also in the context of rehabilitation research. Although sparse, there are functional imaging studies which have demonstrated brain changes as a result of treatment programs. Thompson et al.
[41] have shown that training in producing specific sentence structures may result in increased right-hemisphere activity during verb production in PWAs; the sites of such increased activation were different from those usually identified in neurologically intact individuals. These results therefore provided indications of a remapping of language functions to previously uninvolved brain regions, such as the superior parietal cortex. Furthermore, in a study by Fridriksson [42], twenty-six left-hemisphere stroke survivors received intense aphasia treatment focusing on object naming. The results showed that, even though damage to the left middle temporal lobe and the temporal-occipital junction had a negative effect on performance, increased brain activation in the anterior and posterior regions of the left hemisphere was correlated with improved outcomes. There are also findings highlighting treatment-induced changes in brain connectivity patterns involving language-related tracts, such as the arcuate fasciculus [43]; however, this line of evidence is still inconclusive [44].

Apart from functional changes, there have also been sparse reports of structural brain alterations following language rehabilitation. One study found an increase in the number of fibers and the volume of the right arcuate fasciculus after melodic intonation therapy in PWAs [45]. It has also been shown that an improvement in word retrieval may be associated with increased structural integrity of the left arcuate fasciculus [24]. Furthermore, improved naming performance has been associated with different patterns of gray matter density in specific right-hemisphere areas, such as the precentral gyrus or the temporal lobe [44]. A study by Allendorfer et al. [46] reported increased axonal density in left frontal areas following transcranial magnetic stimulation over the left hemisphere; nevertheless, more research is required to clarify the effect of gray and white matter changes on specific language domains. In summary, a surge of basic and neuroimaging research indicates that neuroplasticity is the cornerstone of cognition and language recovery after brain damage. However, only recent studies have focused on specific principles of neuroplasticity that could be manipulated in order to maximize language treatment (Figure 2) [30].
Generalization, Environmental Enrichment, and Salience in Rehabilitation

In the last few decades, animal research has suggested that specific rehabilitation principles promote neuroplasticity and functional recovery [30,47]. Sparse experiments have demonstrated that treatment focused on one particular function can generalize to the improvement of untrained behavior in animals [48]. For example, Liu et al. [49] have shown that cognitive training of rats in a T-shaped maze may improve memory after a 4-week program; that improvement was accompanied by enhanced functional activity of the hippocampus and the medial prefrontal cortex. In a similar context, there has been evidence of increased dendritic branching in both hemispheres of rats following sensory-motor intervention in a skilled one-paw reaching task, which also 'transferred' to reaching with two paws [50]. Other researchers have proposed that such generalization could be influenced by the complexity and richness of the training surroundings [51]. In animal research, environmental enrichment generally refers to a more challenging environment (e.g., group housing, toys, diverse food), and it facilitates neurogenesis and synaptic plasticity [52]. It has also been argued that a more complex intervention environment may affect memory and learning. Hamm et al.
[53] have shown that training rats in an enriched environment may result in better performance in spatial memory, while other studies have highlighted the recovery of motor coordination [54]. Moreover, enriched environments are considered to promote salience, which is an important factor in neuroplasticity [30]. Salience is the perceived value or relevance of an experience to the individual [27] and has been associated with motivation and attention in animals [55]. Animal research using auditory tones has demonstrated that auditory maps in rats can be altered and reorganized when training is salience-based [56].

Based on the aphasia literature, the generalization of language treatment has been a perennial issue for clinicians [57]. The implication for language reorganization is that training a specific language modality could influence the neural capacity to improve other, untrained language behaviors [30]. Several studies have examined generalization effects on other language functions when rehabilitating confrontation or picture naming (for a review, see [58]). Hillis and colleagues [59] have reported significantly better semantic and comprehension performance following naming rehabilitation, although there are approaches which doubt the methodological processes that lead to generalization gains [60]. In the domain of syntax and speech production, the training of sentences can result in generalization gains for untrained sentences when they exhibit similar grammatical and semantic properties [61]. On the other hand, the importance of salience has not been systematically studied in the field of aphasia rehabilitation. However, it is well known that PWA may demonstrate a lack of motivation in daily activities and even depression, especially when language disturbances are severe [1]. A recent study that could shed light on this subject is that of Janssen et al. [62]. The authors designed an enriched environment in a rehabilitation setting with stroke patients. The primary outcome was that patients in the enriched environment showed higher engagement than the control group (rehabilitated in a non-enriched environment), and they also demonstrated improvement in cognitive functions. The principle of salience in aphasia should be further investigated with intervention protocols that promote motivation and are meaningful for the participant [24].
Repetitio Est Mater Studiorum, or "Repetition Influences Recovery"

There are animal studies which support the idea that the training and acquisition of a learned behavior after brain injury is not by itself sufficient for the reorganization of function [63]. Research on the principles that facilitate neuroplasticity highlights repetition and intensity as key elements for the maintenance of neural changes in the brain [64]. For instance, Monfils and Teskey [65] have reported that an increase in synaptic strength and number can be observed in rats only after several days of training. In addition, motor map reorganization can be achieved in rats after an intense and repetitive training program [63]. However, there is still no gold standard concerning the number or duration of trials that animals should undertake in order to achieve improved functional outcomes [24,25]. Microstimulation and functional mapping studies have also shown that repetitive exercise can influence the activity of neural circuits (for a review, see [66]). Repetitive motor training combined with brain stimulation could lead to functional improvements by reducing activity in specific brain areas [67]. It is noteworthy that repetition and intensity, although theoretically distinct principles of neuroplasticity, are often not separated in animal studies [24,68]. Moreover, some studies have proposed that exaggerated intensity and repetition of training in rehabilitation can lead to tissue loss and reduced functional gains [69].

Based on these animal studies, aphasiologists have examined the issue of intensity in language treatment [70]. Greater intensity of rehabilitation, when reported, has been shown to have positive functional outcomes for PWA in naming [71] or spoken language [72]. In a similar vein, there are studies which have reported an improvement of language following treatment of 8.8 h per week for 11.2 weeks [73], while others do not confirm such a positive effect [70]. It has also been noted that intensity may have positive effects on language-related functional and structural reorganization: Meinzer et al. [74] have shown increased activation in perilesional areas in PWA after an intensive 2-week training program, while Schlaug, Marchina, and Norton [45] have reported increased volume of the arcuate fasciculus after a longer intensive rehabilitation program.

In summary, the existing studies on humans, although scarce, have provided indications of the benefits of intensity; however, as in animal research, the specifics of such programs are yet to be fully understood [70]. Future studies should provide guidelines for the optimal duration of intervention protocols, focusing on specific language domains of PWA.

Rehabilitation of Cognitive Functions and Its Reflection on Language

It has already been established that sensory-motor and memory functions in animals can be improved following neurorehabilitation protocols [34]. Until the field of translational research expands further, researchers can only formulate theories about possible parallels between humans and other animals concerning the structural and functional mechanisms involved in rehabilitation [25,27]. Within this context, the notion that language is supported by 'basic' cognitive domains (e.g., action, memory, etc.)
has led scholars to investigate whether the rehabilitation of non-linguistic functions that are also present in animals can optimize language treatment. This idea is supported by researchers who explore the critical role of cognitive mechanisms in the rehabilitation of language in humans [75].

Over the years, the elucidation of the brain-language relationship has proven to be a Sisyphean task, mainly due to the lack of a robust consensus for creating an accurate and comprehensive functional neuroanatomy model [76]. This nebulous picture has also affected recovery studies, which primarily focus on impaired language modalities and their neural substrates and eventually ignore or underestimate the impact of non-linguistic factors on the behavioral manifestation of aphasia [77].

The idea that other cognitive mechanisms, which are obviously present in animals, can contribute to the structural and functional reshaping of neural networks supporting language is not new [78]. In recent years, there has been growing support for the notion that PWA exploit various cognitive functions for language processes, including (but not limited to) short-term or working memory [79,80], attention [81,82] or other executive functions [83], and praxis [84].

This rationale has paved the way for the investigation of the presumed interrelation between attention and language recovery in PWA. Perhaps the most intriguing observation supporting this relationship is that the majority of these training studies have shown that subcomponents of attention, e.g., sustained or divided attention, may affect access to lexical representations [85]. Helm-Estabrooks, Connor, and Albert [86] have developed a rehabilitation program consisting of different non-verbal, simple or complex attention tasks. Their results have shown significant improvement as well as generalization effects on auditory comprehension and visual analytic reasoning. There have also been findings indicating neural changes in attention pathways following language treatment [87], with increased connectivity in parietal regions of the default mode network associated with naming gains. Beyond the attention domain, early lesion studies have revealed that short-term memory (STM) and working memory (WM) may share common neural substrates with language [20]. This notion has been further supported by subsequent studies which have shown that it is an aphasia-producing lesion, rather than any left-lateralized lesion, that leads to STM/WM deficits [88]. In this framework, one could reasonably ask whether language recovery outcomes may be affected by training verbal STM and/or WM. For instance, in their case study, Koenig-Bruhin and Studer-Eichenberger [89] reported an improvement in the delayed recall of nouns and sentences following intervention in STM and WM. It has also been suggested that reduced memory span, which is usually assessed by repetition tasks, is strongly correlated with lexical deficits and increased aphasia severity [16]. Another piece of evidence that further fortifies the argument that non-linguistic functions are of the essence is that several studies have highlighted the prognostic value of cognitive factors in language recovery [90]. For example, Gilmore, Meir, Johnson and Kiran [91] have reported that WM, inhibition and processing speed predicted language improvement in PWA following naming and sentence comprehension rehabilitation, whereas visual STM was associated with the maintenance of naming gains after a 12-week no-treatment phase.
Discussion

As stated before, the short history of aphasia rehabilitation [1] has demonstrated that treatment strategies in general have been significantly influenced by the presumed neurobiological model of language of a particular time period, while neuroplasticity has been highlighted as an important rehabilitation factor only recently. The Wernicke-Lichtheim paradigm has been seriously challenged by more recent theoretical accounts based on accumulating research evidence derived from studies involving patients with aphasia, but it has not yet been completely replaced [76] by other, more concrete and modern language models which focus on neural language networks [92]. In this context, as has been thoroughly described in the previous section, it is undeniable that aphasiologists have only recently started to focus on the impact of fundamental cognitive functions in language therapy [78]. However, it is also undeniable that we have yet to delineate an integrated framework for aphasia rehabilitation. This could be attributed to the limited research focus on the neural bases of spared, non-linguistic functions and on the implementation of neuroplasticity principles (derived from animal studies), as well as their interaction with recovery variables which are essential in therapy strategies.

In general, post-stroke aphasia studies have examined the impact of clinical and demographic factors on language recovery, which are theorized to differentially affect brain plasticity [93]. In the past few years, there have been several inconsistencies concerning the influence of demographic factors such as age, sex and educational level on spontaneous language recovery or on rehabilitation induced by intervention programs, not only in the chronic but also in the acute or subacute phase (for a review, see [94]). It is generally accepted that younger brains have greater plasticity and ultimately a greater capacity for recovery [50]. Accordingly, it has been assumed that younger patients are more likely to recover than older patients [95]. However, more recent studies have not found a significant association between age and recovery (see, for example, [96]). Future research is thus required in order to thoroughly investigate and hopefully clarify the specifics of the process by which older adults with acquired aphasia demonstrate different patterns of recovery and reorganization compared to younger patients, and also how age interacts with other predictors of recovery, such as motivation or personality traits [24]. On the other hand, most researchers have confirmed an inverse relationship between recovery and lesion size, while lesion location has been shown to be even more critical [97,98]. The degree of white-matter integrity, in both the left and right hemisphere, has also been documented to affect language rehabilitation [24]. Diffusion tensor imaging techniques have revealed that the disruption of specific white matter tracts of the left cerebral hemisphere, such as the arcuate fasciculus or the superior longitudinal fasciculus, may lead to speech production impairment [59]. However, there are still limited data regarding how rehabilitation methods can 'reformulate', structurally or functionally, specific white matter pathways. In sum, it is crucial to understand how aphasia-producing lesions may affect other cognitive domains (keeping in mind that language-related neural networks are not language-specific and may be involved in other aspects of cognition), how neuroplasticity principles (repetition, environmental enrichment,
generalization) may mediate observable post-stroke language recovery, and how neuroplastic mechanisms may interact with demographic, lesional, cognitive, or other variables [27].

Despite the interrelation between language and other cognitive domains, few studies in the translational field have explored the key elements which facilitate brain plasticity in specific language modalities, such as word finding or auditory comprehension (for a review, see [99]). In addition, the available findings regarding the impact of neuroplasticity on the enhancement of non-linguistic factors are still very limited. Thus, more data are needed in order to create efficient intervention protocols that focus on specific language domains. There have been some recent efforts, such as Semantic Feature Analysis or Phonomotor Treatment, which target the mental lexicon and phonological speech sounds, respectively; however, this line of research is still in its infancy [100,101]. Although the clinical relevance of rehabilitating specific functions is undoubted, the complexity of language material in aphasia treatment has also been shown to be beneficial in several domains, such as syntax or lexical-semantic impairments [61]. There have also been studies which explore the effect of non-language behaviors on aphasia recovery. For example, there are promising results demonstrating that rhythm and melodic intonation may lead to structural changes in the right hemisphere [45], while intention treatment has been reported to improve word retrieval following left-hand movements [102]. However, this field has not been sufficiently studied. Given the potential to improve recovery outcomes with non-invasive and cognitively oriented methods, further research is required; such research could focus on the neuroplasticity-induced structural and functional brain changes.

As the field of neurorehabilitation progressively unfolds, more and more researchers are recognizing the importance of the key parameters of neuroplasticity and the critical need to design a neurobiological approach to aphasia therapies [27]. Animal models allow analysis of brain injuries and strokes at a molecular level and may thus provide insight into the core mechanisms of functional recovery [26].
In the context of this ongoing effort, researchers have developed stroke models; however, these are largely limited to motor recovery [103]. In this translational continuum, future animal studies should be more reflective of human cognitive deficits and recovery, while clinicians and aphasiologists could apply concepts derived from basic neuroscience more systematically [36]. In relation to the latter issue, throughout the history of post-stroke aphasia rehabilitation, important variables that facilitate neuroplasticity, such as intensity or timing of treatment [99], were often disregarded or characterized by a significant degree of variability among patients [1]. It has recently been reported that a higher intensity of treatment protocols may induce neuroplasticity, which may eventually lead to improved language outcomes [104]. Moreover, the timing of therapy delivery has been revealed to be critical for rehabilitation protocols, since early intervention can be either beneficial or maladaptive [105]. However, more research is necessary to understand the interaction between intensity and timing of rehabilitation across different stages of recovery, as well as the optimization of the neural mechanisms which respond to treatment schedules.

Beyond advances in neuroimaging, which over the last decades have made it possible to identify structural and functional changes following language treatment, the rise of neuromodulation technologies such as transcranial direct current stimulation and repetitive transcranial magnetic stimulation has allowed the direct manipulation of training-induced neuroplasticity [44]. This effect can be achieved by facilitating activity in brain regions or by suppressing maladaptive neural processes [106], and these methods can also be combined with behavioral treatment [44]. Such stimulation methods have also been applied to modulate specific language domains, such as naming, even before intervention, with quite promising results [44,107]. Recent meta-analyses have suggested that the aforementioned neurostimulation techniques may also interact with the timing of intervention, as positive treatment outcomes have been indicated in both subacute and chronic patients with aphasia [44]. However, there is still a lack of consensus with regard to the optimal choice of neuromodulation method depending on the possible implications posed by lesion size or location [108].
Even though scholars working on language rehabilitation have achieved significant theoretical and practical advances, translational aphasia research is still in its early stages. Overall, the present review aimed to highlight basic principles stemming from the evidence available in the animal and human literature, within a translational framework focused on aphasia rehabilitation. However, translational research is not a panacea and remains rather challenging, not only for aphasia rehabilitation but also for other fields of neuroscience (for a review, see [109]). We are aware of the main impediment to this aim, i.e., the major difficulty of translating findings from animal studies to human patients with aphasia. This difficulty can be attributed to obvious reasons: brain differences between human and non-human mammals and, most importantly, the uniqueness of language in Homo sapiens. However, we argue that there are possible reciprocal gains from this effort: the field of aphasiology could benefit from basic neuroscience and, in turn, animal research could be inspired by the field of language treatment, thus forming a new translational direction in aphasia rehabilitation.

Conclusions

This study has highlighted findings derived from animal and aphasia research that could inform future studies developing neurorehabilitation approaches which emphasize the improvement of cognitive factors and their reflection on language modalities, based on the optimization of neuroplasticity. From a contemporary neuropsychological perspective, we argue that people with aphasia should not be treated as "aphasics" but as stroke patients with prominent language difficulties as well as significant deficits in other cognitive domains, which, in turn, may contribute to, or even be the root of, their language impairment. More and more researchers are recognizing the need for a holistic approach in aphasia rehabilitation; however, further progress is required in deciphering common parallels between animals and humans. This rationale, combined with treatment protocols that focus on the enhancement of neuroplasticity via specific neural principles and their association with language and non-language domains, could provide an innovative, neurobiological, and multi-modality foundation for aphasia rehabilitation.

Figure 1. The reciprocal relationship between animal and aphasia research.
8,148
2023-10-01T00:00:00.000
[ "Medicine", "Linguistics" ]
The Extraction of White Ginger by Using Microwave Ultrasonic Steam Diffusion Method as the Essential Oil Substance

Zingiberene oil (C15H24) is a diversification product with a high economic value. However, its share of the export market has only recently reached 0.3%. Moreover, much exported ginger oil does not fulfill export standards such as those of the Essential Oil Association of USA (EOA). This is attributed to the use of hydro distillation in the refining process; although this method is considered the best in terms of product quality, it takes a long time to complete. Another extraction process is Microwave Distillation and Simultaneous Solid-Phase Microextraction (MDSS-PM), which reduces the total processing time significantly, but its refined product is not as good as that of hydro distillation. This research applies microwave distillation to the extraction of white ginger, modified by the addition of ultrasonic irradiation (MUSDf). The methods compared in this research are Steam Diffusion (SDf), Microwave Extraction (ME), Microwave Steam Diffusion (MSDf) and Microwave Ultrasonic Steam Diffusion (MUSDf), with extraction times of 30, 50, 70, 90 and 110 minutes and extraction temperatures of 90, 95, 100 and 105 °C. The results show that MUSDf is the best method for extracting ginger oil, with a yield of about 0.952% and a zingiberene content of 6.38%; the production cost of the essential oil is Rp 17.964 at the optimum extraction temperature of 100 °C.

INTRODUCTION

Processing ginger into essential oil adds considerable economic value to the crop. The specific aroma of ginger is predominantly related to zingiberene. Therefore, an appropriate extraction method is required to produce ginger oil of good quantity and quality. According to Yu, Huang, Yang, Liu, & Duan (2007), an efficient extraction method is Microwave Distillation and Simultaneous Solid-Phase Microextraction (MDSS-PM). The advantages of this method are short extraction times and no need for organic solvents, but its application is limited. Sansan, Shuangming, Xiu, & Xiao (2012), in patent CN102676299A, investigated the extraction of lavender using the Ultrasonic Steam Extraction (USE) method; USE produced larger amounts of lavender extract than the conventional distillation method and had short extraction times, but the required energy consumption was relatively large. Thus, in this study the auxiliary techniques of Ultrasonic Steam Extraction and Microwave Distillation and Simultaneous Solid-Phase Microextraction were combined into a new method, called Microwave Ultrasonic Steam Diffusion (MUSDf), in order to enhance extraction performance.

Materials and Design of Equipment

The materials used in this study were dried ginger (moisture content 10%) and water. The equipment consisted of an ultrasonic scaler (frequency 30 ± 3 kHz, output power 3-30 W), a steam generator (1800 W) and a microwave (450 W). The MUSDf setup is illustrated in Figure 1.

Extraction Process

The ginger rhizome was pre-treated by cleaning and then dried at 80 °C for 13 hours (final moisture content 6.7%).
The raw material was 70 grams of dried ginger and the solvent was 500 mL of distilled water (aquadest). For the Steam Diffusion (SDf), Microwave Extraction (ME) and Microwave Steam Diffusion (MSDf) methods, the ginger was macerated for 30 minutes; for the Microwave Ultrasonic Steam Diffusion (MUSDf) method, ultrasonic waves were applied during the 30-minute maceration. After that, the SDf, ME, MSDf and MUSDf extractions were performed with extraction times of 30, 50, 70, 90 and 110 minutes, followed by yield tests and determination of the equivalent relative amounts of zingiberene in the ginger essential oil.

Effect of SDf, ME, MSDf, and MUSDf Methods on Ginger Oil Yield

This research compares four methods of extracting ginger oil. Steam Diffusion (SDf) is a conventional extraction method heated by steam produced by a steam generator; it is used to extract bioactive components. Microwave Extraction (ME) is an extraction method, developed for volatile and active compounds in plants, that uses microwave energy. Microwave Steam Diffusion (MSDf) also uses the microwave as the heater, combined with steam so that the heating is more evenly distributed; the combination of microwaves and steam helps release the essential-oil compounds trapped in the plant cells. MUSDf follows the same process as MSDf, except that ultrasonic waves are applied simultaneously during the maceration step. The extraction results for the SDf, ME, MSDf, and MUSDf methods are shown in Table 1. After 90 minutes of extraction, the SDf, ME, MSDf and MUSDf methods produced yields of about 0.127%, 0.508%, 0.571% and 0.952%, respectively. The performance of the extraction methods can be seen in Figure 2, which shows that the methods using microwave heating produce a better yield than conventional extraction (SDf). This is caused by the synergistic combination of the two transfer phenomena, mass and heat: in microwave extraction, the two transport phenomena act in the same direction, from the inside to the outside, which facilitates oil diffusion from the inside of the ginger to the outside. The MUSDf method produces the highest oil yield of the four methods; compared with MSDf at the same extraction time, MUSDf yields about 0.952% versus 0.571%. The increased yield of the MUSDf method is caused by the addition of ultrasonic waves in the maceration process. Ultrasonic power in a chemical process does not contact the substrate sample directly but acts through the liquid medium: the ultrasound wave, generated from electrical power through a transducer, is transmitted by the liquid medium to the substrate sample through the cavitation phenomenon. Ultrasonic cavitation creates shear forces that break cell walls mechanically and improve material transfer, an effect exploited in the extraction of liquid compounds from solid cells. This explanation is consistent with the research by Khan, Abert-Vian, Fabiano-Tixier, Dangles, & Chemat (2010) on the extraction of polyphenols from orange peel using Ultrasound-assisted Extraction, which showed that extraction of bioactive compounds under ultrasound irradiation is an upcoming extraction technique that can offer high reproducibility in shorter times.
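The yield figures above follow directly from the ratio of recovered oil mass to the 70 g charge of dried ginger. The short sketch below makes the arithmetic explicit; it is illustrative only, and the oil masses are back-calculated from the reported percentages rather than measured values from the study.

```python
# Yield (%) = mass of extracted oil / mass of dried ginger feed * 100.

def oil_yield_percent(oil_mass_g: float, feed_mass_g: float) -> float:
    """Essential-oil yield as a percentage of the dried feedstock mass."""
    return oil_mass_g / feed_mass_g * 100.0

feed_g = 70.0  # dried ginger charge used in the study

# Oil masses back-calculated from the reported 90-minute yields.
for method, reported_yield in [("SDf", 0.127), ("ME", 0.508),
                               ("MSDf", 0.571), ("MUSDf", 0.952)]:
    oil_g = reported_yield / 100.0 * feed_g
    print(f"{method}: {reported_yield:.3f}% yield ~ {oil_g:.3f} g oil "
          f"from {feed_g:.0f} g dried ginger")
```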
Analysis of Ginger Oil Quality Using the MSDf and MUSDf Methods

Ginger oil quality is determined by various parameters, one of which is the zingiberene content of the oil. The oil composition was determined using a GC-MS (Gas Chromatography-Mass Spectrometry) test, the results of which can be seen in Table 2. The GC-MS results in Table 2 show that the zingiberene content of the extracted ginger oil differs between methods: about 8.93% using MSDf and about 6.38% using MUSDf, a decrease of roughly 2 percentage points. This suggests that ultrasonic irradiation accelerates the oxidation of zingiberene, a process that continues over time; moreover, ultrasonic irradiation can create extreme local conditions that cause degradation of chemical compounds.

Energy Consumption and Cost Analysis Using the MSDf and MUSDf Methods

Besides yield and zingiberene content, the energy consumption and cost of the extraction process are also important to know. A comparison of the energy consumption and costs of the MSDf and MUSDf methods is shown in Table 3. At the same extraction time, the two methods consumed similar amounts of energy: 3.105 kWh for MUSDf and 3.075 kWh for MSDf. However, the cost required to produce 1 gram of ginger oil is Rp 19.500 with MUSDf versus Rp 32.400 with MSDf, so per gram of oil the MUSDf method is about 66% more energy-efficient than MSDf. This further confirms that MUSDf is better than MSDf in terms of both yield and cost. Furthermore, at about Rp 19.500 per gram, ginger oil produced using the MUSDf method is also cheaper than ginger oil on the market, which costs about Rp 56.300 per gram.
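As a sanity check on this comparison: cost per gram scales as (energy × tariff) / oil mass, so the MUSDf:MSDf cost ratio is independent of the electricity tariff. The sketch below is a minimal illustration of this; the tariff value is a placeholder assumption, not a figure from the study.

```python
# Cost per gram of oil = energy used (kWh) * tariff (Rp/kWh) / oil mass (g).
RP_PER_KWH = 1500.0  # hypothetical electricity tariff, for illustration only

def cost_per_gram(energy_kwh: float, oil_mass_g: float,
                  tariff_rp_per_kwh: float = RP_PER_KWH) -> float:
    """Electricity cost attributed to each gram of extracted oil."""
    return energy_kwh * tariff_rp_per_kwh / oil_mass_g

# Oil masses back-calculated from the reported yields of a 70 g charge.
musdf_oil_g = 0.952 / 100 * 70  # ~0.666 g
msdf_oil_g = 0.571 / 100 * 70   # ~0.400 g

musdf_cost = cost_per_gram(3.105, musdf_oil_g)
msdf_cost = cost_per_gram(3.075, msdf_oil_g)
print(f"cost ratio MUSDf/MSDf = {musdf_cost / msdf_cost:.2f}")  # ~0.61
```

Whatever tariff is assumed, the ~0.61 ratio closely matches the reported Rp 19.500 versus Rp 32.400 per gram.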
1,940.4
2018-07-30T00:00:00.000
[ "Chemistry", "Environmental Science" ]
Dietary fibers inhibit obesity in mice, but host responses in the cecum and liver appear unrelated to fiber-specific changes in cecal bacterial taxonomic composition

Dietary fibers (DF) can prevent obesity in rodents fed a high-fat diet (HFD). Their mode of action is not fully elucidated, but the gut microbiota have been implicated. This study aimed to identify the effects of seven dietary fibers (barley beta-glucan, apple pectin, inulin, inulin acetate ester, inulin propionate ester, inulin butyrate ester or a combination of inulin propionate ester and inulin butyrate ester) effective in preventing diet-induced obesity, and to link them to differences in cecal bacteria and host gene expression. Mice (n = 12 per group) were fed either a low-fat diet (LFD), a HFD or a HFD supplemented with one of the DFs (barley beta-glucan, apple pectin, inulin, inulin acetate ester, inulin propionate ester, inulin butyrate ester or a combination of inulin propionate ester and inulin butyrate ester) for 8 weeks. Cecal bacteria were determined by Illumina MiSeq sequencing of 16S rRNA gene amplicons. Host responses, body composition, metabolic markers and gene transcription (cecum and liver) were assessed post-intervention. HFD mice showed increased adiposity, while all of the DFs prevented weight gain. DF-specific differences in cecal bacteria were observed. The results indicate that diverse DFs prevent weight gain on a HFD, despite giving rise to different cecal bacteria profiles. At the same time, the common host responses to dietary fiber that were observed are predicted to be important in improving barrier function and genome stability in the gut, maintaining energy homeostasis and reducing HFD-induced inflammatory responses in the liver.

composition of the gut microbiota 16. Our study aimed firstly to establish whether seven different dietary fibers (barley beta-glucan, apple pectin, inulin, inulin acetate ester, inulin propionate ester, inulin butyrate ester or a combination of inulin propionate ester and inulin butyrate ester), selected following in vitro fermentation studies 17, were all equally protective in preventing diet-induced obesity in mice, and whether this is related to specific cecal bacterial profiles. Since one potential mechanism for the action of dietary fibers is via SCFA production, we also included inulin SCFA esters to assess the role of individual SCFA 18,19. A second aim was to determine the associated dietary fiber-altered regulation of gene expression in the cecum, the first organ impacted by differences in the cecal bacteria, and in the liver, the gatekeeper organ between the gut and the systemic circulation, and whether this could be linked to cecal bacterial profiles. To do this we used a well-defined model of diet-induced obesity. We have previously shown that 12-week-old (24-25 g body weight) C57Bl/6J male mice rapidly and predictably gain body weight, adiposity and liver lipid content when fed a HFD 20. Using this model for the current study, we manipulated dietary carbohydrate to replace a proportion of corn starch and cellulose in the HFD with different fermentable dietary fibers (barley beta-glucan, apple pectin, inulin, inulin acetate ester, inulin propionate ester, inulin butyrate ester or a combination of inulin propionate ester and inulin butyrate ester). We measured food intake and adiposity at the same time as analysing post-intervention profiles of bacteria collected from the cecal contents using Illumina MiSeq sequencing, together with the associated host gene expression in the cecum and liver.
Circulating hormones and inflammatory markers.

Circulating leptin, resistin and insulin levels measured in plasma from cardiac puncture were lower in HFD + DF and LFD mice (Fig. 2A-C). The gut hormones PYY, GIP and ghrelin were measured in hepatic portal vein plasma. Plasma PYY (p = 0.041) was higher in HFD + bglucan, HFD + inulin and HFD + inul B mice vs. HFD mice (Fig. 2D). There were no differences in GIP or ghrelin (data not shown). GLP-1 was undetectable.

Cecal bacteria.

Illumina MiSeq sequencing of 16S rRNA gene amplicons derived from cecal contents revealed a strong influence of dietary fiber supplementation on bacterial composition. Phylum-level analysis showed that Firmicutes were dominant in the HFD + bglucan, HFD + inul B, HFD + inul PB, HFD and LFD groups, whereas Bacteroidetes were the most proportionally abundant phylum in the HFD + pectin, HFD + inul A and HFD + inul P groups (Fig. 3A). HFD + inulin had equal proportions of the two phyla (Fig. 3A). HFD + pectin and HFD + inulin had the highest percentage levels of Proteobacteria (mostly Deltaproteobacteria), but there was large individual variation (Fig. 3A). Mice fed the LFD had higher proportions of Actinobacteria belonging to the Bifidobacteriaceae family relative to those fed HFD diets with or without added dietary fibers (Fig. 3A,B). Large differences were also observed at the family (Fig. 3B, Supplementary File S3) and Operational Taxonomic Unit (OTU) levels (Fig. 3C, Supplementary File S4) between the different groups, some of which were associated with specific diets (Supplementary File S4), including Ruminococcaceae and Lachnospiraceae in the HFD group, Bacteroidaceae in the HFD + pectin group and Porphyromonadaceae in the HFD + inulin acetate ester group (Fig. 3B). The 38 most abundant OTUs (≥0.5% of total sequences; data for all OTUs are given in Supplementary File S4) are shown in the heat map of relative abundance (expressed as average percentage of total sequences per diet) (Fig. 3C). Cecal bacterial profiles clustered separately from one another based upon diet for most animals. LFD and HFD with no added fiber clustered together, indicating that the presence of fiber is a major driver of overall community composition. Metastats and LEfSe (Linear discriminant analysis effect size) analyses confirmed that there were a large number of significant differences in constituent taxa between these two groups and all the fiber-containing groups together (Supplementary File S4). In contrast, the two dietary groups with no added fibers (LFD and HFD) showed fewer significant differences between them (Supplementary File S5). We also observed that the four inulin esters separated into two clear groups, with HFD + inul A/HFD + inul P and HFD + inul B/HFD + inul PB clustering together (Fig. 3B, Supplementary File S3). Two different clustering methods (Jaccard, which incorporates only the presence/absence of OTUs, and Bray-Curtis, which also incorporates the proportional abundances of each OTU when comparing dissimilarities) revealed a highly significant effect of diet (P < 0.001). Pairwise comparison of all dietary groups showed that clustering in both trees was significant (P < 0.001), with the exception of HFD + inul A compared to HFD + inul P (P = 0.018 Jaccard; P = 0.071 Bray Curtis) and HFD + inul B compared to HFD + inul PB (not significant).
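For readers unfamiliar with the two dissimilarity measures named above, the sketch below gives minimal NumPy definitions applied to toy OTU count vectors. The study's own pipeline (detailed in its supplementary files) will differ in preprocessing, but these are the standard formulas.

```python
import numpy as np

def bray_curtis(u: np.ndarray, v: np.ndarray) -> float:
    """Bray-Curtis dissimilarity: weights shared OTUs by their abundance."""
    return np.abs(u - v).sum() / (u + v).sum()

def jaccard(u: np.ndarray, v: np.ndarray) -> float:
    """Jaccard distance: uses only the presence/absence of each OTU."""
    pu, pv = u > 0, v > 0
    return 1.0 - (pu & pv).sum() / (pu | pv).sum()

# Toy OTU count vectors for two cecal samples (columns = OTUs).
a = np.array([120, 30, 0, 5, 0])
b = np.array([80, 0, 10, 5, 2])
print(f"Bray-Curtis = {bray_curtis(a, b):.3f}, Jaccard = {jaccard(a, b):.3f}")
```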
AMOVA analysis gave essentially the same outcome (HFD + inul A compared to HFD + inul P: P < 0.001 Jaccard, P = 0.058 Bray Curtis; HFD + inul B compared to HFD + inul PB: not significant; all other comparisons P < 0.001). LEfSe analysis including all individual dietary groups revealed the highest number of significant associations for the HFD-fed group, followed by HFD + pectin (Supplementary File S4). Thus there were no consistent changes in phylogenetic (16S rRNA-based) community composition with weight gain and fiber intake on the HFD diets, despite the significant impacts of individual fibers on bacterial composition when incorporated in the HFD. However, the possibility of common changes in some functional microbial group or groups that impact on weight gain, or in some low-abundance taxon that has a major effect on physiology, cannot be ruled out. Bacterial diversity was higher in HFD-fed mice compared to mice on all other diets, and there were also significant differences in diversity indices between diets incorporating different dietary fibers that were consistent with the Bray-Curtis clustering (Supplementary File S5). All diets containing dietary fibers had higher total bacterial 16S rRNA gene copies compared to either HFD or LFD, indicating greater bacterial numbers per g of cecal contents. HFD-fed mice had the lowest bacterial abundance (Fig. 3D).

Whole genome modulation in liver and cecum in response to dietary fibers.

Global microarray analysis was conducted on liver and cecum of mice (n = 6) from the HFD, LFD, HFD + inul and HFD + inul PB groups. HFD + inul PB was chosen as it produced the lowest weight gain, and analysis of the HFD + inulin group allowed identification of any inulin vs. inulin ester effects. Comparison with HFD and LFD mice distinguished the effects of dietary fibers from body weight/adiposity effects. Principal component analysis (PCA) of normalised microarray data indicated diet-associated cecum and liver gene expression, explaining 22.99% and 19.22% of the variation in the data respectively (Supplementary File S6, Fig. S2A,B). PCA analysis of cecal gene expression indicated a tendency for the HFD + inul and HFD + inul PB samples to cluster apart from the other groups.

Figure 1. Mouse body weight, composition, cumulative food intake and cecal content (n = 12). (A) Body weight accumulation of the high fat diet (HFD) fed mice differed significantly from that of mice fed the HFD where 10% of the carbohydrate by weight (5% corn starch, 5% cellulose) was replaced by the following dietary fibers: beta glucan (HFD + bglucan), apple pectin (HFD + pectin), inulin (HFD + inulin), inulin acetate ester (HFD + inul A), inulin propionate ester (HFD + inul P), inulin butyrate ester (HFD + inul B), inulin propionate and butyrate ester, 5% each (HFD + inul PB), and of low fat diet (LFD) fed mice, from 2 weeks onwards until the end of the experiment. (B) Body weight of HFD fed mice at week 8 was significantly greater than that of mice consuming HFD + DF or LFD. Body weight of mice at week 8 fed HFD + DF was closer to that of LFD fed mice. (C) The increase in body weight seen in (A,B) is attributed to fat mass, which was increased in the HFD fed mice, while lean mass did not significantly differ with the dietary interventions as measured in mice at week 8. (D) Cumulative food intake measured over the course of the study did not differ in mice receiving either a HFD or HFD + DF. Mice consuming LFD consumed significantly more food. (E) Liver fat. (F) Cecal content.
Significant (p < 0.05) differences assessed by ANOVA with Fisher's correction are indicated using lower case letters to distinguish differences between the diets.

Subsequent analysis was applied to identify probe IDs showing a >1.5-fold difference (P < 0.01) in gene expression when compared to HFD mice. Using this cut-off (>1.5-fold difference, P < 0.01), 741 probe IDs in HFD + inulin, 1614 in HFD + inul PB and 151 in LFD fed mice showed differences in expression levels compared to HFD fed mice (Supplementary File S6, Fig. S2C-E; GEO Accession no. GSE106375). In liver, there were 68 probe IDs in HFD + inulin, 53 in HFD + inul PB and 196 in LFD mice (GEO Accession no. GSE106375) showing differences in gene expression compared to HFD mice. Greater numbers of probe IDs were identified in the cecum of mice consuming HFD + DFs compared to LFD, indicating that differences in gene expression are greater with DF supplementation, rather than simply being altered in association with body weight/adiposity. The same selection criteria identified a greater number of differences in gene expression regulated by LFD in liver compared to HFD supplemented with DFs.

Validation of selected gene targets in cecum and liver and regulation in response to dietary fibers.

Validation of microarray data was conducted to confirm the altered regulation of selected gene targets in response to HFD + inulin and HFD + inul PB. Comparison of the selected gene targets was assessed in the microarrayed samples and also in response to the other diet interventions, using a custom-designed RT Profiler PCR Array for cecum and Taqman assays for cecum and liver. Genes were selected on the basis of large differences in response to consumption of the HFD + inulin or HFD + inul PB diets and, in the case of cecum microarray targets, involvement in gut barrier function. The RT Profiler PCR Array confirmed that cecal genes were associated with dietary fiber supplementation rather than adiposity, with no differences observed in target gene expression in LFD fed mice (Fig. 4A). Selected gene targets were regulated similarly in response to all HFD + DF diets, including higher levels of cecal Tex19.1 (Testis expressed 19.1) (Fig. 4A) and Muc16 (Mucin 16), except in HFD + inul A and HFD + inul B (Fig. 4A). Higher Cldn23 (Claudin 23) expression was not confirmed by RT Profiler PCR (Fig. 4A). Lower Cldn5 (Claudin 5) expression was observed in HFD + pectin, HFD + inulin, HFD + inul B and HFD + inul PB (Fig. 4A). Tff3 (Trefoil Factor 3) was lower in HFD + pectin, HFD + inul P, HFD + inul B and HFD + inul PB (Fig. 4A). In summary, the majority of microarray-identified genes were similarly regulated by all diets containing dietary fibers (Fig. 4A). Microarray analysis revealed increased Enho (Energy Homeostasis Associated) expression in liver for all HFD + DF diets relative to the HFD diet, with the greatest levels observed in response to HFD + inul PB. Enho expression in LFD mice compared to HFD fed mice was not altered (Fig. 4B), and levels of adropin, encoded by Enho, were not increased in liver (Fig. 4C).

Molecular interaction networks and integration of gene responses to inulin and inulin propionate and butyrate esters in cecum and liver.

The validation of gene expression using the RT Profiler PCR Array and Taqman assays revealed common responses to dietary fiber consumption in cecum (Fig. 4A) and liver (Fig. 4B), irrespective of the type of dietary fiber or dietary fiber-specific differences in cecal bacterial composition (Fig. 3).
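The probe selection used above, and the gene sub-set fed into the pathway analysis that follows, amount to a joint threshold on fold change and p-value. A minimal sketch of such a filter is shown below; this is illustrative NumPy code with hypothetical array names, not the study's actual analysis pipeline.

```python
import numpy as np

def select_probes(log2_treatment: np.ndarray, log2_hfd: np.ndarray,
                  pvals: np.ndarray, fold: float = 1.5,
                  alpha: float = 0.01) -> np.ndarray:
    """Indices of probes with > `fold` change vs HFD and p < `alpha`.

    Inputs are per-probe mean log2 expression values for the treatment
    and HFD groups, plus per-probe p-values from the group comparison.
    """
    log2_fc = log2_treatment - log2_hfd
    hits = (np.abs(log2_fc) > np.log2(fold)) & (pvals < alpha)
    return np.flatnonzero(hits)

# Toy example with three probes: only the first passes both thresholds.
idx = select_probes(np.array([8.0, 7.1, 9.0]),
                    np.array([7.0, 7.0, 8.9]),
                    np.array([0.001, 0.001, 0.5]))
print(idx)  # [0]
```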
Pathway analysis was carried out on a sub-set of the normalised microarray gene expression data for known genes (predicted genes and unnamed transcripts were excluded) that showed >1.5-fold differences (P ≤ 0.01) in gene expression compared to HFD, in cecum and liver, in response to HFD + inul and HFD + inul PB but not LFD. This gene sub-set consisted of 168 genes with higher and 102 with lower levels of gene expression in cecum, and 2 with higher and 6 with lower levels of gene expression in liver (Supplementary Files S7 and S8), when compared to HFD fed mice. Common transcriptional responses to HFD + inul and HFD + inul PB are shown (Fig. 5A). A Cytoscape network of genes showing common transcriptional responses to HFD + inul and HFD + inul PB illustrates the categories these genes fall into: anion transport and lipid, small molecule, single-organism, long-chain fatty acid, cellular lipid, monocarboxylic acid, arachidonic acid, icosanoid and fatty acid metabolic processes were upregulated (Fig. 5A), while sulphur compound metabolic processes and epithelium, gland and tissue development were downregulated in the cecum (Fig. 5B). GO (Gene Ontology) terms enriched in liver in response to HFD + inul and HFD + inul PB were localisation, locomotion, immune system, metabolic and single-organism processes (Fig. 5C). The Cytoscape network revealed that SAA1 (Serum Amyloid A1) and SAA2 (Serum Amyloid A2), involved in inflammatory responses, differed in response to consumption of HFD + inul and HFD + inul PB in both cecum (higher) and liver (lower) compared to HFD (Fig. 5E). The differences in gene expression were confirmed by real-time PCR (Fig. 6). However, the higher SAA1 levels in cecum failed to reach significance for HFD + pectin and HFD + inul P, as did SAA2 for HFD + inul P (Fig. 6A,B). Lower levels of SAA1 in liver were observed in response to consumption of HFD + DFs, reaching significance in HFD + bglucan and HFD + inulin fed mice, while the lower level of SAA2 in liver was significant for all HFD + DFs except HFD + pectin and HFD + inul P. LFD mice also showed comparable differences in SAA1 and SAA2 in cecum and liver measured using real-time PCR (these differences were noted in the microarray analysis but had failed to meet the significance cut-off of P < 0.01) (Fig. 6).

Discussion

This study reports a comprehensive analysis of the effects of seven dietary fibers combined with a HFD on cecal bacterial profiles and host gene expression in the cecum and liver of mice. While there were fiber-specific differences in cecal bacterial composition, all dietary fibers tested prevented obesity and yielded similar responses in body composition and in host gene expression in cecum and liver for a number of gene targets identified by microarray. The responses to dietary fiber of the gene targets selected for further analysis confirmed a similar outcome, implying that while cecal bacterial profiles differ specific to each dietary fiber, this results in collective outcomes in the expression of certain host genes. Despite the differences in bacterial profiles associated with specific fibers, we established common gene expression differences in the host irrespective of which fiber was incorporated in the HFD. This finding was a significant outcome of our study and implies that bacterial composition per se may not be causal in protecting against HFD-induced weight gain.
However, there is a possibility that common changes in microbial groups producing particular metabolites or signalling molecules, or in a low-abundance taxon that induces major effects on physiology, could be contributing to the observed effects on weight gain and metabolism.

Figure 4. Gene expression in mice fed a HFD where 10% of the carbohydrate by weight (5% corn starch, 5% cellulose) was replaced by beta glucan (HFD + bglucan), apple pectin (HFD + pectin), inulin (HFD + inulin), inulin acetate ester (HFD + inul A), inulin propionate ester (HFD + inul P), inulin butyrate ester (HFD + inul B), inulin propionate and butyrate ester, 5% each (HFD + PB), and a low fat diet (LFD). (A) RT Profiler PCR Array of selected gene targets showing altered gene regulation in response to HFD + inul and HFD + PB from microarray data analysis of cecum. Fold change was calculated relative to HFD fed mice using the mean gene target normalised to UBE2D2 (n = 6). (B) Enho gene expression in liver of mice fed HFD + DF or LFD relative to HFD. Fold change was calculated relative to HFD fed mice using mean Enho normalised to UBE2D2 (n = 6). (C) Adropin levels in liver. A Student's t-test based on delta CT values was applied to test comparisons with HFD fed mice. * P < 0.05, ** P < 0.01, *** P < 0.001.

Figure 6. Gene expression of SAA1 and SAA2 relative to high fat (HFD) fed mice in cecum and liver in mice fed a HFD where 10% of the carbohydrate by weight (5% corn starch, 5% cellulose) was replaced by beta glucan (HFD + bglucan), apple pectin (HFD + pectin), inulin (HFD + inulin), inulin acetate ester (HFD + inul A), inulin propionate ester (HFD + inul P), inulin butyrate ester (HFD + inul B), inulin propionate and butyrate ester, 5% each (HFD + PB), and a low fat diet (LFD). Gene expression was calculated relative to HFD fed mice using the mean gene target normalised to UBE2D2 (n = 5-6). A Student's t-test was applied to test comparisons with HFD fed mice. * P < 0.05, ** P < 0.01, *** P < 0.001.

Nonetheless, replacement of dietary starch by dietary fibers in these defined diets was predicted to decrease the supply of carbohydrate-derived calories in the upper GI tract by 12.4%. Calories arising from bacterial fermentation of dietary fibers cannot be calculated exactly, but iso-caloric replacement is based on the assumption that dietary fibers provide 50% of the calorific value of digestible carbohydrates 21. The cellulose incorporated in the diets (International Fiber Corporation) has no calorific value for the host and was replaced with dietary fibers in the HFD + DF diets. Given that there was no detectable change in cumulative food intake (Fig. 1D) between the HFD and the HFD diets incorporating dietary fiber, together with the protection against diet-induced obesity (Fig. 1A-C), our results indicate that the net calorie gain may have been lower than this. While the composition of the cecal bacteria differed between the dietary fibers, the numbers of cecal bacteria per g of cecal contents were enhanced by all dietary fibers. It is known that gut bacterial composition is affected by the addition of dietary fibers to the diet 16,17,22, but also by dietary fat and protein 23, by gut turnover/transit time and by the gut environment (e.g. pH) 2. In contrast to earlier reports 24,25, we found little difference between the HFD and LFD diets in the representation of the Bacteroidetes and Firmicutes phyla, but Actinobacteria were proportionally less abundant in HFD mice, except for HFD + pectin and HFD + inulin.
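As an aside on the iso-caloric replacement arithmetic above: only the replaced digestible starch is lost to the upper GI tract, since fiber calories are released by fermentation further down the gut. The sketch below reproduces that logic; the ~40 g of digestible carbohydrate per 100 g of diet is an illustrative figure back-solved from the quoted 12.4%, not the exact D12451 formulation.

```python
def upper_gi_carb_reduction(digestible_carb_g: float,
                            starch_replaced_g: float) -> float:
    """Percent drop in upper-GI carbohydrate calories when digestible
    starch is swapped for fiber that is only fermented in the lower gut."""
    return starch_replaced_g / digestible_carb_g * 100.0

# 5 g starch per 100 g diet replaced, out of ~40 g digestible carbohydrate.
print(f"{upper_gi_carb_reduction(40.3, 5.0):.1f}% fewer upper-GI carb calories")
# -> 12.4%, matching the prediction quoted in the text
```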
Major differences in cecal Bacteroidetes and Firmicutes proportions were seen between the HFD + DFs. For the three non-esterified dietary fiber diets, Bacteroidetes were proportionally favoured on HFD + pectin and Firmicutes on HFD + bglucan, while the two phyla were approximately equally represented on HFD + inulin. These differences appear largely due to dietary fiber-specific responses at the OTU level (Supplementary File S4) 16. There were also major differences in the proportions of these two phyla, and at the OTU level, with the four esterified inulin substrates. It has been reported that SCFA differentially affect the growth of members of these two phyla through pH-dependent stress 26. However, there is no evidence that these major differences in cecal bacterial composition differentially affect adiposity, beyond the decreased weight gain common to all fibers on the HFD. There was no significant impact of dietary fibers on GIP or ghrelin (GLP-1 levels were undetectable). However, levels of PYY, leptin, insulin and resistin differed in HFD + DF compared to HFD fed mice. There were no consistent patterns in the observed significant differences with consumption of the different fibers. Leptin levels were significantly higher in mice fed HFD + inul A compared to HFD + bglucan only, while there was some indication that HFD + bglucan had significantly higher levels of PYY compared to HFD + inul A and HFD + inul P. Notably, however, plasma PYY levels were observed to be variable in mice: while there was a tendency for increased PYY, this was only significantly increased in HFD + bglucan, HFD + inulin and HFD + inul B fed mice (Fig. 2D). This contrasts with reports of SCFA upregulation of PYY 4. However, there were no differences in cumulative food intake between HFD and HFD + DF mice (Fig. 1D). Likewise, there was no transcriptional response of FFAR3 in cecum (measured by microarray analysis) to dietary fibers (GEO Accession no. GSE106375), while FFAR2 was only down-regulated in HFD + inul PB mice. This is not definitive evidence, but may indicate that these pathways are not a major influence on the effects of dietary fiber on adiposity. Levels of leptin, insulin and resistin are directly associated with adiposity, and HFD + DF mice with lower adiposity consequently have lower levels of these hormones. In contrast to the diet-specific differences in cecal bacterial profiles, the transcriptional responses in cecum and liver were similar. Nonetheless, there were instances of differences in expression of selected gene targets associated with individual fiber diets, but these did not show any consistent pattern that would permit speculation on potential physiological outcomes. Microarray analysis indicated that many genes and pathways modulated in response to dietary fibers are epithelial, with a number involved in gut barrier function (Fig. 5). The gut barrier protects against the ingress of harmful agents while allowing nutrient absorption. The full set of genes generating the proteins required to maintain healthy gut barrier function is still not fully understood. However, there are indications that the observed changes in gut barrier gene expression in the current study are favourable to improved barrier function. The altered expression of claudins has the potential to alter the trans-epithelial and strand tightness of tight junctions 27.
Mucins are another complex group of molecules known to be important components of the gut barrier; Muc2, which was consistently expressed at lower levels in fiber-fed mice in our study, has been reported in deficiency models to protect mice from diet-induced fatty liver disease and obesity 28. GO analysis also indicated higher levels of anion transport genes, including SLC transporters, which are widely expressed in epithelia and particularly associated with barrier function 29. Both SLC5A8 and SLC26A3 were elevated in cecum (Fig. 5/Supplementary File S7, GEO Accession no. GSE106375) in response to HFD + inul and HFD + inul PB, and both are tumour suppressors 30,31, with SLC5A8 also linked to butyrate and propionate uptake 32. These results, together with the associated evidence of reduced inflammation in the liver, lead us to conclude that the observed changes in gut epithelial genes favour an improvement in gut barrier function. SAA1 and SAA2, the main acute-phase isoforms of serum amyloid A, are expressed by the luminal surface epithelium lining the colon 32 and are thought to play an anti-bacterial role, assisting in the maintenance of epithelial immune homeostasis 32. The encoded proteins are secreted into the lumen, play a role in the innate recognition of Gram-negative bacteria, reduce bacterial viability 32,33 and are linked to a reduced risk of inflammatory bowel disease 32. SAA1 and SAA2 were downregulated in liver in response to HFD + DF. HFD increases the inflammatory response in liver, with an increase in circulating SAA 20. The reduced inflammatory response in liver is evidenced by the lower levels of SAA1 and SAA2. Studies have shown that damaged gut epithelium results in elevated levels of circulating SAA, most likely derived from liver 34. Thus, the opposing effects of dietary fibers on SAA1 and SAA2 in cecum and liver may be linked (Fig. 5E). Dietary fibers upregulate Tex19.1, which has restricted expression in pluripotent stem cells 35 and inhibits retrotransposons 36. Tex19.1 potentially stabilises the gut stem cell genome during replication and renewal of the gut epithelium, linking dietary fiber intake to anticancer effects. There was a smaller subset of common gene changes in the response to consumption of dietary fibers in the liver compared to those identified in the cecum by microarray analysis. Enho was chosen for further analysis following reports that its encoded protein, adropin, is a hormone involved in energy homeostasis and lipid metabolism, with adropin deficiency associated with obesity and insulin resistance 36. The higher level of Enho expression in liver was seen with all dietary fibers tested and may be an important factor in reducing adiposity. Nonetheless, this was not reflected in higher levels of liver adropin, the protein encoded by Enho. It has been reported that Enho expression in liver results in increased circulating adropin 37, and it may be that elevated Enho expression in the liver produces increased adropin secretion while liver adropin levels remain stable (Fig. 4C). Treatment with synthetic adropin reduces weight gain 36,38, in agreement with the prevention of weight gain in HFD + DF mice showing higher levels of liver Enho when compared to HFD fed mice. However, Enho gene expression was not altered in HFD compared to LFD mice, indicating that the response was a consequence of dietary fiber intake.
Supporting our findings, LFD mice have been reported to have reduced levels of adropin compared to mice fed chow, which is a rich source of fiber 36. The consequences of increased Enho expression specifically in the liver may form the basis for explaining the beneficial effects of dietary fibers on metabolic health. Despite high levels of Enho expression in cecum (detected by microarray), it was not differentially expressed in response to the supplementation of the HFD with dietary fibers. It was noted that changes in other liver targets, such as Lcn2 (Lipocalin 2) and Itgax (integrin subunit alpha X), which were expressed at lower levels in the microarray analysis of HFD + inulin and HFD + inul PB compared to HFD, provide further support for the protective effect of dietary fibers on the liver. Lcn2 and Itgax are reported to be key inflammatory markers, and their activation is indicative of metabolic and inflammatory stress in the liver [39][40][41]. This further substantiates our contention that dietary fibers may improve barrier function and protect the liver from the inflammatory effects of consuming a HFD. Our study provides novel insight into the impact of dietary fiber on the cecal bacteria and host responses to diet-induced obesity, revealing that the seven dietary fibers tested all exert a similar effect on reducing adiposity and on cecum and liver expression of the selected gene targets. It should be noted, however, that consumption of fibers of different particle size has been shown to differentially affect metabolic and inflammatory responses in mice 42. Dietary fiber-induced resistance to diet-induced obesity in this study is potentially mediated by the hormone adropin, as indicated by the liver-specific increase in levels of Enho. Additionally, improved gut barrier function, characterised by regulation of Tex19.1 and altered mucins, claudins and epithelial solute transporters, is associated with reduced expression of markers of inflammation and reduced accumulation of fat in liver. The studies reported in the present paper were conducted in mice; further studies in humans are needed to determine the effects of dietary fibers on modulating obesity. The already-known associations of adropin with human health provide a useful focus for further study of the translational potential in humans. In conclusion, the effects of fiber consumption on a high-fat diet have potential implications for health that are apparently not directly related to cecal bacterial community composition, although we cannot exclude the possibility that they are related to total microbial populations and/or their overall metabolic activity.

Methods

Animals and dietary intervention.

The animal studies were licensed under the Animals (Scientific Procedures) Act of 1986, carried out in accordance with the European Directive on the Protection of Animals used for Scientific Purposes 2010/63/EU following ARRIVE guidelines, and received approval from the Rowett Institute's Ethical Review Committee. Male C57BL/6 mice, 12 weeks of age and 24-25 g in weight (Harlan, Bicester, UK), were randomly assigned to one of nine dietary groups (n = 12) and fed either: 1. HFD (high fat diet; 45% of energy from fat) (D12451); 2. LFD (low fat diet; 10% of energy from fat) (D12450B); or the HFD where 10% of the carbohydrate by weight (5% corn starch, 5% cellulose) was replaced by the following dietary fibers: 3. beta glucan (HFD + bglucan) (Glucagel, DKSH, Milan, Italy); 4. pectin (HFD + pectin) (Sigma-Aldrich, Gillingham, UK); 5. inulin (HFD + inulin); 6. inulin acetate ester (HFD + inul A); 7. inulin propionate ester (HFD + inul P); 8. inulin butyrate ester (HFD + inul B); or 9. inulin propionate and butyrate esters, 5% each (HFD + inul PB).

Cecal bacterial analysis.
Genomic DNA (gDNA) was extracted from the cecum contents using the FastDNA® SPIN Kit for Soil (MP Biomedicals, Illkirch, France). Total cecal bacterial abundance following the dietary interventions was estimated by quantitative PCR, and the V3-V4 region of bacterial 16S rRNA genes was sequenced on the Illumina MiSeq using a v3 flow cell with 2 × 300 bp paired-end reads (full details of the subsequent analysis steps used are available in Supplementary File S2). Sequencing data generated during this study are available in the SRA database under SRA accession SRP117745 (accessible at http://www.ncbi.nlm.nih.gov/sra/SRP117745).

Whole genome microarray analysis.

Total RNA extracted from liver and cecum using an RNeasy Mini Kit (Qiagen, Crawley, UK) was microarrayed with the SurePrint G3 Mouse GE 8 × 60 K Microarray G4852A (Agilent Technologies, UK) (Supplementary File S2). Data are deposited in NCBI's Gene Expression Omnibus 44 and are accessible through GEO Series accession number GSE106375 (www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE106375). Details of the statistical analysis of microarray data are provided in Supplementary File S2.

Confirmation of microarray-identified differences in gene expression using custom-designed RT Profiler PCR arrays.

Genes showing altered responses to HFD + inulin or HFD + inul PB in cecum were identified from microarray analysis and validated using a custom-designed RT Profiler PCR Array (Qiagen) (Supplementary File S2).

Real-time PCR.

Complementary cDNA templates for real-time PCR assays were prepared from Superscript II (Invitrogen) reverse-transcribed total RNA, and Taqman assays were conducted in duplex with the target and the reference gene UBE2D2 (Supplementary File S2).

Statistical analysis.

Details of the statistical analysis of the cecal microbiota sequencing and the cecum and liver microarray data can be found in Supplementary File S2. Other data are presented as mean ± SEM and were analysed using GenStat (GenStat® 13th Edition, VSN International Ltd., Hemel Hempstead, UK), apart from the RT-PCR data, which were analysed on a logarithmic (delta CT) scale but presented as fold-changes (anti-logged differences) without standard errors. Comparisons of a diet group with the high-fat-fed group were conducted using t-tests. The influence of a single factor and comparisons between the diet groups were tested using one-way ANOVA. Multiple comparisons were tested using either Fisher's protected or unprotected LSD test. Skewed data were log-transformed prior to statistical analysis.
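The convention of analysing RT-PCR data "on a logarithmic (delta CT) scale but presented as fold-changes" corresponds to the standard 2^-ddCT calculation. A minimal sketch is given below with made-up CT values; the reference gene corresponds to UBE2D2 in this study.

```python
import numpy as np

def fold_change_ddct(ct_target_treat, ct_ref_treat,
                     ct_target_ctrl, ct_ref_ctrl) -> float:
    """2^-ddCT fold change of a treatment group versus a control group.

    Each argument is a sequence of CT values across biological replicates;
    the control group here plays the role of the HFD reference.
    """
    dct_treat = np.mean(np.asarray(ct_target_treat) - np.asarray(ct_ref_treat))
    dct_ctrl = np.mean(np.asarray(ct_target_ctrl) - np.asarray(ct_ref_ctrl))
    return float(2.0 ** -(dct_treat - dct_ctrl))

# Made-up CT values (n = 3 per group), not data from the paper.
fc = fold_change_ddct([24.1, 24.4, 24.0], [20.2, 20.1, 20.3],
                      [25.9, 26.2, 26.0], [20.1, 20.3, 20.2])
print(f"fold change vs control = {fc:.2f}")  # ~3.6-fold higher expression
```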
7,627.2
2018-10-22T00:00:00.000
[ "Environmental Science", "Biology", "Medicine" ]
SnSe₂ Quantum Dots: Facile Fabrication and Application in Highly Responsive UV-Detectors
Synthesizing quantum dots (QDs) using simple methods and utilizing them in optoelectronic devices are active areas of research. In this paper, we fabricated SnSe₂ QDs via sonication and a laser ablation process. Deionized water was used as the solvent, and no organic chemicals were introduced in the process. It was a facile and environmentally-friendly method. We demonstrated an ultraviolet (UV)-detector based on monolayer graphene and SnSe₂ QDs. The photoresponsivity of the detector was up to 7.5 × 10⁶ mAW⁻¹, and the photoresponse time was ~0.31 s. The n-n heterostructures between the monolayer graphene and the SnSe₂ QDs improved the light absorption and the transportation of photocarriers, which could greatly increase the photoresponsivity of the device.

Introduction
Graphene-based electronic and optoelectronic devices have attracted extensive attention [1][2][3]. Mueller et al. demonstrated a vertical-incidence metal-graphene-metal photodetector with an external responsivity of 6.1 mAW⁻¹ at 1.55 µm [4]. The photoresponsivity was limited by the low absorption of the graphene. Quantum dots (QDs) can break this limitation. They can act as light absorption spots. The photo-induced carriers in them can transfer into the graphene film, and the charges in the graphene film transport to the electrodes quickly. Thus, the responsivity of a graphene-based device is improved [5][6][7]. Cheng et al. showed a phototransistor based on graphene and graphene QDs with a photoresponsivity of up to 4 × 10¹⁰ mAW⁻¹, but the response time was 10 s [8]. Sun et al. constructed an infrared photodetector based on graphene and PbS QDs with a responsivity of up to 10¹⁰ mAW⁻¹ and a response time of 0.26 s [9]. Sun et al. demonstrated a UV phototransistor based on graphene and ZnSe/ZnS core/shell QDs. Its responsivity was up to 10⁶ mAW⁻¹ and the response time was 0.52 s [10]. To fabricate QD solutions with a uniform distribution, the wet chemical method is commonly used. Some organic solvents, such as toluene or pyridine, are used in the process [9,10]. The chemical groups can cap the surface of the QDs and modify their charge transfer properties, thus influencing the photoresponsivity of the device. Synthesizing QDs using facile and green methods and utilizing them in optoelectronic devices are active areas of research. Two-dimensional transition-metal dichalcogenides (TMDCs) have been applied in fluorescent imaging [11], biological sensing [12], and photocatalysis [13] due to their unique optoelectronic properties. Tin diselenide (SnSe₂) is a semiconductor in the TMDC family. SnSe₂ QDs can be used in fast and highly responsive phototransistors since they have a tunable bandgap and high quantum efficiency. In this paper, SnSe₂ QDs were fabricated via sonication and a laser ablation process. Deionized water was used as the solvent, and no organic chemicals were introduced in the process. It was a facile and environmentally-friendly method. A phototransistor based on monolayer graphene and SnSe₂ quantum dots was demonstrated. The photoresponse time was ~0.31 s, and the photoresponsivity was up to 7.5 × 10⁶ mAW⁻¹. The n-n heterostructures between the monolayer graphene and the SnSe₂ quantum dots enhance the light absorption and the generation of photocarriers. The photocarriers can transfer quickly from the SnSe₂ QDs to the graphene, thus improving the photoresponsivity of the device.
Experiment
SnSe₂ QDs were fabricated by sonication and a laser ablation process, as shown in Figure 1. The SnSe₂ bulk was bought from Six Carbon Technology. We put the SnSe₂ bulk in an agate mortar and manually ground it for 15 min to obtain SnSe₂ powders. Then, we dispersed 20 mg of powder in 30 mL of deionized water. The mixture was sonicated with a sonic tip for 2 h at an output power of 650 W in an ice bath. The power was on for 4 s and off for 2 s. After sonication, the solution was a mixture of small SnSe₂ particles and flakes. The solution was transferred into a quartz cuvette and irradiated under a 1064 nm pulsed Nd:YAG laser for 10 min (6 ns, 10 Hz). The laser output power was 2.2 W. When the solution was irradiated by the laser pulses, the small particles and flakes absorbed the incident photon energies and formed extreme non-equilibrium conditions (high pressure and temperature) in a short time (~ns). After sustained irradiation, the particles and nanosheets broke into tiny pieces. Then, the solution was centrifuged for 30 min at a speed of 6000 rpm. After that, the supernatant containing SnSe₂ QDs was collected. The morphology of the SnSe₂ QDs was studied using a high-resolution transmission electron microscope (TEM, FEI Tecnai G2 F30). The structure of the SnSe₂ QDs was characterized by X-ray diffraction (XRD, Bruker D8 Advance) and Raman spectroscopy (Horiba LabRAM HR Evolution).
The absorption spectra were measured by a UV-vis spectrometer (Shimadzu UV-1700). The chemical vapor deposition (CVD)-grown monolayer graphene was wet-transferred onto a p⁺ Si/SiO₂ substrate [14,15]. The thickness of the SiO₂ was 285 nm. The highly doped p-type silicon served as the back-gate electrode. Then, the Cr/Au (10 nm/90 nm) source and drain electrodes were deposited on top of the graphene film by thermal evaporation. The channel length and width were 0.2 mm and 2 mm, respectively. The optoelectronic properties were studied using a probe station equipped with a semiconductor parameter analyzer (Keithley 4200). The illumination LED wavelength was 405 nm.

Results and Discussion
Figure 2a shows the transmission electron microscope (TEM) image of the as-fabricated SnSe₂ QDs. It shows a size distribution in the range of 5-11 nm, and the average size is 9.8 nm, as indicated in Figure 2b. The average size of the QDs comes from the statistical analysis of the sizes of 200 QDs measured from TEM images. A high-resolution TEM image of a single SnSe₂ QD is shown in the inset of Figure 2a. The lattice spacing is about 0.33 nm, which corresponds to the (1010) planes of hexagonal-phase SnSe₂ [16]. The result shows that the SnSe₂ QDs are crystalline. In the SnSe₂ QDs, the diffraction peaks of the bulk almost disappear except for a tiny peak at 2θ = 29.1°. After sonication and laser ablation, the SnSe₂ bulk was cracked into nanoparticles, and there was no constructive interference from aligned crystal planes [13,17]. The tiny peak at 2θ = 29.1° corresponds to the (002) face, which may come from the partial restacking of QDs in the process of drying. Figure 3b shows the Raman spectra of the SnSe₂ bulk and QDs. The incident laser wavelength is 514 nm and the spot size is around 2 µm. For the bulk SnSe₂, two Raman-active vibration modes are observed at 110.3 cm⁻¹ and 183.6 cm⁻¹, which correspond to the in-plane Eg and out-of-plane A1g modes [18]. For the SnSe₂ QDs, the peak of the Eg mode is very weak, but the peak of the A1g mode is observable and shows a small blue-shift of ~1 cm⁻¹, which may be due to the surface effect and the decrease of SnSe₂ thickness [19]. Figure 3c shows the absorption spectra of the SnSe₂ QD and SnSe₂ nanosheet solutions in the range of 250-1000 nm. The absorption band of the SnSe₂ nanosheet solution is broad, covering regions from the ultraviolet to the near-infrared. It is similar to the absorption band reported for SnSe₂ powders [20].
For the SnSe₂ QD solution, only strong absorption from 250 nm to 420 nm is observed. The bulk SnSe₂ has an indirect bandgap of 1.0 eV [20]. When the particle size is reduced, the emergence of quantum confinement effects leads to the discretization of energy levels. As a result, the SnSe₂ QDs show a larger band gap [21].

Figure 4a schematically shows the photodetector decorated with SnSe₂ QDs on a p⁺ Si/SiO₂ substrate. The Raman spectrum of the pure graphene on a p⁺ Si/SiO₂ substrate is shown in Figure 4b. Two Raman peaks at 1582 cm⁻¹ (G line) and 2698 cm⁻¹ (2D line) are observed. The ratio of the integrated intensities of the G line and 2D line is ~0.25. The peak at 1350 cm⁻¹ (D line) in the spectrum is very weak, indicating that the graphene is a monolayer with good quality. The I-V curves for the monolayer graphene phototransistor in the dark and under illumination at zero gate voltage (VG = 0 V) are shown in Figure 4c. The illumination density is 350 µW/cm². As shown in the figure, there is no change between the current in the dark and under illumination, indicating that the photoresponse of pure graphene is negligible.

Figure 4d shows the transfer curves (IDS-VG, VDS = 0.5 V) of the device with and without SnSe₂ QDs in the absence of light. The transfer curve of the device without SnSe₂ QDs exhibits a typical V-shape. The field-effect mobilities are ~230 cm²V⁻¹s⁻¹ for electrons and ~220 cm²V⁻¹s⁻¹ for holes. The negative neutral charge point (about −5 V) of the single-layer graphene observed in Figure 4d indicates electron-dominated conduction in the graphene. The same behavior was also observed by Sun et al. [10]. Graphene is very sensitive to its surroundings. Defects in the SiO₂ substrate, residues from processing and handling, charged impurities, and substrate surface roughness can cause a shift of the neutral charge point [22]. The SnSe₂ QD solution was dropped on top of the graphene film and heated at 40 °C for 30 min in a glove box filled with N₂ gas. The transfer curve of the photodetector with SnSe₂ QDs becomes asymmetric, and the Dirac point shifts to a negative gate voltage (about −22 V). The shift indicates that the SnSe₂ QDs are n-type semiconductors, the same type as the bulk SnSe₂ [20]. The electron and hole mobilities decrease to ~160 cm²V⁻¹s⁻¹ and ~130 cm²V⁻¹s⁻¹, respectively.

In order to study the optoelectronic properties of the device, we measured the photocurrents at different illumination densities with zero gate voltage (VG = 0 V). Figure 4e shows the relationship between the photocurrent (IPh = ILight − IDark) and the applied drain voltages. ILight is the drain current under illumination, and IDark is the drain current without illumination. The photocurrent increases with increasing illumination density. Figure 4f presents the responsivity (R = IPh/(W·L·Ee)) of the photodetector as a function of drain voltage at different illumination densities. The responsivity decreases with increasing illumination density, which is consistent with reported UV-detectors [23]. The maximum responsivity of the device is about 7.5 × 10⁶ mAW⁻¹ (VDS = 5 V) at an incident power density of 31.7 µW/cm², which is higher than that reported for graphene-based UV phototransistors [24][25][26].
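The responsivity quoted above can be sanity-checked from its definition, R = IPh/(W·L·Ee). Below is a minimal Python sketch using the channel dimensions (0.2 mm × 2 mm) and the 31.7 µW/cm² illumination density given in the text; the photocurrent value is a hypothetical placeholder chosen to reproduce the reported order of magnitude.

```python
# Responsivity of the graphene/SnSe2-QD photodetector: R = I_ph / (W * L * E_e).
W = 0.2          # channel width in cm (2 mm)
L = 0.02         # channel length in cm (0.2 mm)
E_e = 31.7e-6    # illumination density in W/cm^2 (31.7 uW/cm^2)

I_ph = 0.95e-3   # photocurrent in A (hypothetical, I_light - I_dark)

P_incident = W * L * E_e          # optical power falling on the channel, W
R = I_ph / P_incident             # responsivity, A/W
print(f"R = {R:.2e} A/W = {R*1e3:.2e} mA/W")   # ~7.5e6 mA/W
```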
Figure 5a shows the transfer curves of the photodetector at different illumination densities. The Dirac point of the device shifts to a lower negative gate voltage with increasing illumination density. The shift of the transfer curves (∆VG) is plotted as a function of illumination density in Figure 5b. The shift (∆VG) changes linearly with the light illumination density (Ee), indicating that the photo-induced carrier density in the SnSe₂ QDs increases with increasing illumination density. This illumination density-dependent shift does not appear in the pure graphene phototransistor; the presence of the SnSe₂ QDs leads to this photoresponse behavior. As shown in Figure 5a, the electron mobility in the SnSe₂ QD-decorated device is higher than that of holes at different illumination densities. The photo-induced electron-hole pairs are separated at the interface between the SnSe₂ QDs and the monolayer graphene. The SnSe₂ QDs/graphene heterojunction facilitates the injection of photo-generated electrons from the SnSe₂ QDs into the graphene, leading to a local n-doping of the graphene channel. Since the transfer rate of holes is lower than that of electrons, net positive charges remain in the SnSe₂ QDs. Then, a lower negative gate voltage is required to reach the charge neutral point (Dirac point) in the detector. A similar process was reported in a p-doped graphene/PbS QD phototransistor by Sun et al. [9].

Figure 5c shows the current response to on/off light illumination and Figure 5d shows the photocurrent response time of the device (VG = 0 V, VDS = 0.05 V, illumination density: 155.2 µW/cm²). The photocurrent increases with time when the illumination is on and decreases with time when the illumination is off. As shown in Figure 5d, the photocurrent rises to 80% of its maximum with a response time of 0.31 s, which is faster than that reported for graphene devices [9,10,24,26,27]. The response time includes the charge generation time, the charge transfer time in the heterojunctions, and the charge collection time. In our experiment, the measured graphene charge mobility is smaller than the value for perfect graphene (up to 200,000 cm²V⁻¹s⁻¹), which may be due to defects induced in the graphene film during transfer to the substrate; the response time can be improved by optimizing the graphene transfer process. When the light is turned off, the photocurrent decays to 20% within 1.31 s. The photocurrent of the detector is influenced by the SnSe₂ QD density. We measured AFM images and photocurrents for detectors with different SnSe₂ QD densities. As shown in Figure 6, the photocurrent increases with an increase of the SnSe₂ QD density under the same irradiation density (illumination density: 350 µW/cm²). When the SnSe₂ QD thickness is larger than 40 nm, the photocurrent tends to decrease, which may be due to the decrease of charge transfer between the QD layers.

Conclusions
In summary, uniformly distributed SnSe₂ quantum dots were synthesized at room temperature using a facile and environmentally-friendly method. A UV-detector based on monolayer graphene and SnSe₂ quantum dots was demonstrated. The device showed a fast photoresponse time of ~0.31 s, and its photoresponsivity was up to 7.5 × 10⁶ mAW⁻¹. The n-n heterostructures between the monolayer graphene and the SnSe₂ QDs improved the light absorption and the transportation of photocarriers, which makes such devices promising for optoelectronic applications.
4,747.4
2019-09-01T00:00:00.000
[ "Materials Science", "Physics", "Engineering" ]
Bioconversion of stilbenes in genetically engineered root and cell cultures of tobacco
It is currently possible to transfer a biosynthetic pathway from a plant to another organism. This system has been exploited to transfer the metabolic richness of certain plant species to other plants, or even to metabolically simpler organisms such as yeast or bacteria, for the production of high added value plant compounds. Another application is to bioconvert substrates into scarcer or biologically more interesting compounds, such as piceatannol and pterostilbene. These two resveratrol-derived stilbenes, which have very promising pharmacological activities, are found in plants only in small amounts. By transferring the human cytochrome P450 hydroxylase 1B1 (HsCYP1B1) gene to tobacco hairy roots and cell cultures, we developed a system able to bioconvert exogenous t-resveratrol into piceatannol in quantities near to 1 mg L⁻¹. Similarly, after heterologous expression of resveratrol O-methyltransferase from Vitis vinifera (VvROMT) in tobacco hairy roots, the exogenous t-resveratrol was bioconverted into pterostilbene. We also observed that both bioconversions can take place in tobacco wild type hairy roots (pRiA4, without any transgene), showing that unspecific tobacco P450 hydroxylases and methyltransferases can perform the bioconversion of t-resveratrol to give the target compounds, albeit at a lower rate than transgenic roots.

attempts have been made to develop the biotechnological production of t-R in cell factories 12 . In general, t-R production in grapevine cell cultures is very low and needs to be enhanced by elicitors. Methyl jasmonate (MeJA) and methylated-β-cyclodextrin (MBCD) have been reported as strong inducers of t-R biosynthesis and accumulation, acting synergistically when added together to the plant cell cultures 13,14 . Metabolic engineering contributes a potent set of tools for increasing plant secondary metabolite production in cell cultures. The use of strong promoters to overexpress genes involved in biosynthetic pathway bottlenecks is currently a common strategy for improving the production of target compounds in engineered biological systems 15 . In this scenario, Martinez-Marquez et al. 16 recently showed that constitutive expression of resveratrol O-methyltransferase in Vitis vinifera led to the production of t-Pt, and that heterologous expression of the human cytochrome P450 hydroxylase 1B1 (HsCYP1B1) increased t-Pn accumulation in elicited grapevine cell cultures. Also recently, Li et al. 17 described the production of t-R and t-Pt in engineered yeast after feeding the culture with phenylalanine, and Wang et al. 18 reported the production of t-Pt from t-R and p-coumaric acid in two systems, engineered yeast and Escherichia coli. Thus, the stilbenoid biosynthetic pathway can be partially reproduced in these microorganisms by means of metabolic engineering tools. Plant cell cultures have also been used to bioconvert exogenous substrates by exploiting the regioselective and stereospecific properties of plant enzymes, as well as the vast potential of plants for biochemical reactions, including oxidation, reduction, hydroxylation, methylation and glycosylation 19 . Hairy root cultures obtained by genetic transformation of plant material with Agrobacterium rhizogenes can be an efficient alternative to plant cell suspensions for bioconversions due to their greater genetic/biochemical stability, high growth capacity in hormone-free culture media and relatively low cost.
Transgenic cultures have been successfully used for the esterification, glycosylation, hydroxylation, etc. of various substrates, producing known or new compounds, some of them with improved biological activities 20 . Hairy root cultures have also proved useful for the expression of ectopic genes with the aim of bioconverting an abundant natural compound into a scarcely distributed derivative. An example is the efficient bioconversion of hyoscyamine into scopolamine in transgenic tobacco hairy roots carrying the hyoscyamine-6-hydroxylase gene from Hyoscyamus muticus 21 . The aim of the present study was therefore to develop a biotechnological platform based on tobacco transgenic hairy roots and cell cultures and to explore their capacity to bioconvert exogenous t-R into its hydroxylated derivative t-Pn and its methylated derivative t-Pt through the heterologous expression of the human cytochrome P450 hydroxylase 1B1 (HsCYP1B1) or Vitis vinifera resveratrol O-methyltransferase (VvROMT) genes, respectively. According to current SIGMA prices, t-Pn and t-Pt are 25- and 15-fold more expensive, respectively, than t-R 22 . Our results show that both types of engineered hairy roots were able to bioconvert t-R to produce t-Pn or t-Pt and, unexpectedly, that the target compounds, together with piceid, a glucosylated derivative of t-R, were also generated by the biosynthetic machinery of tobacco wild type hairy roots (pRiA4).

Materials and Methods
Bacteria and plasmids. To infect the plant material, three strains of Agrobacterium rhizogenes A4 were used: the wild type and two engineered strains carrying, together with pRiA4, the binary plant expression vector pK7WG2_CYP1B1 or pJCV52_ROMT (Fig. S1) for the HsCYP1B1 or VvROMT genes, respectively. These were preceded by the constitutive Cauliflower mosaic virus 35S promoter, as described in Martinez-Marquez et al. 16 .

Stable transformation of tobacco and hairy root cultures. Leaf segments of Nicotiana tabacum cv. Xanthi plantlets grown in vitro on Murashige and Skoog (MS) medium 23 were infected by direct inoculation with a needle with the wild type A. rhizogenes A4 strain (pRiA4), the engineered A. rhizogenes (pRiA4+pK7WG2_CYP1B1) or A. rhizogenes (pRiA4+pJCV52_ROMT). The hairy roots began to appear after 2-4 weeks (Fig. 2). Small roots (1-2 cm) were excised and individually cultured on MS solid medium with 30 g L⁻¹ of sucrose and 500 mg L⁻¹ cefotaxime to eliminate the agrobacteria. After 6 rounds of subculture in MS medium supplemented with cefotaxime, the antibiotic was removed, and PCR for the virD gene was performed to confirm the elimination of Agrobacterium (Fig. 3C). In the case of roots obtained after infection with the recombinant A. rhizogenes, kanamycin (50 mg L⁻¹) was used for the selection. Hairy root lines were kept in the dark at 25 °C and, after at least 6 rounds of selection by subculturing every 2 weeks in media with antibiotics, they were transferred to MS medium without antibiotics for confirmation by PCR. The growth capacity of the hairy root cultures was measured as the Growth Index (GI, harvested fresh weight/inoculum fresh weight, after 28 days of culture). Only root lines with a high GI were selected for further experiments. A. rhizogenes A4 carrying the empty plasmids pK7WG2 or pJCV52 was also used to obtain hairy root cultures but, as the GI and the t-R bioconversion of these root lines in preliminary experiments were very similar to those of the wild type A.
rhizogenes, the latter was used for comparison with the engineered hairy root lines.

PCR analysis. The hairy root lines were checked by PCR. The analysis was performed using the DreamTaq Green PCR Master Mix (Thermo Fisher Scientific Inc.) with 1 µg DNA. Genomic DNA had previously been isolated from hairy root samples according to Dellaporta et al. 24 . Specific primers were used (Table S1) in the amplification of the rolC, HsCYP1B1, VvROMT and virD genes. The amplification reactions were as follows: 1 cycle at 95 °C for 5 min, followed by 35 cycles at 95 °C for 1 min, 57 °C for 40 s and 72 °C for 1 min 30 s, and an extension cycle of 10 min at 72 °C. PCR products were analyzed by electrophoresis on 1% agarose gels.

qPCR analysis. Expression of the HsCYP1B1 and VvROMT genes was verified by qPCR in the lines used in the experiments. Total RNA from plant material was isolated with TRIzol reagent (Invitrogen, Carlsbad, CA). For qRT-PCR, cDNA was prepared from 2 µg RNA treated with DNase I (Invitrogen, Carlsbad, CA) and synthesized with SuperScript III reverse transcriptase (Invitrogen, Carlsbad, CA). qRT-PCR was performed using the iTaq™ Universal SYBR Green Supermix (BioRad, Hercules, CA, USA) in a 384-well platform system (LightCycler 480 Instrument; Roche), and each sample was run in triplicate under the following conditions: 95 °C for 2 min, 40 cycles (95 °C, 10 s; 60 °C, 20 s; 72 °C, 20 s) followed by a melting curve. Gene-specific primers were designed with Primer-BLAST (Table S1). Expression levels were normalized to those of elongation factor 1α (EF-1α). Stable expression of EF-1α in the different hairy root clones and their derived cell lines was confirmed by the obtained coefficient of variation (CV) of 0.027, which falls within the CV range for potential internal reference genes described by Schmidt et al. 25 .

Extraction and determination of stilbenoids. To extract stilbenoids from the culture medium, 1 mL of ethyl acetate was added per 4 mL of medium with vigorous stirring, and the apolar phase was collected. The extraction was repeated once more, and the apolar phases were combined and evaporated as described in Martinez-Marquez et al. 16 . The roots were frozen, freeze-dried and crushed. 50 mg of lyophilized plant material was placed in a tube with two volumes of 100% methanol and sonicated for 30 min to allow the methanol to penetrate the plant tissues, and the supernatant was collected. Again, two volumes of methanol were added and sonicated for 15 min. The methanolic extracts were pooled and evaporated. In order to measure the accuracy of the extraction method, a precisely weighed quantity of t-R was added to the culture medium, with or without MBCD, and extracted at different times. At time 0, just after the t-R addition, the t-R recovery was higher than 95%, demonstrating the efficiency of the extraction method employed. For stilbenoid extraction from cells, four parts of 100% methanol were added per g of fresh weight, with stirring at 115 rpm for 24 h. The methanol extract was filtered and brought to dryness 16 . All samples were resuspended in 1 mL of 80% methanol, sonicated for 30 min and filtered through a 0.22 µm PVDF filter just before analysis. Stilbenes were determined by a linear ion trap quadrupole LC/MS/MS mass spectrometer (4000 QTRAP, AB Sciex Instruments) with an MRM scan type in negative mode. Standards of t-R, t-Pn, t-Pt and piceid from LGC STANDARDS, S.L.U. were used to prepare the calibration curves described in Table S2.
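As a sketch of how such external-standard calibration curves are typically applied, the snippet below back-calculates an analyte concentration from a linear fit of peak area versus standard concentration; it assumes a linear response, and all numbers are hypothetical, not the values of Table S2.

```python
import numpy as np

# Hypothetical calibration standards for one stilbene (e.g., t-Pn).
conc = np.array([1.0, 5.0, 10.0, 50.0, 100.0])      # standard conc., ug/L
area = np.array([210, 1020, 2050, 10100, 20300.0])  # measured peak areas

# Least-squares linear fit: area = slope * conc + intercept.
slope, intercept = np.polyfit(conc, area, 1)

# Back-calculate the concentration of an unknown extract from its peak area.
sample_area = 6400.0
sample_conc = (sample_area - intercept) / slope
print(f"stilbene in sample: {sample_conc:.1f} ug/L")
```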
The gradient used in this system is described in Table 1. The mobile phases were A: H₂O + 0.05% acetic acid, and B: acetone:acetonitrile (70:30). The column was a Luna 3 µm C18(2) 100 Å, 50 × 2.00 mm (s/n: 008-4251-B0), operated at a temperature of 60 °C with an injection volume of 10 µL. The transitions and retention times are described in Table S3 and Fig. S2. Stilbenoid contents are expressed as µg L⁻¹ in both cells and culture medium to facilitate the calculation of the total amount of stilbenoids in the cultures.

Statistical analysis. This was performed with Excel software. All data are the average of three measurements ± SE. Multifactorial ANOVA followed by the Tukey multiple comparison test was used for statistical comparisons. A p-value of <0.05 was assumed for significant differences.

Results
Establishment of transgenic root cultures of tobacco. Tobacco hairy root cultures were established by the infection of leaf segments with Agrobacterium rhizogenes harbouring the pRiA4 plasmid alone (wild type), or together with pK7WG2_CYP1B1 or pJCV52_ROMT. All the A. rhizogenes strains were able to induce hairy roots after a period of 2-4 weeks (Fig. 2). Fast-growing root lines (GI > 4, Table S4), wild type or carrying the recombinant plasmid, were selected and their transgenic nature was determined by PCR. Fig. 3 shows a band of 534 bp corresponding to rolC of A. rhizogenes in both wild type (pRiA4) and transgenic roots (pRiA4+pK7WG2_CYP1B1), whereas the band of 245 bp corresponding to the HsCYP1B1 gene was observed only in root lines genetically transformed with the binary vector (Fig. 3A). Also, the band of 1100 bp corresponding to the VvROMT gene was only observed in the hairy root cultures infected with the corresponding agrobacteria (Fig. 3B). All the lines tested negative for the virD gene, indicating the absence of agrobacteria in the hairy root cultures (Fig. 3C). These transgenic lines, as well as some of the wild type lines, were selected for further analysis. All the obtained root lines showed the classical hairy root phenotype, a high growth capacity (Fig. 2, Table S4) and the corresponding gene expression (Fig. 4). From these, two selected transgenic root lines carrying the HsCYP1B1 gene (CYP1B1L8 and CYP1B1L27), two lines carrying the VvROMT gene (ROMTL3 and ROMTL7) and two wild type (pRiA4 alone) lines were fed with 2 mM (456.4 mg L⁻¹) of t-R, and samples were taken at different intervals of the culture over a period of 4-56 h.

Bioproduction of t-piceatannol in hairy roots and their derived cell lines. The selected transgenic root lines heterologously expressing the HsCYP1B1 gene were able to actively bioconvert the added t-R into t-Pn, especially when treated with MBCD (Fig. 5A). The highest bioconversion levels were achieved in the MBCD-supplemented CYP1B1L8 line at 8 h, when the t-Pn content was higher than 7 ± 0.46 mg L⁻¹. At the same time, the CYP1B1L27 line reached a t-Pn content of 4.7 ± 0.29 mg L⁻¹, which increased up to 5.2 ± 0.24 mg L⁻¹ at 24 h, after which levels decreased significantly (p < 0.01). In the transgenic cultures treated with MBCD, most of the t-Pn was released to the culture medium, whereas the significantly lower (p < 0.01) levels of t-Pn produced by the untreated cultures remained mainly inside the roots (Fig. 5A). Unexpectedly, wild type hairy root cultures (without the HsCYP1B1 gene) were also able to biotransform t-R into t-Pn, although at a lower rate (0.4%) than the transgenic lines CYP1B1L8 (1.6%) and CYP1B1L27 (1.4%).
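The bioconversion percentages quoted above are consistent with a simple molar-yield calculation, sketched below using the molecular weights of t-R (228.25 g/mol) and t-Pn (244.24 g/mol); what is illustrated is the arithmetic, not the study's own method.

```python
# Molar bioconversion yield of t-R into t-Pn.
MW_TR, MW_TPN = 228.25, 244.24   # g/mol

tr_fed_mM = 2.0                   # 2 mM t-R (456.4 mg/L) added to the culture
tpn_mg_per_L = 7.0                # peak t-Pn reported for line CYP1B1L8

tpn_mM = tpn_mg_per_L / MW_TPN    # mg/L divided by g/mol gives mmol/L
yield_pct = 100.0 * tpn_mM / tr_fed_mM
print(f"molar bioconversion: {yield_pct:.1f} %")   # ~1.4 %, same order as the 1.6 % quoted
```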
As before, t-Pn levels were also significantly higher (p < 0.05) in the MBCD-supplemented wild type cultures and accumulated mainly in the culture medium (Fig. 5A). Regarding the fate of the exogenously added resveratrol in the hairy root cultures (Fig. 5B), the t-R taken up by the cells was partially metabolized into t-Pn and probably other compounds, but this stilbene was also found in the culture medium. In most cases, the remaining t-R contents were lower in the MBCD-treated than in the untreated cultures and accumulated mainly outside the cells. For example, the remaining t-R in transgenic CYP1B1L27 root cultures was 52 ± 3.26 mg L⁻¹ after 24 h of treatment, 77% of which accumulated in the culture medium, whereas when the same line was treated with MBCD, the t-R decreased to 40 ± 2.3 mg L⁻¹, 83% being found in the culture medium (Fig. 5B). The presence of piceid, the glucoside of t-R, was detected in both transgenic and wild type hairy root cultures (Fig. 5C). Piceid levels peaked 24 h after the addition of the substrate and then decreased until the end of the culture period (56 h). Glucosylation of t-R in the cultures devoid of MBCD was higher than in the MBCD-treated cultures, and it was probably a way of detoxifying the excess of exogenously added t-R. Levels of piceid were significantly lower (p < 0.05) than t-Pn in the transgenic cultures (pRiA4+pK7WG2_CYP1B1), whereas wild type cultures (pRiA4) showed a similar content of both t-R derivatives. In contrast with t-Pn, piceid accumulated mainly intracellularly, even in the MBCD-treated cultures (Fig. 5C). The presence of t-Pn in the wild type tobacco cultures, and of piceid and t-Pt in both wild type and CYP1B1 root cultures, suggests that unspecific tobacco hydroxylases, methyltransferases and glucosyltransferases can transform the exogenous substrate t-R into these derivatives. However, the efficiency of these bioconversions was up to 24-fold lower compared with, for example, the capacity of the transgenic CYP1B1L8 root line to biotransform t-R into t-Pn. Despite the considerable variability among the different control and transgenic root lines, we can infer that the high t-Pt production was due to the ectopic expression of the HsCYP1B1 gene, since the average yield of the transgenic CYP1B1 lines (1888 ± 427 µg L⁻¹) was significantly higher (p < 0.05) than that of the control (819 ± 138 µg L⁻¹). It was thus demonstrated that the transgene expression effectively increased the bioconversion of t-R into t-Pt. The most productive hairy root line (CYP1B1L8) was subjected to a hormonal treatment for dedifferentiation and callus induction. The friable calli were then disintegrated and a cell suspension line obtained (Fig. 2). Transgenic cell suspension cultures grew actively, reaching a growth rate similar to that of the parental hairy roots (Table S4). The cell line, with or without MBCD, was fed with the same concentration of t-R as the hairy root cultures to investigate its capacity to bioconvert this substrate to the hydroxylated derivative t-Pn. Like the CYP1B1L8 root line, its derived cell suspension was able to convert t-R into t-Pn, but the production of this system was 8-fold lower than that of the original root line (Fig. 6A). In the MBCD-treated cell cultures, t-Pn accumulated in small quantities in the medium. Its maximum accumulation was at 8 h after feeding the culture with t-R, after which it decreased significantly until the end of the culture (p < 0.01). In the absence of MBCD, only a low amount of t-Pn was detected, 24 h after the addition of the precursor (Fig.
6A). Overall, the cell suspension derived from the hairy root line L8 showed only a limited capacity to bioconvert t-R into t-Pn. Dedifferentiation of the roots to obtain the cell suspension also affected the exogenous t-R accumulation pattern (Fig. 6B). In contrast with the hairy roots, the derived cell suspension culture accumulated t-R mainly inside the cells. When treated with MBCD, a small amount of t-R remained in the culture medium 4 h after feeding, and only traces were detected inside the cells (Fig. 6B). These results suggest a very low stability of t-R outside the cells.

Bioproduction of t-pterostilbene in hairy root cultures. Heterologous expression of the VvROMT gene in hairy root cultures fed with t-R led to the bioconversion of this stilbene to its methoxylated derivative t-Pt, which was found both inside the roots and in the culture medium, with the maximum production reached by the transgenic root line L3, 24 h after t-R feeding (Fig. 7A). In this experiment, the incubation period was not extended because we had previously observed that after 24 h the newly produced stilbene contents in the cultures decreased (data not shown). Like the control and HsCYP1B1 hairy roots (Fig. 5A), the cultures carrying the VvROMT transgene were also able to synthesize t-Pn and piceid, in even greater quantities than t-Pt. In particular, root line L3 reached a t-Pn content of 2 ± 0.14 mg L⁻¹ (Fig. 7B), which was very similar to that of the control root line described in the previous experiment (Fig. 5A). The same line, VvROMTL3, also showed the highest t-Pt production (Fig. 7A). In this experiment, MBCD significantly increased (p < 0.01) the t-Pt content in the culture medium, generally without increasing the total yield of the target compounds in the cultures (Fig. 7A). In hairy roots carrying the VvROMT gene, as in the case of the HsCYP1B1 gene, partial t-R degradation occurred, although t-R quantities of up to 70 ± 2.16 mg L⁻¹ remained in the culture 24 h after feeding (Fig. 7D).

Discussion
Tobacco is a model plant system easily transformed by A. rhizogenes to produce hairy root cultures. This trait may be harnessed for the heterologous expression of foreign genes harbored in engineered A. rhizogenes 26 . The derived genetically transformed cultures exhibit a high growth capacity and genetic stability for long periods 27,28 . The tobacco hairy root cultures we engineered to ectopically express the human CYP1B1 gene had the capacity to bioconvert t-R into t-Pn with a yield of up to 7 ± 0.46 mg L⁻¹, and others expressing the ROMT gene from V. vinifera were able to biosynthesize t-Pt, reaching a content of 2.6 ± 0.19 µg L⁻¹. Although low, the concentrations of t-Pt were more than 25-fold higher than those achieved in the control hairy roots. This biotechnological system thus proved to be suitable for the production of t-Pn and, on a lower scale, t-Pt, both compounds with promising biological activities that are scarcely distributed in nature 1 . Recently, in a similar approach, Martinez-Marquez et al. 16 reported a 200-fold enhancement of t-Pn production in grapevine cell cultures by heterologous expression of the HsCYP1B1 gene, and the presence of t-Pt in transgenic cell lines overexpressing the VvROMT gene, when both cultures were elicited with MeJA and MBCD. Although the t-Pn production achieved in the transgenic V.
vinifera cell line was higher (up to 20 mg L⁻¹) than in our study, the greater t-Pt production of our transgenic hairy roots, as well as the inherent genetic stability of the system 20,28 , confirm that this bioconversion process is also suitable for the production of t-R derivatives. Although examples are few, hairy roots have been previously used for the bioconversion of exogenous substrates. Through the heterologous expression of the hyoscyamine-6β-hydroxylase gene from H. muticus, Häkkinen et al. 27 obtained the alkaloid scopolamine after feeding tobacco hairy root cultures with its precursor hyoscyamine. Similarly, hairy root cultures of Peganum harmala expressing tryptophan decarboxylase of C. roseus produced high levels of serotonin 29 , and Beta vulgaris hairy roots expressing the p-hydroxycinnamoyl-CoA hydratase/lyase (HCHL) gene from Pseudomonas fluorescens produced vanillin when the cultures were fed with ferulic acid 30 . Thus, our results further confirm the capacity of engineered hairy root cultures to biotransform exogenous substrates into target products with interesting biological activities. In this work, the enhancing effects of MBCD on the bioconversion of t-R into t-Pn in tobacco hairy root cultures were also demonstrated (Fig. 5A). MBCD can act as a precursor solubilizer in biotransformation processes. For example, cell cultures of Mucuna pruriens bioconverted 17β-estradiol into 4-hydroxyestradiol when it was solubilized in β-MBCD, and Podophyllum hexandrum cell cultures converted a coniferyl alcohol-MBCD complex into podophyllotoxin 31 . In our experiment, considering the poor solubility of t-R in water, the solubilizing effects of MBCD may have contributed to the improved efficiency of the hairy root cultures in biotransforming t-R to t-Pn. Other factors are also likely to have been involved, especially since MBCD did not clearly show any positive effects on t-Pt production. MBCD has also been used as a permeabilizing agent acting on plant cell membranes, thus increasing the release of plant secondary metabolites such as taxol in Taxus spp. cell cultures 32 . This effect could be responsible for the higher extracellular t-Pn accumulation in the MBCD-treated cultures compared with the untreated control. MBCD may also facilitate the movement of substrates and products through cell membranes during the biotransformation processes and improve the uptake of t-R by the hairy roots, thus facilitating the metabolism of this compound inside the cells and its conversion to other stilbenoids like t-Pn. However, the positive effect of MBCD on the release of reaction products to the culture medium could negatively affect the production of the system if the products are less stable in the medium. This may be the case with t-Pt or piceid, which were absent in grapevine cell cultures even after elicitation with MeJA and MBCD 16 . Wild type hairy root cultures have been widely used to biotransform a range of exogenous substrates for the production of pharmaceutical ingredients, including products with enhanced solubility after hydroxylation and glycosylation 20 . In this context, the presence of small quantities of t-Pn and t-Pt, even in the control cultures, suggests that non-specific tobacco enzymes can also biotransform the supplied t-R into its derivatives in the absence of the corresponding transgene. Similarly, the presence of piceid (a t-R glucoside) in both transgenic and wild type hairy root cultures confirms the capacity of the tobacco hairy roots to glycosylate t-R.
Globally considered, our results show that in tobacco hairy root cultures, t-R and its derivatives are not metabolic end-products and may be transformed by unspecific tobacco enzymes into other known or new products. This could explain the lower t-Pt contents of our cultures compared with those of other biotechnological platforms based on engineered microorganisms. When the VvROMT gene was expressed in transgenic yeast and E. coli, concentrations of 170 mg L⁻¹ and 150 mg L⁻¹ of pterostilbene, respectively, were reached when the cultures were fed with resveratrol 18 . Recently, the t-Pt biosynthetic pathway from phenylalanine was transferred to yeast, which required a dozen genetic modifications, and the engineered cultures produced up to 34 mg L⁻¹ of pterostilbene 17 . However, only the non-bioactive t-R derivative pinostilbene 17 was detected in these cultures and not the high added value t-Pt. These results suggest that bioconversion in metabolically engineered microorganisms can yield a high amount of a target compound, but that the metabolic complexity of plant organisms can provide a wider range of compounds, probably including new products. As mentioned before, plant cell cultures are widely employed for the bioconversion of naturally abundant substrates into scarcer secondary metabolites with important biological activities 19,33 . According to our results, the tobacco transgenic cell cultures carrying the HsCYP1B1 transgene were considerably less able to biotransform t-R into t-Pn than the parental transgenic roots. Nevertheless, the low levels of t-R remaining in the cell cultures, especially when MBCD was added, compared with the root cultures, suggest that the transgenic cells have a high capacity to metabolize t-R. A greater capacity to bioconvert hyoscyamine to scopolamine in hairy roots compared with the corresponding derived cell lines was also found in tobacco transgenic cultures heterologously expressing the hyoscyamine-6β-hydroxylase gene from Hyoscyamus muticus 21 . In contrast, when comparing tobacco hairy roots and cell cultures expressing the geraniol synthase gene of Valeriana officinalis, Vasilev et al. 26 obtained higher levels of geraniol in the cell cultures. However, in this case, the substrate (geranylgeranyl diphosphate) was generated by the plant cells and not added exogenously to the culture.

Conclusions
Taken as a whole, our results show the possibility of developing a t-Pn-producing biotechnological platform based on metabolically engineered tobacco hairy roots heterologously expressing the HsCYP1B1 gene, with MBCD playing an important role as a solubilizing/permeabilizing agent. The t-Pt production achieved was low, but this is an extremely scarce compound, even in its richest natural sources, such as blueberries, which only accumulate ng/g 7 . Thus, the developed system, based on the heterologous expression of the VvROMT gene, has potential as a biotechnological source of t-Pt after an optimization process. Finally, both untransformed systems were also able to biosynthesize t-Pn, t-Pt and piceid using the natural genetic capacity of the host plant to perform non-specific hydroxylations, methoxylations and glycosylations, thus demonstrating the immense capacity of plant cells to carry out biotransformations and generate known or even new products. As previously mentioned, metabolically engineered yeast and E. coli cultures have been developed for t-R production from simple and abundant precursors such as phenylalanine and p-coumaric acid.
However, production in these systems requires the introduction of the whole gene set of the metabolic pathway for stilbenoid synthesis. In contrast, since the direct natural t-R precursors, malonyl-CoA and p-coumaroyl-CoA, are already found in plant tissues, heterologous production in biotechnological platforms of plant origin has the advantage of requiring the introduction of only one or two genes. Therefore, and in accordance with our results, it is conceivable that in the near future new biotechnological systems based on plant cell or hairy root cultures will be designed to produce t-R by heterologous expression of the stilbene synthase gene, as well as the resveratrol derivatives t-Pn and t-Pt, if they also carry the transgenes VvROMT and/or HsCYP1B1. In support of this hypothesis, Xiao et al. 34 dramatically activated rosmarinic acid biosynthesis by the genetic manipulation of only two genes of the metabolic pathway in hairy root cultures of Salvia miltiorrhiza.
6,442.8
2017-03-27T00:00:00.000
[ "Biology", "Environmental Science" ]
Measuring Current in a Power Converter Using Fuzzy Automatic Gain Control
The accuracy of current measurements can be increased by appropriate amplification of the signal to within the measurement range. Accurate current measurement is important for energy monitoring and in power converter control systems. Resistive and inductive current transducers are used to measure the main current in AC/DC power converters. The output value of the current transducer depends on the motor load, and changes across the whole measurement range. Modern current measurement circuits are equipped with operational amplifiers with constant or programmable gain. These circuits are not able to measure small input currents with high resolution. This article proposes a precise loop gain system that can be implemented with various algorithms. Computer analysis of various automatic gain control (AGC) systems proved the effectiveness of the Mamdani controller, which was implemented in an MCU (microprocessor). The proposed fuzzy controller continuously determines the value of the conversion factor. The system also enables high resolution measurements of the current drawn by small electric loads (≥1 A) when the electric motor is stationary.

Introduction
Current measurements play a very important role in energy management systems, as well as in vector control of electric motors. The harmonics in the current of an input power converter for a railway vehicle were presented in [1]. The harmonic spectra of traditional and modern trains were presented in [2]. Data acquired from the pantograph current of a 3 kV DC railway locomotive and a 1.5 kV metro vehicle were presented in [3]. In [4], changes in the current peak values of a DC substation were presented. An important example of the use of electronic systems to record the consumption of electrical energy is the calculation of the cost of electricity consumed by business entities, such as the supplier (electricity distribution company) and the customer (e.g., an electric vehicle). These calculations are usually made using electricity meters, which are not without limitations. To determine the important operating parameters of a power converter, the current must be measured. The dynamically changing value of the current in a power circuit can affect the metrological properties of the measurement system, depending on the load of the drive system. The main challenge is to properly match the peak current to the input range of the analog-to-digital converter (ADC). When measuring the signal, a dynamic error should be expected. This dynamic error depends on the selection of the current transducers, and also on the signal processing procedures. The gain applied to the measured signal can be set using a programmable gain amplifier (PGA) or a precision amplifier with a fixed gain factor. This method does not allow use of the upper limit of the dynamic range of the ADC, which operates at the end of the measuring circuit, and therefore does not enable high resolution measurements. Several methods that allow the value of the measured signal to be adjusted to the measurement range are discussed in the literature. An automatic gain control system has a feedback loop to control the gain factor. In modern applications, AGC circuits are equipped with microcontrollers that are compatible with many signals. An example of an AGC based on a variable gain amplifier (VGA) is given in [5]. In [6], the authors used a programmable amplifier loop control based on the input signal amplitude.
In [7], an AGC system is described based on a VGA connected in cascade with a PGA. Programmable gain amplifier circuits are often used in front-end circuits, as demonstrated in [8] for quartz tuning fork (QTF) signal conditioning. The structure of a PGA system controlled by a serial interface with the control algorithm is presented in [9]. A method of designing PGA-based amplification systems using Matlab and Spice is presented in [10]. An accurate method of signal processing using programmable amplifiers can be found in [11]. Fuzzy controllers can also be used to implement an AGC system, as described in [12]. Such systems enable the creation of a hyperplane that acts as an algorithm controlling the operation of the AGC system, without the need for mathematical transformations. These capabilities have led to the implementation of fuzzy controllers in AGC systems. Intelligent current measurement systems increase the reliability of the systems in which they process signals. Smart energy meters are supported by fuzzy logic, as discussed in [13]. An example of the use of fuzzy logic in an energy management system for food manufacturing processes is presented in [14]. Another use of fuzzy logic to minimize energy consumption in residential buildings was described in [15]. In [16], the author of the current work proposed a measurement system using a fuzzy controller. The circuit contained an analog multiplier, which caused a measurement error. An additional problem with this system was the limited output range of the high-voltage DAC. The present article is a continuation of that work. A precise AGC system is proposed that can be implemented with various algorithms. The inspiration for the intelligent measurement system came from applying a cybernetic perspective to the use of artificial intelligence (AI) to solve issues related to measuring current in power inverters with dynamically changing loads. The proposed intelligent system is not based on mathematical relationships that describe the gain factor of the current measurement circuit in a power inverter, but on linguistically formulated rules that allow for the rapid creation of an accurate measurement system. This is an advantage over traditional measurement solutions. Expert knowledge of the current signal trend thus allows for precise modeling of the AGC system. The proposed AGC system has the additional advantage of enabling the introduction of an additional input. Usually, speed feedback is not used in automatic gain control systems. Fuzzy logic allows additional signals to be introduced into the system by creating a premise linked by an appropriate logical operator; the control variable uses the derivative of the signal, d(V_oIV(t))/dt, where V_oIV is the product of the shunt resistance R_sh and the measured current. This derivative provides information on the rate of change of the measured current. In the case of a high rate of change (in V/s), the fuzzy AGC system would lose accuracy, and therefore does not amplify the measured signal. A steep rise in the peak value of the measured current moves it rapidly towards the upper range limit of the measuring circuit. Measuring such a signal at time t_1 and amplifying it causes an error: the system generates the gain (A) only at t_1 + t_agc, and therefore amplifies the signal value V_oIV(t_1 + t_agc), whose peak is higher than V_oIV(t_1).
An AGC system operating in this way may result in saturation of the signal, which is associated with the loss of measurement information. The fuzzy solution is able to continuously generate accurate values of the voltage gain (A_d = var.), and therefore allows for high resolution measurements. The AGC system is able to accurately measure currents ≥1 A resulting from a load other than the electric motor. This is important for energy management, motor control, and other systems that use current measurement. To summarize, in current measuring systems in which the measured quantity changes across the whole measurement range, an appropriate system should be used to adjust the gain. Appropriate gain adjustment increases the resolution of the measurement. AGC systems based on programmable amplifiers or intelligent algorithms are used primarily in converter systems for controlling the operation of electric motors (e.g., in electric vehicles). Here, a system is presented that was created primarily to measure small values of current in the power supply circuit of a power inverter. This current may appear when a power-inverter-controlled electric motor is not running but other circuits are on. The fuzzy AGC system enables the measurement of values representing 2% of the rated current. A novel feature of the proposed AGC is the possibility of implementing the system in any microprocessor architecture. Moreover, the multiplication operation was performed in the microprocessor, so an analog multiplier was not needed and the precision was increased. The main problems with using an analog multiplier are the settling time and the phase shift between input and output [17]. Multiplier errors consist primarily of input and output offsets, scale factor errors, and nonlinearity. An expression for the output of a real analog multiplier is given by

V_out = (k + Δk)(V_in1 + V_1os)(V_in2 + V_2os) + V_o + f(in, out),

where k is the scale factor of the multiplier; Δk is the scale factor error; V_in1, V_in2 are the input signals; V_1os, V_2os are the input offset voltages; V_o is the multiplier output offset voltage; and f(in, out) is the nonlinearity.

Materials and Methods
The main reason for using gain correction circuits in measurement systems is to increase the resolution. This section describes the AGC structures used in different implementations. Metrological problems are caused by the change in the measured values across the entire measuring range. In this case, low-peak signals are not measured accurately, due to the lack of appropriate signal amplification. For example, an ideal 12-bit ADC with a 5 V reference voltage can resolve input voltage steps of about 0.0012 V (5 V/2¹²). Appropriate gain allows the signal to be recorded using more digital levels. To accomplish this task, computer analysis was performed on various AGC systems. On the basis of the results, a system of signal gain control using a fuzzy controller was selected. To achieve high measurement resolution, the measured signal should be amplified to the upper limit of the dynamic range of the ADC converter. Due to the measurement error of the AGC system and the rapidly changing values, the input range was reduced to 80% of the reference voltage value.
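As a numerical illustration of this resolution argument, the sketch below computes the ideal LSB of a 12-bit, 5 V converter and the gain that would place a small input at 80% of the reference; the 50 mV input is an illustrative value, not a measurement from the paper.

```python
# Resolution of an ideal 12-bit ADC and the gain targeting 80% of full scale.
V_REF = 5.0
N_BITS = 12

lsb = V_REF / 2**N_BITS            # ~1.22 mV per code for an ideal converter
v_in = 0.050                       # example shunt voltage, 50 mV (illustrative)
gain = 0.8 * V_REF / v_in          # gain that places the signal at 80% of range
codes = (gain * v_in) / lsb        # digital levels used after amplification
print(f"LSB = {lsb*1e3:.2f} mV, gain = {gain:.1f}, codes used ~ {codes:.0f}")
```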
The algorithm implemented in the MCU (Figure 1) determines the gain value and selects the closest value from the gains available in the PGA amplifier, such that the measurement signal reaches the maximum value in the ADC range. The operation of a programmable amplifier in a loop is presented in Figure 1, and the tasks are listed below. In an AGC PGA, the electronic circuits need time to perform the following tasks: • The PGA amplifier measures the input voltage over time (t p ) in V inPGA (t p ); • The ADC converter supplies conversion data (t ADC ) in V inPGA (t p + t ADC ); • The MCU determines the gain value (A PGA ) in time t alg ; • The MCU sends the command to the PGA amplifier (t conv ). According to the signal processing times given in the AGC system, the input signal V inPGA (t p ) is amplified after the time t k = t p + t alg + t conv has elapsed. For this reason, a 20% reduction in the measuring range protects against information loss due to excessive amplification of the measured signal. The main disadvantage of this system is its inability to generate an arbitrary gain value. Digital regulation of the conversion factor of the measured current in the power inverter using a PGA circuit, or an amplifier with a fixed gain factor, does not allow the measurement signal to reach the upper limit of the dynamic range of the ADC converter. The waveforms in Figure 2a were obtained in a computer simulation of the voltage response V outPGA of the PGA amplifier for the input signal V inPGA . Figure 2b shows the voltage gain generated by the PGA amplifier controlled by a microprocessor system. The large discrepancy between the gains available in the PGA amplifier and the calculated gain does not allow the desired increase in the resolution of the measurement of current in the power inverter. Structure of the Automatic Gain Control The main role of the AGC corrector system is to select a suitable value for the gain of the output signal from the current transducer, and thereby increase measurement accuracy. For this purpose, electronic circuits (i.e., a DAC converter) were used. This system operates on the basis of the peak value of the input signal and its derivative. On the basis of these signals, the AGC system generates the appropriate gain and sends this value through feedback loops to the amplifying stage. The input signal should be close to 80% of the level of its dynamic range. Figure 3 shows two different electric circuits used to correct the gain of the output signal (V oIV ) from the current transducer. Each of the systems contains a separate ADC M1 or ADC M2 converter, which is used to measure the current signal. Measurement outputs are marked as ch 1_agc1 and ch 1_agc2 .
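A minimal sketch of this discrete gain selection (assumed gain set and target; not the paper's code) makes the resolution loss visible:

PGA_GAINS = [2**n for n in range(0, 12)]   # available gains 1 ... 2048 V/V
V_REF = 5.0
TARGET = 0.8 * V_REF                       # aim at 80% of full scale (4 V)

def select_pga_gain(v_in_peak):
    # Pick the largest discrete gain that keeps the output within the target.
    candidates = [g for g in PGA_GAINS if g * v_in_peak <= TARGET]
    return max(candidates) if candidates else PGA_GAINS[0]

g = select_pga_gain(0.1)      # 0.1 V after the input amplifier (1 A case)
print(g, g * 0.1)             # -> 32 and 3.2 V: 0.8 V of the range is unused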
The AGC corrector system is equipped with isolation amplifiers, shown in Figure 3 as Iso1. Their main role is to provide galvanic isolation between the circuits of the current converter (IV) and the digital circuits of the current meter and the AGC corrector. The current converter converts the measured I sh current into a proportional low-voltage V oIV signal. The first realization procedure for determining the desired value of the voltage gain (A d ) relies on measurement of the peak input voltage V oIV by an auxiliary circuit (auxiliary channel aux_c with an ADC IA converter (Figure 3a) or ADC d ). The obtained information is transmitted to the memory of the microprocessor unit (MCU1). Once the values of a few voltage samples V oIV (nT) have been measured (where n is the sample number and T is the sampling interval), their derivative is determined (throughout, * denotes a crisp value, i.e., an input or output signal of the fuzzy controller). These values are provided at the inputs of the fuzzy system implemented in the microprocessor (MCU1). According to expert knowledge given via rules, the fuzzy controller generates the required gain. This value is converted into an analogue signal (A d ) by the digital-to-analogue converter (DAC 1 ). The required conversion factor of the current measurement circuit (ch_agc1) in analog form and V oIV are applied to the input terminals of the analog multiplier MULT (Figure 3a). The multiplier performs the output function V oIV ·A d . Consequently, the ranges between the current converter (IV) output and the current input channel ch_agc1 are matched.
The main problems with the proposed fuzzy corrector AGC system (shown in Figure 3a) are the high output voltage required from the DAC 1 converter, which has a limited ability to generate a voltage signal on its output (e.g., the output range of the AD5540 converter is equal to 60 V), and the small input voltage range of the analog multiplier (e.g., for the MPY 634 multiplier the input range is 10 V); the DAC 1 is connected to the input of the multiplier. For this reason, in the second realization the same fuzzy controller was implemented in the CPU of the microprocessor (MCU2, see Figure 3b). The value of the desired gain (limited to the floating-point number range: float or double, as defined in the IEEE 754 standard) is sent to the memory of the microprocessor system. The ADC d converter (Figure 3b), part of the microprocessor structure, was used to measure the V oIV signal. The digitally generated gain value (A d , see signal #2 in Figure 3b) and the peak value (V oIV , see signal #1 in Figure 3b) are multiplied digitally in the memory of the microprocessor. The product is given to the input of the DAC 2 converter (signal #3, see Figure 3b). The DAC 2 transmits the properly amplified input voltage (signal #3, see Figure 3b) to the input of the ADC M2 converter, which allows full use of its dynamic range. To determine the measured signal (V oIV ) in the microprocessor system (MCU3, see Figure 3b), information about the designated gain A d (signal #4, see Figure 3b) is needed. The fuzzy system shown in Figure 3 consists of three controllers operated in parallel. The shapes of the input/output planes of these controllers differ from each other only in the range of the considered interval (X1-axis, V oIV ) of each controller. The output value of this three-stage structure is selected by a structural program acting as a switch-case. The selection of one output value out of the three fuzzy controllers (s, l, h) depends on the peak value of the signal V oIV , as measured directly by the ADC IA converter (Figure 3). The input ranges of the controllers result from the possibility of spacing the modal values of the membership functions of the fuzzy sets containing the voltage V oIV and its derivative. There are reports in the literature of AGC circuits based on intelligent algorithms, such as in [12]. Most of the AGC structures are used to amplify signals such as radio waves. To improve the metrological properties of current measurement systems in power inverters, AGC correctors based on programmable amplifiers can be used. The family of integrated electricity meters provides an example of such solutions. Structure of the Fuzzy Controller An important feature of fuzzy systems is their robustness under conditions when the system parameters are uncertain. The disturbance attenuation problem of fuzzy large-scale systems was shown in [18]. The robustness of a designed fuzzy controller with reduced complexity was shown in [19]. The grid division of the input space and the overlapping sector should be properly designed [20], to ensure a smooth transition of the measurement point between the sectors of the input space. In fuzzy control, the stability of the system operation can be checked by means of a state phase analysis known as the geometric method [20]. This method reveals how robust the controller can be with an appropriate set of database parameters, especially the premises of the rules.
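Returning to the digital gain path of Figure 3b described above, the multiply-then-convert step can be sketched as follows (hypothetical helper names and values; not the paper's firmware):

def apply_digital_gain(v_oiv_peak, a_d, v_ref=5.0, dac_bits=12):
    # Multiply digitally in the MCU (no analog multiplier), clamp to the
    # 80% full-scale limit, then quantize to the DAC_2 output grid.
    v_out = min(v_oiv_peak * a_d, 0.8 * v_ref)
    code = round(v_out / v_ref * (2**dac_bits - 1))
    return code * v_ref / (2**dac_bits - 1)   # voltage passed on to ADC_M2

print(apply_digital_gain(0.1, 39.8))           # ~3.98 V, cf. Figure 6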
The sliding control ensures the existence of a sliding motion, which results in invariance with regard to parameter changes [20]. The sliding law of a fuzzy controller is a combination of sliding control and fuzzy logic. The robustness of fuzzy regulators is a consequence of the proven robustness of the sliding regulator. Another example of a controlled AGC system is the cascade connection of a PGA amplifier with a VGA. In this structure, the PGA amplifier matches the measured signal to the input of the VGA. The signal from the shunt output terminals at 1 A current is 125 µV and is amplified by the input amplifier to 0.1 V. Then, it is amplified by the PGA with a constant factor of 8 V/V to the level of 0.8 V, while the VGA amplifier with a set gain of 13 dB amplifies the signal to the value of 3.57 V. The measured signal is thus matched to 80% of the dynamic range of the ADC. The main problem with this AGC solution is that the two integrated circuits are controlled by an algorithm consisting mainly of conditional instructions, which extends the gain control time. The SR (slew rate) coefficient for this structure of AGC can be determined from the formula min{SR OA , SR PGA , SR VGA }. A significant disadvantage of the AGC solutions described above is that the gain changes every 2 dB or as a function of 2 n , which makes it impossible to generate a gain signal in a continuous way. The bandwidth is dependent on the gain set by the amplifiers. An additional error in the system comes from the settling time of the amplifiers. The use of two programmable circuits allows for better adjustment of the measurement signal to the dynamic range of the ADC. A detailed description of the proposed fuzzy controller is given in [16,20-26]. For the module (implementing Mamdani-type inference), the input vector is [V oIV (nT s )*, d(V oIV (nT s )*)/dt] T and the output is A d *, where V oIV (nT s )* is the instantaneous value of the voltage sample from the power inverter current measurement circuit at a discrete time nT s ; n is the sample number; T s is the sample interval; d I /dt I is the derivative operator of the I th order; * is the crisp value; T is the vector transposition; and A d * is the output signal of the fuzzy controller, representing the desired gain value [24]. The proposed fuzzy model uses two inputs, V oIV and its derivative d(V oIV )/dt, and contains six fuzzy sets (the database contains 64 rules (2 6 )). The membership functions selected for the construction of the fuzzy controller are triangles, due to their easy implementation in a microprocessor system. The placement of rules in the controller space is a more important parameter influencing the accuracy of the AGC system than their number. Algorithm 1 presents the principle of operation of the AGC for two exemplary activated rules, R 1 and R 2 (see Equation (1)). The V oIV data and the derivative d(V oIV )/dt are entered into the AGC system (see Algorithm 1, points 1 and 2). The V oIV values are checked by conditional statements (see Algorithm 1, points 4, 14, and 15). This data is then entered into the input of the controller as crisp values (V oIV (nT)*; d(V oIV (nT)*)/dt) (see Algorithm 1, point 5), where it is subjected to fuzzification (see Algorithm 1, point 6). The data is assigned to fuzzy sets with a determination of the degree of membership ([µ A1 (V oIV ), µ A2 (V oIV ), µ B1 (V oIV '), µ B2 (V oIV ')]). The MIN operator was used to determine the weight w j of the rule (see Algorithm 1, point 8).
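The premise evaluation up to this point (points 5-8) can be sketched in a few lines (illustrative set parameters; the paper's actual membership functions are not reproduced here):

def tri(x, a, b, c):
    # Triangular membership function with feet at a, c and peak at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

v_oiv, dv_oiv = 0.12, 50.0             # crisp inputs V_oIV and d(V_oIV)/dt
mu_a1 = tri(v_oiv, 0.0, 0.1, 0.2)      # degree of 'V_oIV is small'
mu_b1 = tri(dv_oiv, 0.0, 0.0, 200.0)   # degree of 'derivative is low'
w1 = min(mu_a1, mu_b1)                 # MIN operator: rule weight w_1
print(mu_a1, mu_b1, w1)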
The decision is a fuzzy set C j ' (see Algorithm 1, point 9) whose membership function is determined by the equation µ Cj' (A d ) = w j ·µ Cj (A d ). The conclusion from the two rules is given by the equation µ C' (A d ) = max(w 1 ·µ C1 (A d ), w 2 ·µ C2 (A d )) (see Algorithm 1, point 10). A conditional if statement implements each sub-scope of the AGC system; Algorithm 1 shows the sub-range for voltages V oIV > 0.2 V (see Figure 4). In the listing, point 5 is the input of the fuzzy controller (V oIV (nT)*; d(V oIV (nT)*)/dt), point 6 is the fuzzification of the input data, and the defuzzified output is computed as A d * = (∑µ C (A d(i) )·A d(i) )/(∑µ C (A d(i) )). Resistance measuring shunts R sh (metal alloy with low inductance and thermal electromotive force EMF < 1 µV/°C) allow for accurate measurement of DC and AC signals, which vary across the whole measuring range. The input range of the resistance shunt is from 1 A to a maximum of 50 A. A current below 1 A is impossible to measure from the 125 µΩ shunt using classical methods and measuring elements. The minimum current value appears when the electric motor connected to the DC/AC power inverter is turned off, while the main power circuit is loaded by a 1 A current device. Two amplifiers operating in non-inverting configuration are connected to the output terminals of the measuring resistor R sh , creating a system with a differential input and output with a constant voltage gain of 800 V/V. These zero-drift amplifiers minimize offset voltage and temperature drift, which matters because of the small measurement signal output from the shunt at the minimum current of 1 A (see Figure 4, I L ). The measurement circuit must have a galvanically isolated power supply (e.g., using a push-pull converter consisting of a transformer driver connected to a transformer with a reference voltage). The gain in the measurement circuit should be such that it matches the value of the measured signal to the range of the ADC converter (12-bit SAR type) (see Figure 4). In the case of large changes in the value of the measured current, instead of an operational amplifier a PGA system with programmable gain, or a cascade of PGA circuits with a VGA, should be used (see Figure 4). The programmable amplifier offers voltage gains from 1 V/V to 2048 V/V. In the case of a measuring shunt with a resistance of 125 µΩ, a current of 1 A is amplified by the input amplifier to a voltage level of 0.1 V. Next, this signal is amplified by a programmable amplifier so that it reaches a value equal to 80% FS (4 V at V ref = 5 V, and full scale (FS) = V ref ) of the ADC converter. For this purpose, the PGA sets the gain at 32 V/V. In this case, the voltage at its output is equal to 3.2 V, which means that the 80% FS level is not reached (0.8 V unused). There are 655 unused transitions (3276 − 2621 = 655). This implies a loss in the converted signal accuracy. The maximum rate of voltage change is described by the slew rate. The slew rate of the measurement structure is estimated on the basis of the relationship min{SR OA , SR PGA } (where SR OA is the slew rate of the input amplifier and SR PGA is the slew rate of the PGA amplifier).
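Continuing the sketch started above, the implication, aggregation, and centroid defuzzification of Algorithm 1 can be rendered as follows (illustrative rule outputs; assumed gain grid):

A_D_GRID = [0.5 * i for i in range(101)]     # candidate gains 0 ... 50 V/V

def clipped(mu_c, w):
    # Implication: clip the consequent set mu_C by the rule weight w.
    return lambda a: min(w, mu_c(a))

def defuzzify(rule_outputs):
    # MAX aggregation, then A_d* = sum(mu*A_d)/sum(mu) over the grid.
    mu = [max(f(a) for f in rule_outputs) for a in A_D_GRID]
    s = sum(mu)
    return sum(m * a for m, a in zip(mu, A_D_GRID)) / s if s else 0.0

c1 = clipped(lambda a: max(0.0, 1 - abs(a - 10) / 5), 0.75)   # rule R_1
c2 = clipped(lambda a: max(0.0, 1 - abs(a - 20) / 5), 0.25)   # rule R_2
print(defuzzify([c1, c2]))                                    # crisp gain A_d*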
The slew rates SR OA and SR PGA should not differ significantly from each other; otherwise, the system will be limited by the inferior parameter. Using an AGC fuzzy system allows for continuous amplification and adjustment to the ADC dynamic range, with an error resulting only from the properties of the elements used for its implementation. The proposed solution is simple to implement and enables the measurement of small values of the current. Simulations of the AGC System To verify the error in the fuzzy corrector's conversion factor (Figure 3b), a family of first-order polynomial signals with various slopes (expressed in geometric degrees) was applied to the input terminals (in+, in−; see Figure 3). Matlab/Simulink and the Fuzzy Logic Toolbox were used to carry out this analysis. The results are presented in Figure 5. The voltage waveforms f 2 , f 4 , f 6 shown in Figure 5 (marked in red) show the qualitative advantage of the PROD operator (the type of implication used with the tuned fuzzy sets) over conventional mathematical operators used in fuzzy logic. The circles c 1 , c 2 , c 3 in Figure 6 mark the areas where a rapid increase appears in the waveforms f 2 , f 4 , f 6 , caused by the switchover (switch-case statement) between the fuzzy controllers operated in parallel (s, l, h, Figure 4). The tuned membership functions and the PROD operator enable the waveform to be smoothed. Fuzzy AGC tuning relies on appropriate membership function spacing while observing the output signal. The main advantage of a tuned fuzzy controller is that it generates a more accurate gain factor in the AGC system. This advantage can be seen between the signal pairs f 1 -f 2 , f 3 -f 4 , f 5 -f 6 shown in Figure 5. The output signals f 1 , f 3 , f 5 of the tuned fuzzy controller increase much faster than in a conventional controller. Thanks to this, they reach 80% of the dynamic range of the ADC faster. On the basis of the results shown in Figure 5, and compared to the possibilities of digitally controlled amplifiers, fuzzy controllers have an obvious potential for use in AGC systems. It should be noted that fuzzy controllers and other technologies related to artificial intelligence are increasingly being used in the development of microprocessor systems. The main advantage of fuzzy controllers is the ease with which the input/output plane can be corrected, by changing the parameters of the rules defined by Equation (1) without the need for mathematical modeling. This type of parameter change in the fuzzy controller allows for rapid responses and high accuracy without impacting the computational complexity of the microprocessor system. Results and Analysis The fuzzy AGC system used a 32-bit microprocessor system equipped with a 12-bit successive-approximation ADC. The AGC test system corresponds to the structure shown in Figure 4.
Figure 4 shows the AGC system implemented in the microprocessor. Other examples of the implementation of fuzzy controllers in microprocessor systems have been presented in [27,28]. To verify the operation of the proposed AGC system, the analog experimental signal (V oIV ) shown in Figure 6 (channel 2) was applied at the input. The output signal (V ch2 ) of the AGC circuit reaches 3.98 V, which is close to the desired value of 4 V (Figure 6, channel 1). The voltage drop shown in Figure 6 (C s ) is a result of switching between controllers working on different ranges (see Figure 4, marked as l, s, h). Analysis of the signals shown in Figure 6 reveals that the AGC system implemented on the microprocessor allows for a measuring range of 4 V. To verify the dynamic operation of the AGC system, a square signal was provided at the input (Figure 6, channel 2). The output (V ch2 ) of the DAC (Figure 6, channel 1) is shifted in phase with respect to the given signal (V oIV ), due to the processing time required by the AGC algorithm. This should be taken into account in the second measuring circuit (voltage channel) of the electricity meter. A comparison of the results for AGC systems based on different structures is given in Table 1. The results were obtained by computer simulation in Matlab/Simulink software for three types of AGC structures, based on a PGA amplifier, a PGA connected to a VGA, and the fuzzy controller. Table 1. Errors in the measurement system using a 12-bit ADC converter supplied with a reference voltage of 5 V (PGA: gain programmable by 2 n with a maximum value of 4096; VGA: gain range 2-32 dB with 1 dB step; R sh = 125 µΩ; OA = 800 V/V). The NOT_P symbol in Table 1 indicates that measurement is not possible. This problem occurs with a cascaded connection of two amplifiers (PGA and VGA) and is related to the permitted input range of the VGA. The data in Table 1 prove the effectiveness of AGC systems based on a fuzzy controller. The proposed structure does not contain an analog multiplier, which gives it greater dynamics in relation to other AGCs. An important advantage of the AGC fuzzy system is its ability to measure each signal thanks to a switchable range. The workbench of the AGC circuit with fuzzy logic is presented in Appendix A in Figure A1. Accuracy of AGC Determining the measurement error and uncertainty is very important. The estimated errors in this case show the possible measurement spread. Thanks to this analysis, it is possible to reduce the error of the correction system by finding the element that most affects it. This analysis is used to estimate the safe range within which the signal can be amplified, so as not to exceed the measurement range. The main reason for not considering AGC systems containing a PGA amplifier is the lack of continuity in the generation of the gain factor, as presented in Figure 2b, A PGA . For this reason, the measurement uncertainty was determined for the AGC structure based on a microprocessor containing the fewest possible analog electronic elements, which increases its accuracy. In simplified form, the structure from Figure 3b can be presented as shown in Figure 7.
Table 2 shows the most important individual errors for the AGC system shown in Figure 7. The data given in least significant bits (LSB) in Table 2 are converted into volts using the formula Error (V) = Error (LSB)·(V ref /2 N ), where N is the number of bits. The measured value depends on the measurement function f = AGC(V 1 = V oIV , V 2 = V outADC , V 3 = A DSP , V 4 = V DAC2 ) (see Figure 7). To calculate the limiting error in the considered AGC structure, error propagation with a confidence level of p = 0.95 should be used, given by the relationship ∆ AGC = √(∑ j (∂f/∂X j ) 2 ·(∆x j ) 2 ) (Equation (2)), where f is the measurement function, ∆x j are the individual limiting errors of the electronic components included in the AGC structure, and ∂f/∂X j is the sensitivity factor of the function to changes in the input quantity X j . The limiting errors (the variables ∆x j in Equation (2)) are given by the relationship ∆x j = √(∑ i e i 2 ) (Equation (3)) [30], where e i are the errors appearing in the electronic devices. According to Equation (3), it is possible to determine the processing error (root sum square (RSS)) of the ADC converter from the AGC structure (see Figure 7), which can be written as ∆ ADC = √(∆V os 2 + gain 2 + INL 2 ) (Equation (4)) [30], where ∆V os is the offset voltage, gain is the gain error, and INL is the integral nonlinearity. The limiting error of the fuzzy logic controller was determined by e 1 = (1/m)·∑ i=1..m |A system − A model | [31,32] (where m is the number of data points) and equals 0.05. For the operational amplifier and the DAC, errors are determined based on Equations (2) and (3) and catalogue data. The limiting error of the fuzzy AGC was determined and is presented in Figure 8.
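The whole error chain of Equations (2)-(4), together with the Type B evaluation described next, can be sketched as follows (placeholder component values, not the paper's catalogue data):

import math

def lsb_to_volts(err_lsb, v_ref=5.0, n_bits=12):
    return err_lsb * v_ref / 2**n_bits

def rss(errors):
    # Root-sum-square combination of independent limiting errors, Eq. (3).
    return math.sqrt(sum(e**2 for e in errors))

# Eq. (4): ADC error from offset, gain error and INL (values in LSB):
delta_adc = lsb_to_volts(rss([2.0, 1.5, 1.0]))

# Eq. (2): propagation over the measurement function with sensitivities:
sens = [1.0, 0.8, 0.5]                        # assumed df/dX_j factors
deltas = [delta_adc, 0.05, lsb_to_volts(1)]   # ADC, fuzzy controller, DAC
delta_agc = rss([s * d for s, d in zip(sens, deltas)])

# Type B uncertainty (uniform distribution) and expanded uncertainty:
u_b = delta_agc / math.sqrt(3)
U = 2.0 * u_b                                 # k_p = 2, p = 0.95
print(delta_adc, delta_agc, U)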
In Figure 8, ∆ AGC1 is the limiting error of the AGC structure implemented on a microprocessor (Figure 4), ∆ AGC2 is the limiting error of the AGC structure implemented on a microprocessor with a more accurate amplifier (Figure 4), U is the uncertainty, and V oIV is the value of the input signal. The error ∆ AGC_MULT (Figure 8) for an AGC structure with an analog multiplier is constant at 0.3 V across the whole measurement range. When the gain control structure is implemented in the microprocessor (Figure 4), the error ∆ AGC1 increases as the input value V oIV increases. The error ∆ AGC1 depends on the input value and on the error of the operational amplifier (see Figure 8, ∆ AGC2 , which uses a more precise amplifier). Type B uncertainty is used to estimate the uncertainty associated with the AGC measurement results in indirect measurements. This uncertainty is defined by the relationship u B (∆ AGC ) = √(∑ j (∂f/∂X j ) 2 ·u B 2 (x j )), where u B (∆ AGC ) is the complex uncertainty of the AGC system, f is the measurement function, and X j is the input quantity. The components of uncertainty B with a uniform error distribution are defined by the relationship u B (x j ) = ∆x j /√3, where ∆x j is the limiting error. For example, the uncertainty B of the ADC is computed by u B (ADC) = ∆ ADC /√3 (where ∆ ADC is defined by Equation (4)). In the last step, the expanded uncertainty determining the uncertainty range of the measurement result should be estimated based on the relationship U = k p ·u B (∆ AGC ). In the calculations, k p = 2 was assumed, corresponding to a confidence level of p = 0.95. The determined uncertainty is presented in Figure 8 (waveform U). The maximum value of the complex uncertainty is equal to 0.95. For this reason, the desired range of the AGC was reduced from 5 to 4 V. In the processing of dynamic signals, the actual sampling frequency should be determined. This parameter can be determined from the relationship f s = 1/(t ADC + t fuzzy + t DAC ), where t ADC and t DAC are the processing times of the ADC and the DAC, and t fuzzy is the execution time required for the algorithm to determine the gain. For this reason, it is necessary to optimize the operation of the fuzzy controller and reduce t fuzzy . The DAC converter should have a low settling time, of around 5 µs. Discussion When using a resistive shunt to measure the current, the output signal must be properly conditioned. The AGC with fuzzy logic provides excellent results with high measurement resolution. This system enables the use of all the transitions of the ADC. The SR factor depends only on the input and isolation amplifiers (<2 V/µs). The minimum number of analog components introduces a small settling time (<2 µs), which is important because the system operates in a loop. In the energy meter, when an AGC is used in the current measurement circuit, the loop is performed first, and then the voltage is measured in the parallel circuit (voltage channel); thus, no phase shift is introduced. Current Shunt The selection of the shunt and its resistance value depends on the type of the measured signal, the operating temperature, the current value, and the rated power. In the analyzed case, where the test current did not exceed the value of 50 A, a shunt was selected with a value of 125 µΩ and a power rating of 36 W, with a tolerance of 5%.
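The thermal noise estimate discussed next can be reproduced in a few lines (a sketch assuming the standard Johnson noise form √(4KTRΔf) and room temperature):

import math

K_B = 1.38e-23          # Boltzmann's constant, J/K
T = 25 + 273            # absolute temperature, K
R_SH = 125e-6           # shunt resistance, ohms
BW = 1e6                # assumed measurement bandwidth (f1 - f2), Hz

v_noise = math.sqrt(4 * K_B * T * R_SH * BW)   # RMS thermal noise voltage
print(v_noise)          # ~1.4 nV, far below the 1 uV bound in the text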
The thermal noise was computed using the equation √(4KTR sh (f 1 − f 2 )) (where K is Boltzmann's constant, 1.38 × 10 −23 J/K; T is the absolute temperature (°C + 273); and (f 1 − f 2 ) is the bandwidth in hertz) and does not exceed 1 µV. First Stage of Amplification Special zero-drift amplifiers were used to amplify the output signal from the shunt. This block is a differential-input to differential-output stage. Because a current of 1 A flowing through the 125 µΩ shunt generates a voltage of only 125 µV, the amplifier should have a much lower offset voltage V os . The settling time of the amplifier should be small, t s < 2 µs, which influences the correct operation of the AGC gain loop. The common mode rejection ratio (CMRR) of this structure depends on the gain factor (10 V/V), and the isolation amplifier improves the attenuation of common-mode signals. Common-mode voltages appear in circuits with power inverters loaded with electric motors. Therefore, the measuring circuit should be designed accordingly; this means that a precise reference voltage should be used. Automatic Gain Control Using a PGA Amplifier The main problem with PGA systems is the constant gain with step changes, which does not allow the input signal to be amplified to the desired signal level. As a result, the input range of the ADC is not fully used, and the accuracy is reduced. This limits the ability to measure the minimum current of 1 A with high resolution. PGA amplifiers should be chosen with a rail-to-rail input range. The error in these systems is determined by the following relationship: system gain error = reference tolerance + gain error. Fuzzy Automatic Gain Control Fuzzy controllers have many advantages in terms of their potential use in AGC systems. Their most important advantages are easy construction and implementation in microprocessor systems. Modifying their operation only requires setting the parameters of a rule or changing the parameters of the controller. Their main disadvantage is the speed of operation, which can be mitigated by optimizing the number and location of rule entries and by proper processor selection. The accuracy of the controller was computed as having an error level of 0.05, but the error increased in the real system to 0.12. Measurements using the fuzzy AGC in power inverters loaded by electric motors allowed for precise monitoring of values ≥1 A from load sources other than the electric drive. Conclusions In this research, a new fuzzy automatic gain control method was proposed that increases the resolution of current measurement in power inverters loaded by electric motors and other electric devices. The AGC method uses the full dynamic range of the ADC working at the end of the current circuit. For safety reasons, the dynamic range was lowered to 80% of the reference. The system determines the gain using a fuzzy controller. The intelligent system relies on expert knowledge, given via rules. The use of a PGA circuit allows the gain to be switched based on conditional instructions using a mathematical model; in a real-time system, this task is not easy to solve quickly. The AGC system was tested experimentally in a microprocessor. Laboratory and simulation tests were conducted; the results are presented in Table 2. The maximum signal value converted by the ADC was 3276 (4 V). When a PGA was used to measure 1 A in the structure shown in Figure 4, there were 655 unused transitions out of 3276. This implies a loss in the converted signal accuracy.
The proposed AGC based on a fuzzy controller had only two unused transitions (the result obtained in the simulation). In the real system, due to errors, the result increased to 20. These results prove the usefulness of the proposed system. Moreover, the fuzzy error was minimized, which improved the accuracy of the AGC structure. This concept can be considered an extension of current measurement systems. Designing such a system requires acquiring knowledge that describes its operation; most often, the experience of process operators who can achieve the control aim is used. Due to the computational effort, the system works correctly for two inputs. The selection of experts is the biggest problem in AGC design. In the future, the aim is to use a DSP processor in the AGC system. Conflicts of Interest: The author declares no conflict of interest. Table A1 presents the most important parameters of the systems shown in Figure A1. Table A1. List of elements from the AGC circuit in Figure A1: DAC 2 : 12-bit; MCU 3 ADC: 10-bit SAR; MCU 2 ADC: 12-bit SAR.
Smart Data as a Service Nowadays, smart data emerge as a new research direction to create value from business data in an intelligent way. Smart data are defined as data gathered and processed that can be used to create new insights for smart solutions to support business strategies. This paper aims at proposing a conceptual model for smart data management. In other words, the model can be used for designing a smart service system, based on the perspective of service science, that can manage and deliver smart data as a service. Introduction Nowadays, the new development of big data, business analytics, and artificial intelligence has fundamentally changed traditional business processes [1,2]. Enterprises are under pressure to innovate and create unique and exceptional competitive advantages. One of the most important challenges faced by enterprises is how to create value from business data, especially big data [3,4]. Smart data are defined as data gathered and processed that can be used to generate new insights for smart solutions to support business strategies [5,6]. This paper aims at expanding knowledge regarding the management of smart data in today's business landscape, in order to develop new intelligence from smart data and solutions in the era of big data and artificial intelligence. Smart solutions, which are built on smart data and intelligent systems and services, have the capacity for self-detection and self-adaptation to users' needs without their explicit requests [5,7]. Big data, business analytics, the Internet of Things, and cloud computing provide a huge source of knowledge that needs to be transformed into smart data, to determine user contexts, and then to enable the intelligence capabilities of smart solutions [5,8]. However, there is still little focus on how to transform big data into a higher level of data that can be used for smart solutions [6,9]. For this reason, this paper aims at proposing a conceptual model for designing a smart service system which can manage and deliver smart data as a service. Based on the service science perspective, smart data management is an emerging research direction that concerns the management, the science, and the engineering of smart data [10][11][12]. The paper is structured as follows. Section 2 continues with the principles of smart data. Section 3 presents actionable insights and the challenges of the transformation from smart data to actionable insights. Section 4 proposes the conceptual model for smart data management, and Section 5 ends with the conclusion and future work. Smart data Enterprises are overwhelmed with big data; however, big data are not important if no insights are extracted [12,13]. In a sense, smart data mean the "right data" that can reveal insights and make these insights actionable. Smart data are the combination of big data and deep data [8,14]. Deep data can provide the context and calibration of a researched phenomenon, which is a challenge for big data [12,14]. To put it another way, big data are perceived as smart data when they can generate meaningful insights and create value [5]. Analytics transforms meaningless numbers into actionable insights. From the perspective of analytic techniques, the patterns and insights of smart data are extracted by intelligent algorithms [6]. With the support of state-of-the-art analytics, smart data can be generated at the point of data collection [8]. Therefore, enterprises can reduce the costs of data storage [3].
The real-time attribute of smart data also leverages the value of big data through various decision supports [4,14]. Building on these reflections, this study defines smart data as a subset of big data that can provide actionable insights through the process of analytics [5,6,12]. In terms of significance, smart data respond to the challenges of big data that most enterprises encounter [6]. In particular, smart data deal with the problems of data overload and data quality due to the characteristics of big data, such as huge volume, velocity, veracity, and variety [9,10]. In accordance with this view, the study of García, Ramírez-Gallego, Luengo, Benítez and Herrera [11] defines smart data as an important step of data preprocessing to provide a smart dataset in a timely and accurate manner. Accordingly, adopting smart data helps enterprises determine the most current and relevant data sources, as not all data sources are equal [15]. In fact, focusing on the right data sources can improve the quality of data. From smart data to actionable insights As discussed in the previous part, smart data outperform big data as they provide actionable insights [5,6,12]. Actionable insights are defined as meaningful findings obtained through the process of data analytics. Enterprises rely on actionable insights for data-driven decisions [18]. With the support of actionable insights, enterprises are enlightened about the actions that need to be taken in dealing with complex business situations [6,19]. Not all insights are actionable [6]. Insights are actionable in the sense that enterprises can draw conclusions and take actions upon business situations [18,20]. In fact, about 70% of enterprises struggle with taking data analytics to the next step for action plans [21]. In other words, actionable insights bridge the gap between data and business value [22]. This motivates the need to clarify the factors that support actionable insights. Firstly, actionable insights should be aligned with the goals and strategies of an enterprise to drive actions [23]. Secondly, insights are actionable when they are aware of the context or circumstances of service providers (e.g., organizational culture, strategies, capacities) and customers (e.g., time, location, preference, etc.) [18,24]. Accordingly, a service solution can be recommended to the right customer at the right time in the right setting [1,10]. Finally, actionable insights should be specific, clear, critical, and innovative, so that decision-makers are stimulated to act upon them [25,26]. These characteristics of actionable insights make enterprises comprehensively understand an insight, its importance, priority, and feasibility [27]. The literature points out that insights are not well manifested in models or business rules/processes [19,28]. Actionable insights need to be visualized through digital dashboards or models to support the decision-making process [20,29]. Conceptual model for smart data management This section presents a conceptual model for smart data management that sets the foundation for designing a smart service system from the perspective of service science. A smart service system is a context-aware service system, which can dynamically adapt to a context and support the decision-making process for a specific business situation [32]. This perspective comprises three elements: science, management, and engineering [30]. Figure 1 illustrates the conceptual model for smart data management, including the engineering, science, and management elements.
Engineering element. The engineering element, which aims at capturing different types of data, covers the invention of new technologies to obtain big data and deep data from different data sources and transform them into useful data stored in database management systems [30]. The new sources of data and techniques for data capturing can improve the quality of business services and create new innovative services related to smart data. This element includes components such as data loading, data ingestion, and real-time processing components, which process different data sources to support the data collection, provision, and distribution model [31]. Science element. The science element, which focuses on organizing data into useful information, deals with the structure of service systems and facilitates the process of service creation and the application of competencies [30]. A knowledge structure is defined as an interrelated collection of concepts of a domain, relationships between concepts, and relationships between concepts and a data source. In our approach, concepts are defined by different knowledge components, such as know-what, know-how, know-why, know-where, know-when, know-who, and know-with [31,32]. The data analytics and data organization components help to discover new types of data and knowledge, to link data sources with relevant concepts, and to determine relationships between concepts. Management element. The management element, which aims at transforming useful information into actionable insights, concerns methods and techniques to improve services related to smart data through effective management [30]. The objectives of this element focus on control, discovery, collaboration, learning, and decision support based on actionable insights [31]. Smart data as a service provides a service to a decision-maker based on a particular business situation. The context recognizing and context reasoning components help the business decision component to determine the context of the corresponding business situation. A context is defined as "a stakeholder (know-who) performs actions (know-how) on objects (know-what) at a certain time (know-when) in a location (know-where) because of a contract (know-with) to be consistent with a business rule (know-why)" [32]. Thus, the business decision component may provide the possible solutions and recommend a specific solution, based on business intelligence and analytics techniques performed on actionable insights and related data sources. Conclusion This paper proposes a conceptual model for designing a smart service system, based on the perspective of service science, that can manage and deliver smart data as a service. It is believed that this study is one of the first that focuses on supporting smart data management from the service science perspective. Concerning the implications of our work for practice, the proposed model sets a strong foundation for change management on smart data and for organizational adaptation of business structures and systems to support smart data. In fact, smart data also create the need for managerial, organizational, and technological changes corresponding to the management, science, and engineering elements of the model [10][11][12]. The management changes focus on developing business strategies to offer context-aware smart services [16,17]. The organizational changes address the significance of organizational culture, structure, business processes, and leadership for smart data management [5,7].
The technological changes emphasize the need for automation tools for collecting and transforming big data and deep data for smart data capture [8,10]. In summary, the ultimate importance of smart data lies in transforming enterprises that struggle with data into data-driven enterprises with smart solutions [3,4]. Concerning the implications of our work for research, the proposed model can be a starting point for studies on smart data management and its application in business. We are currently developing the framework for smart data management based on our previous work [32], informed by the service science perspective [33].
Recognizing substrings of LR(k) languages in linear time LR parsing techniques have long been studied as efficient and powerful methods for processing context-free languages. A linear time algorithm for recognizing languages representable by LR(k) grammars has long been known. Recognizing substrings of a context-free language is at least as hard as recognizing full strings of the language, as the latter problem easily reduces to the former. In this paper we present a linear time algorithm for recognizing substrings of LR(k) languages, thus showing that the substring recognition problem for these languages is no harder than the full string recognition problem. An interesting data structure, the Forest Structured Stack, allows the algorithm to track all possible parses of a substring without losing the efficiency of the original LR parser. We present the algorithm, prove its correctness, analyze its complexity, and mention several applications that have been constructed. 1 Introduction The reduction mentioned above constructs a grammar G' from the grammar G by adding the rule S' → $S$, where S is the start symbol of grammar G, and '$' is a new terminal symbol, not in the original alphabet of grammar G. The non-terminal S' becomes the new start symbol of grammar G'. From the input string x we construct w = $x$. The output of the reduction is the pair (G', w). This reduction is constructible in constant time and space. The details of this reduction's correctness proof are omitted, and may be easily filled in by the reader. Also, it can be shown that the set of all substrings of a CFL is itself a CFL. Since the set of CFLs is exactly the set of languages accepted by non-deterministic pushdown automata (NPDAs), one easy way to show this is by constructing an NPDA that accepts all substrings of the language of a given context-free grammar. The NPDA constructed for accepting the language of a given context-free grammar (in Greibach normal form) in [HU79] (page 116) can easily be modified to accept all substrings of the language. Thus, the general problem of recognizing substrings is not any harder than that of recognizing full strings. However, the set of all substrings of an LR(k) language is not necessarily itself an LR(k) language; therefore, a linear time bound for recognizing substrings of LR(k) languages is not trivial. In this paper we show that the substring recognition problem for LR(k) grammars is not any harder than the full-string recognition problem. We present an algorithm for the LR(k) substring recognition problem that runs in linear time, similar to that of the original LR parsing algorithm [AU72]. While previous substring parsing algorithms such as Cormack's [Cor89] modified the LR parsing tables to accommodate substring recognition, our algorithm modifies the parsing algorithm itself, while leaving the original LR parsing tables intact. We introduce a data structure, the Forest Structured Stack (FSS), that keeps track of all possible parses of the substring, while preserving the efficiency of the original LR parsing algorithm. The SLR, canonical LR(1) and LALR parser variants differ only in the algorithms that produce the parsing tables from the grammar, and share a common LR parsing algorithm that is controlled by these tables. Since our substring algorithm replaces this run-time parsing algorithm while using the parsing tables "as is", it is equally applicable to all of the above LR variants.
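A minimal executable rendering of this reduction (with a toy rule-list representation of a grammar; the names are illustrative, not from the paper):

def reduce_fullstring_to_substring(grammar_rules, start_symbol, x):
    # Build (G', w): add the rule S' -> $ S $ and wrap the input in '$'.
    # x is in L(G) iff w = $x$ occurs as a substring of a string in L(G').
    new_start = start_symbol + "'"
    g_prime = [(new_start, ["$", start_symbol, "$"])] + list(grammar_rules)
    w = "$" + x + "$"
    return g_prime, new_start, w

rules = [("S", ["a", "S", "b"]), ("S", [])]    # toy grammar for a^n b^n
print(reduce_fullstring_to_substring(rules, "S", "aabb"))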
The parsing algorithm for canonical LR(k) grammars (k ≥ 2) differs slightly from the other variants, in order to account for the extended lookahead into the input. Thus, a slightly different version of our substring algorithm handles canonical LR(k) grammars. Section 2 describes the FSS data structure and presents the substring recognition algorithm for LR(1) grammars. In section 3 we prove the correctness of the algorithm. Section 4 analyzes the time complexity of our algorithm. An amortized analysis is used to prove that the algorithm does indeed run in linear time. Section 5 extends the algorithm to the general LR(k) case. Finally, some applications of the algorithm and our conclusions are presented in section 6. The Algorithm In this section we present our fundamental substring recognition algorithm, appropriate for SLR, canonical LR(1) and LALR parsing tables. These LR parsing variants assume that only the single next input symbol is available to the parser at any point (no further lookaheads). The slightly modified algorithm for canonical LR(k) grammars (k ≥ 2) is presented in section 5. The substring recognition algorithm we describe in this section is denoted by SSR. It is a variation of the conventional LR parsing algorithm, denoted by LRP. 2.1 The Forest Structured Stack The Forest Structured Stack (FSS) is a graph, consisting of a set of trees, representing a possibly infinite set of stacks of LRP. The nodes of the graph are labeled by states of the LR machine. The edges that connect the state nodes are labeled by grammar symbols. Each path from a root to a leaf corresponds to the top portion of an LRP stack, in which the node at the root of the path represents the state at the top of the stack. The algorithm simulates the behavior of LRP on all the stacks represented in the FSS, adding nodes in correspondence with actions that push items on the stack (shifts), and removing nodes in correspondence with stack reductions. The tree representation avoids the duplication of stacks which have an identical top part but which differ in content deeper down. An Informal Description of the Algorithm The idea behind SSR is to effectively simulate the behavior of LRP on all possible strings of which the input is a suffix. When parsing a string w, of which our input string x = x 1 x 2 ···x n is a suffix, LRP is in some state (at the top of the stack) upon shifting x 1 , the first symbol of x. We are interested in all such states, and thus we initialize SSR by building an FSS with a distinct single node tree for each state that can be the result of shifting x 1 according to the pre-compiled action table. Since each single node tree represents all stacks with that state at the top, the initial FSS represents the set of all possible stacks after the shifting of x 1 . From here on we continue the parsing of x according to each of the FSS trees. SSR performs a series of alternating Reduce and Shift phases, one pair of phases for each input symbol. During a Reduce Phase, reductions are performed on all trees whose top state indicates that a reduction is to be performed. In LR parsing, reductions remove nodes from the stack. When performed on a tree, they are done on all paths in the tree, starting at the root, to a depth corresponding to the number of symbols on the right-hand side of the rule being reduced. Reductions are a problem only when they wish to remove nodes deeper than the length of some path in the FSS.
This corresponds to a reduction that includes symbols derived from parsing the part of the full string that is prior to x. In our algorithm, we refer to such reductions as long reductions, and treat them in a manner somewhat similar to our initialization. A reduction normally removes the right-hand side of the rule being reduced, and then shifts the non-terminal symbol A of the left-hand side of the rule. The new state at the top of the stack is determined from the goto table, and depends on A and on the state revealed at the top of the stack by the reduction. With long reductions, since only a partial stack exists, this state is not known. Our algorithm determines all such possible states by a lookup in the long reduction goto table. This supplemental table specifies, for each possible reduction from a state at the top of the stack, the set of states that may be reached as a result of the shifting of the left-hand side non-terminal of the rule being reduced. The table is easily constructed from the parsing tables prior to run-time. Each of the determined goto states corresponds to at least one full string, the parsing of which would have resulted in that state being at the stack top at this point in the parsing process. It is sufficient at this point to add these states to the FSS as single node trees. Long reductions are performed at most once per state in a Reduce Phase, since a second long reduction from the same top state would produce the same new trees, and thus would be redundant. When the action defined by the table on the root node of a tree is error, the entire tree is discarded. These are trees that correspond to prefix strings of x that cannot be completed to strings in the language. A Reduce Phase terminates when the action indicated by the table, on each of the tree root nodes, is to shift the next input symbol. All the shift operations are done in the consequent Shift Phase of the algorithm. Upon reaching the end of the input x, if the FSS is not empty, we can safely assume that there exists a prefix string y such that the parsing of the string yx by the LR parser would not have caused a parsing error by this point. Properties of LRP guarantee the existence of a suffix z, such that w = yxz is accepted. Thus x is confirmed to be a valid substring. To increase the efficiency of the algorithm, two operations, SUBSUME and CONTRACT, are performed on the FSS structure at appropriate times. When a single node tree is added to the FSS, and the state of the node is identical to that of some other tree root node in the FSS, the larger tree may be deleted from the FSS, since the single node tree represents all stacks of LRP that have that particular state at the top of the stack. This set of stacks necessarily includes all stacks that were represented by the larger tree rooted at a node of the same state. The SUBSUME operation detects such conditions and deletes the larger tree. Long reductions frequently create single node trees that subsume other trees in the FSS. The CONTRACT operation merges two trees, the roots of which are of the same state, returning a single tree as a result. The merging is done recursively down the two trees, to ensure that no immediate sibling nodes in the FSS are labeled by the same state. This in turn guarantees that at all times, the branching degree of every node in the FSS is bounded by the number of states in the parsing table, a property essential for maintaining a linear bound on the running time of the algorithm.
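An illustrative sketch of the FSS trees and the SUBSUME rule just described (edge labels with grammar symbols are omitted; the representation is assumed, not the paper's code):

class FSSNode:
    def __init__(self, state):
        self.state = state      # LR state; a root node is a stack top
        self.children = []      # deeper stack contents, shared among stacks

def subsume(roots, new_state):
    # A new singleton root with state s replaces any larger tree rooted at
    # s, since the singleton represents every stack with s on top.
    kept = [r for r in roots if r.state != new_state]
    kept.append(FSSNode(new_state))
    return kept

root7, root3 = FSSNode(7), FSSNode(3)
root7.children.append(FSSNode(4))       # one tree: stacks with top 7, then 4
roots = subsume([root7, root3], 7)      # singleton 7 subsumes the larger tree
print([(r.state, len(r.children)) for r in roots])   # [(3, 0), (7, 0)]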
Two trees may end up having the same top state as a result of either a shift operation or a reduction. In the shift case, since prior to the shift the trees necessarily had different top states, they may be simply merged at the top node level, and no deeper tree contraction is needed. However, in the case of a reduction, if the result of the reduction is a top state which is the same as that of another existing tree in the FSS, a full CONTRACT operation is performed. The RECLAIM operation is responsible for freeing the dynamically allocated storage for those nodes and trees that are discarded in the course of the algorithm.

A Formal Description of the Algorithm

We next present a more formal description of algorithm SSR in a pseudo "high-level" language. We use the following notation:

• Nodes of the FSS are represented as structures with two fields: a state field containing the parser state, and an action field containing the next parser action to be done upon processing the node.
• STATES is the set of all parser states (according to the parsing table).
• ROOTS is the set of nodes that are roots of trees in the FSS.
• NEW-ROOTS is a temporary set of new roots.
• EOS is the token representing the end of the input string.
• get_next_sym(x) is a function returning the next input token x.

    if there exists a node n* in ROOTS with n*.state = s
        then SUBSUME(n, n*)
        else add node n to ROOTS with n.action = ACT(s, x);
    end;
    end;
    mark state ts for long reduction;
    end;

2.4.2 CONTRACT

CONTRACT merges two trees that have root nodes of the same state into a single tree.

    CONTRACT(n1, n2)
        if n1 is a singleton node then RECLAIM(n2) and return n1;
        else if n2 is a singleton node then RECLAIM(n1) and return n2;
        else for each child c2 of n2 do:
            if n1 has a child c1 with c1.state = c2.state
                then CONTRACT(c1, c2) and replace c1 with the resulting tree
                else add c2 as a new child of n1;
        end;

SUBSUME

SUBSUME replaces a tree rooted at a node n with a singleton new node that has the same state.

RECLAIM

RECLAIM deletes all nodes of the tree rooted at a given node n from the Forest Structured Stack.

    RECLAIM(n)
        for all children nodes c of n do RECLAIM(c);
        delete node n;

An Example

To further clarify how the algorithm works, we present a simple example. Figure 1 contains a simple arithmetic expression grammar, taken from [ASU86] (page 218). Table 1 contains the SLR parsing table for this grammar, and Table 2 shows the long reduction goto table for this parsing table. For each state, the long reduction goto table contains the list of states into which the parser may shift after a reduction from that state.

Table 1: SLR parsing table for the grammar in Figure 1 (sh = shift, r = reduce, acc = accept; the last three columns are the gotos on E, T and F):

    State | id    +     *     (     )     $    | E   T   F
    0     | sh5               sh4              | 1   2   3
    1     |       sh6                     acc  |
    2     |       r2    sh7         r2    r2   |
    3     |       r4    r4          r4    r4   |
    4     | sh5               sh4              | 8   2   3
    5     |       r6    r6          r6    r6   |
    6     | sh5               sh4              |     9   3
    7     | sh5               sh4              |         10
    8     |       sh6               sh11       |
    9     |       r1    sh7         r1    r1   |
    10    |       r3    r3          r3    r3   |
    11    |       r5    r5          r5    r5   |

Table 2: Long reduction goto table (states without a reduce action have no entries):

    Top state | Goto states after reduction
    0         |
    1         |
    2         | 1, 8
    3         | 2, 9
    4         |
    5         | 3, 10
    6         |
    7         |
    8         |
    9         | 1, 8
    10        | 2, 9
    11        | 3, 10

Figure 2 (panels a through f) traces the evolution of the FSS during the execution of the algorithm on a sample input, ending with the configuration of Figure 2f. This completes the Shift Phase. The consequent termination test discovers that we have reached the end of the input. Since the FSS is not empty, the input is a valid substring (of an arithmetic expression in the language of our grammar), and the algorithm terminates. Note that due to the simplicity of the chosen example, no CONTRACT or SUBSUME operations occurred in the execution outlined above.
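To make the construction of the long reduction goto table concrete, the short runnable sketch below derives Table 2 from the goto and reduce entries of Table 1. The dictionary encodings, the function name and the rule numbering (1: E->E+T, 2: E->T, 3: T->T*F, 4: T->F, 5: F->(E), 6: F->id) are our own illustrative assumptions, not the paper's data structures.

    # Left-hand side non-terminal of each grammar rule.
    LHS = {1: 'E', 2: 'E', 3: 'T', 4: 'T', 5: 'F', 6: 'F'}

    # Non-empty goto rows of Table 1: GOTO[state][non-terminal] = state.
    GOTO = {0: {'E': 1, 'T': 2, 'F': 3},
            4: {'E': 8, 'T': 2, 'F': 3},
            6: {'T': 9, 'F': 3},
            7: {'F': 10}}

    # Rules by which the parser may reduce in each reducing state.
    REDUCE = {2: {2}, 3: {4}, 5: {6}, 9: {1}, 10: {3}, 11: {5}}

    def long_reduction_goto(goto, reduce_rules, lhs):
        """For each reducing top state, collect every state reachable by
        shifting the left-hand side of one of its rules from any state."""
        table = {}
        for state, rules in reduce_rules.items():
            targets = set()
            for r in rules:
                for row in goto.values():
                    if lhs[r] in row:
                        targets.add(row[lhs[r]])
            table[state] = sorted(targets)
        return table

    print(long_reduction_goto(GOTO, REDUCE, LHS))

Running this prints {2: [1, 8], 3: [2, 9], 5: [3, 10], 9: [1, 8], 10: [2, 9], 11: [3, 10]}, matching Table 2.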
Correctness

We now prove the correctness of SSR. The reader is referred to Aho and Ullman [AU72] for a comprehensive proof of correctness of the original LR parsing algorithm LRP. In our proof, we rely on the correctness of LRP, namely that for an LR grammar G, given an input string x, LRP accepts x if and only if x ∈ L(G). We will therefore aim to prove the following theorem: let G be an LR(1) grammar and x be an input string. SSR accepts x if and only if there exist strings y, z such that w = y·x·z is accepted by LRP.

We show that SSR simulates the parsing of x by LRP for all possible prefix strings y. If upon shifting x_n, the last input symbol of x, SSR has not rejected x, there exists at least one such prefix string y for which LRP has not rejected the input y·x after the shifting of x_n. The existence of a suffix string z, for which w = y·x·z is accepted by LRP, is assured by the fact that LR parsers reject inputs as early as possible [AU72]. We now provide a formalization of the above outline.

Definition 1: A stack configuration c is a triple (s, x, i), where s = [st_1, st_2, ..., st_k] is a stack of states (with st_k at the top), x is the input string of length n, and 0 ≤ i ≤ n is a position within the input string.

The set of stack configurations represented at any point of SSR includes a configuration for each path from a root node to a leaf in the FSS. The LRP stack configurations are those particular configurations that correspond to stacks manipulated by LRP. To formalize the effect of the parsing operations of algorithm SSR on the FSS, we define the function next, from stack configurations to sets of stack configurations. In the case of a shift or a normal reduction, next(c) is a set containing the single resulting new configuration. In the case of a long reduction, next(c) is the set of all stack configurations consisting of single-state stacks, the states of which one can reach after shifting the left-hand side non-terminal of the rule being reduced, as determined by the long reduction goto table. If the action is accept or the end of string is reached, we define next(c) = {c}, and if it is reject (a parse error), then next(c) = ∅. Note that if c = (s, x, j) is an LRP stack configuration, then for some k, step(x, k) = (s, j), the action cannot be a long reduction, and therefore next(c) contains a single LRP stack configuration c', where c' = step(x, k + 1).

To formalize the effect of the Reduce and Shift phases, we define the extension of next to sets of stack configurations in the following way.

Definition 5: Let C = C_1 ∪ C_2 be a set of stack configurations such that C_1 contains exactly the stack configurations of C whose top state indicates that the next action is a reduction, and C_2 is the rest of C. Then next(C) is obtained by applying next to the configurations of C_1 while leaving those of C_2 unchanged; when C_1 is empty, next is applied to every configuration of C_2. Thus, reductions have precedence over other actions. Based on this extended definition of next we define, for every n > 0, the function next^n, which is the result of n successive applications of next. Note that a Reduce Phase corresponds to some finite number of applications of next and that a Shift Phase corresponds to a single application of next. Also note that, again, if c = (s, x, j) is an LRP stack configuration, then for some k, step(x, k) = (s, j), the action taken on any of the n following parsing steps cannot be a long reduction, and therefore, for any n > 0, next^n({c}) contains the single LRP stack configuration c', where c' = step(x, k + n).

Lemma 2 (The Simulation Lemma): Let C be a set of stack configurations. Then M(next(C)) = next(M(C)).

Proof: Let c = (s, x, i), and suppose the action on c is a long reduction (the shift and normal-reduction cases being immediate). Assume c' ∈ next(M(c)).
Since the action on c is a reduction, c' must be of the form (r·[st], yx, |y| + i), where the state st is the result of shifting the left-hand side non-terminal at the end of the reduction. From its definition, the long reduction goto table includes state st as a possible result of the long reduction on c. Thus, ([st], x, i) ∈ next(c), and by the definition of M, c' ∈ M(next(c)). □

We generalize Lemma 2 to any finite number of applications of next.

Lemma 3 (The Generalized Simulation Lemma): Let C be a set of stack configurations. For every n ≥ 1, M(next^n(C)) = next^n(M(C)).

Proof: By a straightforward induction on n using Lemma 2. Let C' = next^{m-1}(C). By the induction hypothesis we have that M(C') = M(next^{m-1}(C)) = next^{m-1}(M(C)). The following set of equalities completes the proof of our claim:

    M(next^m(C)) = M(next(next^{m-1}(C)))   (by definition of next)
                 = M(next(C'))              (by definition of C')
                 = next(M(C'))              (by Lemma 2)
                 = next(next^{m-1}(M(C)))   (by the induction hypothesis)
                 = next^m(M(C))             (by definition of next)

This completes the proof of Lemma 3. □

Lemma 4: For every i, 0 ≤ i ≤ n, let C_i denote the set of stack configurations represented by the FSS after the ith Shift Phase. Then the sets C_i are sound (if C_i ≠ ∅ then M(C_i) ≠ ∅) and complete (M(C_i) contains every stack configuration that LRP can reach, for some prefix y, upon shifting x_i).

Proof: By induction on i. C_1 has both properties due to the way it is constructed. The induction step is proven by the following arguments. Since the next function is a formal modeling of the Reduce and Shift phases of the algorithm (excluding the process of possibly discarding some configurations by SUBSUME and CONTRACT operations), it follows that for some n, C_i ⊆ next^n(C_{i-1}) (with the "missing" configurations being those discarded by the SUBSUME and CONTRACT operations), and since SUBSUME and CONTRACT have no effect on the set of configurations represented by M, M(C_i) = M(next^n(C_{i-1})). The next function has the property that if M(c) ≠ ∅ and next(c) ≠ ∅, then M(next(c)) ≠ ∅, which extends to next^n and thus guarantees soundness. By Lemma 3, M(C_i) = M(next^n(C_{i-1})) = next^n(M(C_{i-1})), which guarantees completeness. □

Corollary 1: If C_n is the set of stack configurations represented by the FSS after the nth Shift Phase, where n = |x|, then C_n ≠ ∅ iff there exists an LRP configuration c' = (s', yx, |y| + |x|). Note that the existence of such an LRP configuration c' implies the existence of a string w' = yx, such that w' is not rejected by LRP by the time x_n was shifted. The soundness property of Lemma 4 guarantees that if C_n ≠ ∅, such an LRP stack configuration c' exists. The completeness property guarantees that if such a configuration c' exists, then C_n ≠ ∅.

We may now proceed to proving the main theorem.

Theorem 1: Let G be an LR grammar, and x be a given input string. Algorithm SSR accepts x if and only if there exist strings y, z such that w = y·x·z is accepted by algorithm LRP.

Proof: 1. If: Since there exist strings y and z such that w = y·x·z is accepted by algorithm LRP, the string w' = y·x is not rejected by LRP up to the point of the shifting of x_n (where n = |x|). Thus, from the above corollary it follows that the FSS of algorithm SSR is not empty upon entering the nth TERM stage, and x will be accepted by SSR.

Complexity Analysis for Grammars Free of Epsilon Rules

We now prove that SSR runs in linear time for grammars free of epsilon rules. In the next subsection we will demonstrate that SSR maintains a linear running time even in the presence of such rules. After the initialization of the FSS, the algorithm enters a loop that consists of a termination test for end of input, examining the next input symbol, a Reduce Phase and a Shift Phase. This loop can be executed up to n - 1 times, until the end of string is reached.
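For orientation, here is a minimal Python skeleton of this top-level loop. It is a sketch, not the authors' code: init_fss, reduce_phase and shift_phase are assumed to encapsulate the corresponding stages described in section 2 (including the discarding of error trees), and they are passed in as parameters to keep the skeleton self-contained.

    def ssr(tokens, init_fss, reduce_phase, shift_phase, eos=None):
        """Skeleton of SSR's main loop (illustrative only; assumes a
        non-empty token list).

        init_fss(sym) -> initial root set for the first input symbol;
        reduce_phase(roots, sym) -> roots after all reductions for sym;
        shift_phase(roots, sym) -> roots after shifting sym on every tree.
        """
        roots = init_fss(tokens[0])           # INIT: one singleton per state
        for sym in tokens[1:] + [eos]:        # one Reduce/Shift pair per symbol
            if not roots:                     # every tree discarded: reject
                return False
            if sym is eos:                    # TERM: end of input reached
                return True                   # non-empty FSS: valid substring
            roots = reduce_phase(roots, sym)  # Reduce Phase
            roots = shift_phase(roots, sym)   # Shift Phase
        return bool(roots)

A real implementation would instead close over the pre-compiled action, goto and long reduction goto tables.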
The initialization of the FSS that precedes the loop requires only constant time. It involves scanning a column of the LR action table, and the creation of a constant number of root nodes. The termination check also takes constant time. Since there are only a constant number of root nodes (see Lemma 5 below), each Shift Phase involves only a constant number of shift operations and thus takes constant time. However, the time cost of each Reduce Phase is not uniform, and varies from one run through the loop to the next. Each Reduce Phase involves some number of Tree Reductions, which are reductions on all paths of an FSS tree to a constant depth. We will show that each such Tree Reduction is completed in constant time, and then use an amortized cost evaluation to obtain a linear bound on the total number of Tree Reductions. Finally, we will argue that the total time cost of all SUBSUME, CONTRACT and RECLAIM operations is also at most linear in the length of the input. In the following analysis, S denotes the set of states of the parser, and |S| is the size of this set. We distinguish between root nodes of the FSS and internal nodes.

Lemma 5: At all times, the number of root nodes in the FSS is bounded by |S|.

Proof: The claim holds after the initialization of the algorithm, and throughout Reduce and Shift Phases SSR explicitly checks for root nodes of identical state and, when detected, merges the appropriate trees, using SUBSUME and CONTRACT as necessary. □

Lemma 6: The total number of nodes that become internal in the course of execution of the algorithm on a string x of length n is O(n).

Proof: In the case that the grammar is free of epsilon rules, root nodes become internal only as a result of shift operations. Once a node becomes internal, it never again becomes a root node. Thus, the Lemma is a direct result of the fact that the number of root nodes at the start of any Shift Phase is bounded by |S|, and there are at most n Shift Phases. Thus the total number of shift operations is O(n). □

Lemma 7: No node in the FSS ever has more than |S| children.

Proof: Throughout the algorithm, CONTRACT operations are performed whenever necessary so as to maintain this property. □

We now concentrate on analyzing the time complexity of Reduce Phases. A normal reduction on a single path of nodes in the FSS is identical to an LRP reduction, and takes constant time. Long reductions are very similar to normal reductions. However, they involve accessing the long reduction goto table in order to determine the possible states that may result from the shifting of the left-hand side non-terminal of the rule being reduced. This table access is done in constant time. New root nodes are created for the resulting states of this process, and each added new node may require a SUBSUME operation, if there already exists a root node of the same state. This condition can be detected in constant time by a linear scan of the set of root nodes, and need be done only a constant number of times per long reduction, since at most |S| new root nodes may be added. We account for the time spent on the SUBSUME operations separately. Therefore, excluding the time spent on all SUBSUME operations, a long reduction on a single path requires only constant time. Thus, any reduction, normal or long, on a single path requires only constant time. A Reduce Phase reduction in SSR operates on an FSS stack tree, and performs the reduction on all paths in the tree that originate at the root node, to a depth equal to the number of symbols on the right-hand side of the rule being reduced.
Since this is a constant depth, and the fan-out degree of FSS tree nodes is also bounded by a constant, each such Tree Reduction involves only a constant number of reductions (one for each path), each taking constant time. Thus, in order to complete the time analysis of Reduce Phases, we need only demonstrate that O(n) Tree Reductions are performed in the course of the algorithm. For the purpose of the analysis, we separate the rules of our grammar into two groups. Grammar rules with a single symbol on the right-hand side are grouped together as non-generative rules, and their corresponding reductions are referred to as non-generative reductions. All other rules will be called generative rules, and their corresponding reductions generative reductions. We will show that the cost of performing a generative reduction can be charged to internal nodes of the FSS that are discarded by the reduction, and that only a constant number of consecutive non-generative reductions may occur between the generative ones. Thus, the non-generative reductions may be charged to the generative ones, and they in turn can be charged to the nodes.

Lemma 8: In a Reduce Phase of algorithm SSR, only a constant number of consecutive non-generative Tree Reductions may be performed.

Proof: Since long reductions are performed at most once per state in a Reduce Phase, we need only consider the normal reductions. Non-generative reductions do not remove internal nodes from the FSS. By a counting argument it can be seen that after a constant number of such reductions on FSS trees, such a reduction is repeated. If this were to occur, the non-generative rules that correspond to this series of reductions would form a cycle, in contradiction with the fact that any LR grammar must be non-cyclic. □

First we consider the CONTRACT operations. The CONTRACT operation merges two FSS trees that have root nodes of the same state. The contraction itself is done by comparing the states of the children of the first root node with those of the second root node. Lemma 7 guarantees at most |S|^2 comparisons. If a child of the first root node has a state identical to that of a child of the second root node, the two subtrees are contracted by a recursive call to CONTRACT. All other children (and their appropriate subtrees) are added as children of the first root node, and the second root node is deleted. Thus, the top-level CONTRACT operation requires constant time. Note that any recursive call to CONTRACT will necessarily result in the elimination of an internal node. We may thus charge a unit of cost to the node deleted as a result of each recursive call to CONTRACT, and since the node is deleted from the FSS by this operation, it may be charged only once. Since CONTRACT is invoked only after reductions, there are at most O(n) top-level calls to CONTRACT. Lemma 6 guarantees that at most O(n) internal nodes will be charged, therefore implying at most O(n) recursive calls to CONTRACT. This provides us with an O(n) bound on the total time cost of all CONTRACT operations.

Finally, we observe that we have already accounted for the SUBSUME operations. SUBSUME searches for a root node of a state identical to that of a new single-node tree created by a long reduction. This requires constant time. If found, the tree is then reclaimed by the RECLAIM operation, the time for which we have already accounted. This completes the time complexity analysis of our algorithm, under the assumption that the grammar contains no epsilon rules.
Our analysis has shown that the total time cost of all operations in an execution of the algorithm on an input string of length n is O(n).

Extending the Complexity Analysis to Grammars with Epsilon Rules

We now turn to deal with the case in which the grammar contains epsilon rules. Epsilon rules complicate our algorithm due to the fact that root nodes may become internal nodes as a result of a reduction by an epsilon rule. Thus, Lemma 6 must be re-argued; namely, that the total number of root nodes that become internal in the course of an execution of the algorithm continues to be O(n), even in the presence of epsilon reductions. Since epsilon rules have no effect on the Shift Phase of our algorithm, in order for our entire complexity analysis to still carry through, we need only prove that the total number of Tree Reductions is still O(n). Let us note that a grammar may indeed have epsilon rules and still be LR. For example, consider the natural grammar for the language a^n b^n (for n ≥ 0) in Figure 3, which is in fact LR(0). It is convenient to look at epsilon rules as normal grammar rules that generate an "invisible" terminal symbol epsilon. Thus, strings in the language generated by the grammar correspond to modified strings that include the epsilon symbols in the appropriate places. For a non-ambiguous grammar we are guaranteed that this is a one-to-one correspondence (each string in the language corresponds to exactly one string with epsilon symbols).

Lemma 11: An LR grammar has the property that only a constant number of epsilons may appear between two non-epsilon terminal symbols in the modified strings that correspond to strings in the language generated by the grammar. Furthermore, if we denote the length of the longest right-hand side of all grammar rules by L, and the number of grammar rules by ℓ, this constant number of consecutive epsilons is bounded by L^ℓ.

Proof: In order to prove this claim we restrict our attention to E, the subset of grammar rules that may produce a consecutive string of epsilons. It is easy to see that if the rules in E can produce an infinite string of epsilons (starting from any rule in E whose left-hand side non-terminal is reachable), then the grammar is necessarily ambiguous and thus not LR. The fact that E cannot produce an infinite string of epsilons poses several restrictions on the rules in this subset. No rule in E contains a terminal symbol on its right-hand side. Also, no rule in E can be recursive (the left-hand side non-terminal cannot appear on the right-hand side of the rule). Using these properties, by a simple induction on ℓ, the number of rules in E, it can be shown that the number of consecutive epsilons that can be produced by E is bounded by the constant C_e = L^ℓ, where L is the length of the longest right-hand side of the rules in E. □

In order to prove that the total number of Tree Reductions continues to be O(n), it is sufficient for us to show that Lemma 6 still holds.

Lemma 12: The total number of root nodes that become internal nodes in the course of an execution of algorithm SSR on a string x of length n is O(n), even if the grammar has epsilon rules.

Proof: For every i, 0 ≤ i ≤ n, let internal(i) be the total number of nodes that have become internal in the course of the algorithm, up until the completion of the Shift Phase of x_i. We prove by induction on i that for every 0 ≤ i ≤ n, internal(i) ≤ C·i, where C is the constant |S|·(C_e + 1).
For the induction step we have:

    internal(m + 1) ≤ internal(m) + |S|·C_e + |S|
                    ≤ C·m + C          (by the induction hypothesis)
                    = C·(m + 1)

Now, since the total number of nodes that become internal in the course of the execution of algorithm SSR is bounded by internal(n), and internal(n) ≤ C·n, the above total has indeed been shown to be O(n). □

In the process of proving the above lemma, we have in fact shown that only O(n) epsilon reductions may occur in the course of executing SSR on a string x of length n. It thus follows that Lemma 10 continues to hold, and the number of Tree Reductions continues to be O(n), taking into account all three types of tree reductions that now exist: non-generative tree reductions, generative tree reductions, and epsilon-rule tree reductions. Combined with the time analysis of the other operations, which continues to hold as before, we may again conclude a linear time bound on the total running time of algorithm SSR.

The Algorithm for Canonical LR(k) Grammars

In this section we consider the implications of generalizing algorithm SSR to deal with the general case of canonical LR(k) parsing tables. First, let us consider the necessary modifications to the algorithm itself. These turn out to be quite minimal. In fact, only the INIT stage needs to be modified. In the INIT stage, instead of reading just the first symbol of the input string, we must obtain the first k symbols for the lookahead. This is due to the fact that the LR(k) action table is defined according to the k-lookahead on the input. The action table is then searched in order to construct the initial set of root nodes. An obvious complication occurs whenever the length of the input string is less than the needed lookahead (|x| < k). To handle this case, all possible extensions of the input string x to a string y of length k are considered, and the set of root nodes is constructed as the union of the sets derived for all such y. The algorithm will then terminate immediately in the following TERM stage. If the set of root nodes constructed in the INIT stage is not empty, x is accepted; otherwise x is rejected. A sketch of this modified INIT stage is given below. All other stages of the algorithm stay exactly the same as in algorithm SSR, as presented in section 2. In the DISTRIBUTE stage, the actions determined from the LR(k) action table depend on the existing k-lookahead at that particular point in time. In the Shift Phase, the first symbol of the lookahead (the symbol being shifted) is removed from the lookahead and shifted. The get_next_sym function call in the subsequent TERM stage completes the lookahead from length k - 1 to k. The algorithm terminates when the end of string (EOS) is encountered, with k - 1 symbols of the input string still in the lookahead.
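The promised sketch of the modified INIT stage follows. It is a hypothetical Python rendering, not the paper's pseudocode: the action map keyed by (state, lookahead-tuple) pairs and all names are our assumptions, and the bookkeeping that ties each root to the shift of the first input symbol is deliberately elided.

    from itertools import product

    def init_lrk(x, k, states, terminals, action):
        """Build the initial root-state set from the first k symbols of x.

        When |x| < k, every extension of x to length k over the terminal
        alphabet is tried, and the union of the resulting root sets is
        returned, as described above. `action` is assumed to map a pair
        (state, k_lookahead_tuple) to a parser action, or None on error.
        """
        if len(x) >= k:
            lookaheads = [tuple(x[:k])]
        else:
            lookaheads = [tuple(x) + pad
                          for pad in product(terminals, repeat=k - len(x))]
        # A state is kept as a root if some lookahead gives it a legal
        # action (a simplification of the actual INIT bookkeeping).
        return {s for s in states
                if any(action.get((s, la)) is not None for la in lookaheads)}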
Let us now consider what implications (if any) the above modification of algorithm SSR has on its correctness and complexity. The proof of correctness presented in section 3 continues to hold for our modified algorithm. Lemma 4 continues to hold with respect to the appropriate LR(k) version of algorithm SSR. Since the property of rejecting an input at the earliest possible opportunity [AU72] holds for general LR(k) grammars, the proof of the main theorem of correctness continues to hold as well. Finally, let us consider the complexity analysis. It is easily seen that the revised INIT stage still takes only constant time. The set of possible lookahead extensions y is finite and bounded by a constant, thus constructing the initial set of root nodes clearly takes only constant time. The size of this set is still bounded by |S|, the number of states in the LR action table. Since all other stages of algorithm SSR are the same as before, the time complexity analysis of the algorithm remains valid.

Conclusions

We have presented and proved a linear time algorithm for recognizing substrings of LR(k) languages. The original version of this algorithm was initially developed by the first author in 1980. It did not include the CONTRACT operation for merging trees of the FSS. Tree contractions are crucial to retaining a linear bound on the running time of the algorithm. In the process of trying to prove the linear time bound we discovered this deficiency, and the proper modifications were consequently made. The original algorithm, while in fact not always linear, was used as the basis for a syntax-checking modification to the IBM VM/370 editor XEDIT. That modification enabled the IBM editor to check COBOL source code for syntax errors when users modified lines, screens or files. For instance, when the cursor was moved off a modified line, the editor would beep and display an unobtrusive error message if the line was not a substring of any COBOL program. Though COBOL has a large grammar, this modification had no apparent effect on the speed of XEDIT on machines of the early 1980's. The algorithm was also used to check Pascal programs on an IBM PC editor, and this too had no apparent effect on the speed of the editor. Thus, the original algorithm appeared to be adequately fast in practice. We have implemented our revised algorithm and have tried it on several test grammars. No precise measurements have been performed to compare the actual running time of our substring algorithm with that of the original LR parser. However, in practice, the revised implementation continues to run as fast as before.
Understanding Physiological and Degenerative Natural Vision Mechanisms to Define Contrast and Contour Operators

Background

Dynamical systems like neural networks based on lateral inhibition have a large field of applications in image processing, robotics and morphogenesis modeling. In this paper, we will propose some examples of dynamical flows used in image contrasting and contouring.

Methodology

First we present the physiological basis of retinal function by showing the role of lateral inhibition in the generation of optical illusions and pathologic processes. Then, based on these biological considerations about the real vision mechanisms, we study an enhancement method for contrasting medical images, using either a discrete neural network approach, or its continuous version, i.e. a non-isotropic diffusion-reaction partial differential system. Following this, we introduce other continuous operators based on similar biomimetic approaches: a chemotactic contrasting method, a viability contouring algorithm and an attentional focus operator. Then, we introduce the new notion of mixed potential Hamiltonian flows; we compare it with the watershed method and we use it for contouring.

Conclusions

We conclude by showing the utility of these biomimetic methods with some examples of application in medical imaging and computer-assisted surgery.

In the vertebrate retina, cones are hyperpolarized when illuminated by light, but also receive a depolarizing input when receptors some distance away are illuminated. This antagonistic center-surround response is mediated by amacrine and horizontal cells (Figure 1), through a sign-reversing synapse to the cones often called the feedback synapse, the global mechanism being called lateral inhibition [1][2][3]. This surround response is involved in edge enhancement and image contrasting [4][5][6][7][8][9][10][11][12][13][14][15][16], realizing concretely the Mach (boundary brightness overshoot) and the Marr (Laplacian zero-crossing edge enhancement) effects, used in many image processing applications [17]. A number of contrast illusions (Figures 2, 3, 4) have been described [18] based on the lateral inhibition principle. In order to examine how rod and cone functions are differentially affected during retinal degeneration (which abolishes the contrast), many studies have been done at the genetic level, showing that these two cell types have complementary roles during both development and degenerative processes [19][20][21]. For understanding retinal physiology as well as this pathology, many models [22][23][24][25][26][27][28][29][30][31][32][33][34] are now available which try to mimic relevant adaptation behaviours of the human visual system, like lightness/colour constancy and contrast enhancement, corresponding to the ability of the visual system to increase the appearance of large-scale light-dark or inter-colour transitions, similar to how sharpening with an "unsharp mask" increases the appearance of small-scale edges. These models use theoretical developments [35][36][37][38][39][40][41][42][43][44] in dynamical systems, especially the study of their attractors. An attractor represents the ultimate evolution of a dynamical system when time tends to infinity; after perturbations, an attractor recovers its stable dynamical features, like its period and amplitude. That requires a rigorous mathematical framework for defining the continuous flow and its convergence speed to attractors, and, thereafter, its discrete version, i.e.
an iteration process representing the succession of states of the dynamical system. These theoretical advances have permitted the development of fast image processing algorithms used in rapid contrasting methods [45][46][47][48][49][50][51][52][53][54][55][56][57][58] implemented in real-time processors [59][60][61][62][63][64][65][66][67][68], and the development of contouring methods like snakes, snake-splines and d-snakes, which allow a global definition of the boundaries of objects of interest in an image. These algorithms have emphasized the role played by computer-implemented procedures, starting from an initial compact set, e.g. a sphere, and ending at the final shape of the object's contours after a certain number of iterations [69][70][71][72][73][74][75][76][77][78][79][80]. The corresponding flow is a compact set valued flow, the simplest deriving from a potential [81][82][83][84][85][86]. In general, this methodology allows one to rapidly and automatically obtain 3D contours, which is necessary in medical imaging to perform computer-aided medical interventions. If the dynamics are conservative in a neighbourhood of an attractor, the flow becomes Hamiltonian, so we will then define the notion of a mixed potential Hamiltonian flow. This flow gives theoretical support to Waddington's notion of chreod, particularly relevant in embryonic morphogenesis modeling [87][88][89][90][91], but it also serves in image contouring. Using the previously introduced theoretical notions, we study an enhancement method for contrasting medical images, using either a discrete neural network approach, or its continuous version, i.e. a reaction-diffusion partial differential system [92][93][94][95][96][97][98][99]. Indeed, with the goal of providing for rapid and efficient action in precise surgical robotics as well as in disease diagnosis and satellite control imaging, such pre-treatments are performed for contrasting and then contouring images. The medical community, for example, often uses pre-treated anatomical images coming from imaging devices, like MRI or CT scanners, whose pre-processing involves two fundamental steps: contrasting and contouring. Natural vision executes these two tasks, the first one being based on the architecture of the retina, which uses lateral inhibition to reinforce the perception of the contours of homogeneous objects in a scene. Because objects of medical interest are homogeneous with respect to their environment (a tumour or an organ is made of cells coming from the same cellular clone), they are well enhanced by using operators that process as natural vision does. Therefore, we introduce continuous operators generalizing discrete neuromimetic approaches using lateral inhibition, as well as analogs of the Hebbian rule for the evolution of synaptic weights.

Figure 1 (partial caption): [...] [19]. Bottom left: segmentation of cones and rods with a cell deficit in the Left Superior (LS) quadrant [34]. Bottom right: histogram of the intercept distances showing an increase of the inter-cell distance in the Left Superior quadrant with respect to the others, Left Inferior (LI), Right Superior and Inferior (RS & RI) [34]. doi:10.1371/journal.pone.0006010.g001

Results

The results presented in this section involve consecutive phases of contrasting and segmenting in order to identify objects of interest in an image. The important features of a scene are the prey, predators, and sexual partners.
For the detection of these features, the major characteristics are the "phaneres", this word coming from the Greek phaneros: visible. The "phaneres" in animals and plants are prominent visible tegumentary formations like feathers, scales, hair, petals, skin spots and stripes of various forms and colours. The role of the contrasting pre-treatment in the retina is to rapidly enhance the characteristics (luminance, colour and texture) on the boundaries of the homogeneous zones in a scene, in order to improve their perception and extract the features associated with vital functions like nutrition, survival and reproduction. This process can trigger very fast actions (like escaping a predator) after a stimulus of about 150 ms [136]. Such fast sensory-motor loops need a very simple and rapid mechanism, well encoded in the anatomy and physiology of the retina (like the center-surround response of cones and rods [1][2][3]), well before any semantic recognition and denomination of the prey or of the predator. We will first give some results concerning the natural contrasting process, both in a natural and in a simulation context.

Pathologic retina

The lateral inhibition mechanism in the retina is due to the presence of feedback synapses of horizontal cells [1,2], which reverse the sign of the activation coming from the illuminated surrounding cells (Figure 1 top left). Retinal pathologies provoke a progressive death of the rods (as in retinitis pigmentosa), followed by the apoptosis of the cones; the loss of secretion by the rods of a growth factor favouring cone survival then causes the disappearance of the lateral inhibition, hence of the contrasting ability [4,19,20,21]. As shown in the top right and bottom left of Figure 1, on a confocal slice of a sick retina we observe an important loss of both rods and cones in the left superior quadrant. An analysis of the inter-distances among cells in the three other quadrants shows that the mean inter-distance between cones in the peripheral retina (about 20 μm) is better conserved than the corresponding value between rods (about 3 μm), proving the primary rod degeneracy.

Contrast illusions

The perception of artefactual stripes or spots comes from the lateral inhibition effect, which causes a reinforcement (respectively a decline) of brightness in a pixel if its neighbours are black (respectively white). This illusion effect is visible in Figures 2 to 4. In Figure 2 (top left), the Hermann illusion is provoked by the local organization of inhibition and activation between retinal cells, which is described at bottom right. The illusion shows bright squares at the intersection of grey stripes and grey squares at the intersection of white stripes. In Figure 2 (bottom left), the Mach bands illusion gives an enhancement of the vertical lines separating the different grey zones. In Figure 3 (top left), tangential vision (which allows one to escape macular vision) gives the illusion of a bright reinforcement at the extremities and middle of the white stripes. On the top right, a progressive change of the vertical bright stripe into bright spots (in false colours) is observed during the feather morphogenesis process in the chicken, due to a lateral inhibition effect between morphogens (model and simulation are given in [91]). On the bottom left, we can observe bright and grey activities, respectively near the center (vertical black line) and the extremities of the white horizontal diamonds.
For explaining these illusions, we can simulate a very simple threshold formal neural network (cf. infra) made of 7 neurons, with a lateral inhibition mechanism defined by the parameter values h = w_ii = 2, w_{i,i-1} = w_{i,i+1} = -0.5, and a sequential updating from the left to the right hand side. The spots activity appears after 3 iterations as a stable steady configuration, and is the discrete analog of the feature created by simulating the continuous reaction-diffusion operator used for modelling feather morphogenesis [91]. In Figure 4, the sensation of seeing a 3D pyramid is a generalization of the well-known Kanizsa polygon effect. It is due to the artefactual prolongation of the white square extremities as white (respectively black) lines in a black (respectively white) dominant neighbourhood. The illusion effects described above are easy to simulate by computer and can serve as an external efficacy criterion when different contrasting methods are benchmarked.

Contrasting and contouring images

The enhancement of the grey level on its maximal gradient lines (identical to the geometric locus formed by all the points where the mean Gaussian curvature of the grey-level surface vanishes) is due to the retinal processing and causes the sensation of contours. By using an enhancement procedure based on the lateral inhibition effect in a formal neural network receiving as input the grey level of an image, we have obtained a good contrast on the boundaries of homogeneous zones, either on simulated or on real images. Figure 5 (respectively 6) shows the result obtained after applying a contrasting algorithm on an artificial image (respectively on the NMR slice of a brain tumour). The contouring step follows the contrasting one, and we see in Figures 5, 6 and 7 contours of homogeneous (in grey level) zones resulting from a snake-spline procedure (i.e. an external snake-based procedure with the constraint of keeping a closed spline curve at each step) applied to an artificial isolevel square (Figure 5), a brain tumour (Figure 6) and a forest (Figure 7) made of the same species of elements (pixels, cells and trees respectively). The two steps of contrasting and contouring are based on classical algorithms of neural networks [24,31,32] and snake splines [69][70][71][72][73][74][75][76], but they can involve new methods coming from biomimetic procedures. We will briefly describe four such new methodologies and give examples of their application to real satellite or medical images.

1) A chemotactic operator. If we denote, at time t and pixel x, g(x,t) as the grey level function, we can consider g as a food or substrate, which living entities (like bacteria) can eat, being attracted from the image boundaries (where they are first located) by a chemical gradient linked to the substrate. Let us denote the bacterial concentration by b(x,t). We can consider a system of chemotaxis-consumption equations, constituting a new image processing operator [85,142], with Neumann conditions on the image boundary, where |∇g|_max denotes the maximum value of the norm of the g gradient, χ is the attractive chemotactic constant, L_b (respectively L_g) is the diffusion coefficient of the bacterial concentration (respectively grey level), and K (respectively K_e) is the maximal (respectively minimal) grey consumption rate of the bacteria. These equations imply that the bacteria move towards the concentration of grey, considered as a chemo-attractant to consume. They also diffuse, as does the grey level, with the respective diffusion constants L_b and L_g.
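The exact partial differential equations of [85,142] are not reproduced above, so the following is only a plausible numerical sketch of such a chemotaxis-consumption scheme (of Keller-Segel type) using the parameters just listed; the discretization, the boundary handling and all parameter values are illustrative assumptions, not the authors' scheme.

    import numpy as np

    def laplacian(u):
        # 5-point Laplacian with Neumann (reflecting) boundaries
        p = np.pad(u, 1, mode='edge')
        return (p[:-2, 1:-1] + p[2:, 1:-1]
                + p[1:-1, :-2] + p[1:-1, 2:] - 4 * u)

    def chemotactic_step(b, g, L_b=0.2, L_g=0.05, chi=1.0,
                         K=0.5, Ke=0.01, dt=0.1):
        """One explicit step: bacteria b diffuse and drift up the grey
        gradient (normalized by |grad g|_max); grey g diffuses and is
        consumed at a rate between Ke and K depending on b."""
        gx, gy = np.gradient(g)
        norm = max(np.hypot(gx, gy).max(), 1e-9)   # |grad g|_max
        fx, fy = b * gx / norm, b * gy / norm      # chemotactic flux
        div = np.gradient(fx, axis=0) + np.gradient(fy, axis=1)
        b_new = b + dt * (L_b * laplacian(b) - chi * div)
        g_new = g + dt * (L_g * laplacian(g) - (K * b + Ke) * g)
        return np.clip(b_new, 0, None), np.clip(g_new, 0, None)

Iterating chemotactic_step from an initial b concentrated on the image boundary, until the dynamics stabilize, gives the kind of contrasting step that precedes the snake-spline contouring described next.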
Figure 7 (bottom) shows the progressive treatment of the image of a Chilean forest presenting the same characteristics of internal homogeneity as a tumour (the trees replacing the cells), due to the fact that the trees (like cells) belong to the same genetic lineage. After reaching their asymptotic values, the contrasting dynamics implemented in a discrete scheme of the partial differential equations (PDE) (1) stop, and this processing step can be followed by a snake-spline contouring step.

Figure 5 (partial caption): [...] [48]. Top right: temporal evolution of the Difference of Gaussians function, representing an activation near the central neuron i_0 (green links) and an inhibition (red links) farther from i_0 [47]. Bottom right: same processing in grey level, with the initial image on the left and the contrasted one on the right [49]. doi:10.1371/journal.pone.0006010.g005

Figure 7 (partial caption): [...] [83]. Top right: contour of the Suez Canal [83]. Bottom left: image of a Chilean forest. Bottom right: contrasted image using the chemotactic operator and snake-spline contouring [85,142].

2) A viability contouring operator. By minimizing a suitable functional, we obtain a new snake operator [75,85], where K(t) is a compact object of interest moving toward a limit set K(∞), whose external surface S as well as its inner volume V are minimized, allowing a contouring with real gloves (precise contour) in contrast to mittens (convex envelope), as often observed with the Mumford-Kass-Terzopoulos algorithm in Figure 7 [69,70]. We see in Figure 7 (top and bottom right) the contouring done by imposing a bicubic spline on the boundary at each time step [71,72], followed by a 3D spline smoothing. Many other approaches can also be used for controlling active-shape models. This is the case for the level set methods used for computing and analyzing the motion of an interface in two or three dimensions, by modelling the velocity vector field through Euler-Lagrange or Hamilton-Jacobi PDEs [77,78,79,80]. These PDEs can be used to model the segmentation of a moving 3D object (like the heart), giving a particular status to the pixels having a maximal velocity or acceleration of their grey levels. This procedure has been used for segmenting the pericardium [131].

3) A non-isotropic reaction-diffusion operator. If we consider the grey level function g(x,0) as the initial image, we can follow the transient behaviour of the non-linear diffusion operator defined in [93], where G is a Gaussian kernel of fixed variance, with Neumann conditions. Its asymptotes correspond to a constant grey level suppressing the objects of interest inside the image. For that reason, we consider instead a non-isotropic reaction-diffusion operator defined in [93,95,96], where L is a 2×2 matrix and P_∇g is the orthogonal projection matrix associated with the gradient direction ∇g. In these equations, the diffusion constant L becomes variable with the time t, and its evolution equation is similar to the Hebbian rule of a discrete neural network operator. Treated images are obtained at the asymptotic state of the PDE dynamics, as for neural networks [48,49] with lateral inhibition (Figure 6). A comparison done in [96] shows that the asymptotes of this non-isotropic operator are better than for some of the operators described earlier.
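As an illustration of diffusion restricted to the direction orthogonal to a Gaussian-regularized gradient, here is a minimal explicit step in Python. It is a generic edge-tangential scheme in the spirit of the operator above, not the exact operator of [93,95,96]: in particular it keeps the diffusion fixed instead of evolving it by a Hebbian-like rule, and all parameter values are assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def nonisotropic_step(g, sigma=1.5, dt=0.1, eps=1e-9):
        """One explicit step of diffusion along the direction orthogonal
        to grad(G * g), so that boundaries are preserved while the
        interior of homogeneous zones is smoothed."""
        gs = gaussian_filter(g, sigma)        # G * g, fixed-variance kernel
        gx, gy = np.gradient(gs)
        n2 = gx ** 2 + gy ** 2 + eps
        # second derivatives of the raw image
        gxx = np.gradient(np.gradient(g, axis=0), axis=0)
        gyy = np.gradient(np.gradient(g, axis=1), axis=1)
        gxy = np.gradient(np.gradient(g, axis=0), axis=1)
        # second derivative of g in the direction orthogonal to grad(gs)
        g_tt = (gy ** 2 * gxx - 2 * gx * gy * gxy + gx ** 2 * gyy) / n2
        return g + dt * g_tt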
More generally, we can note the following about the other PDE approaches:
a) The application of the pure heat operator [145] quickly leads to a constant grey level.
b) In the Perona-Malik operator [92], the viscosity is different within a region and across its boundary, in order to encourage smoothing inside the region of interest; this operator can be used transiently for this purpose before the non-isotropic reaction-diffusion operator.
c) The Catté-Lions-Morel-Coll algorithm [94] gives a good contrasting during the transient behaviour of the operator, but has the same asymptotes as the pure heat algorithm (even if they are reached more slowly).
d) The non-isotropic reaction-diffusion operator [93,95,96] offers a reasonable asymptotic processing.
e) The Weickert operator [97] permits the completion of interrupted lines or the enhancement of flow-like structures, by choosing the appropriate smoothing direction in anisotropic processes, in the spirit of the Cottet-Germain filter [95].
f) The Tschumperlé-Deriche operator [98,99] allows the regularization of velocity vector fields in 4D imaging (acquired, for example, during the motion of a 3D camera).

4) An attentional focus operator. For focusing on only one region of interest, we have to change the image input to an artificial neural network [56]. This input can be constant [24,31,32], stochastic [47][48][49][50][51][52][53][54] or deterministically periodic [56]. This last coding mimics the information storage inside the hippocampus, in which the functional unit, made of two neurons in mixed inhibition/activation interaction (Figure 8 top left), has an attractor limit cycle. We can locally synchronize, using an evocation stimulus, and desynchronize, by introducing noise on the inter-unit interactions, the periodic activities corresponding to initially non-phase-locked neurons. In this way, we considerably enhance (by forcing the units to add their maximal activities at the same time) the grey level on the zones of local synchronization (Figure 8E bottom right). Then, by thresholding and segmenting, we get the parts of the initial image (Figure 8A top right) on which the attentional focus has been exerted (Figure 8D, E, F top right).

Interest of the biomimetic approach

The biomimetic approach used in the numerous methods presented in this paper, especially for the contrasting phase, exploits the efficiency of visual data processing procedures that have been selected by natural evolution. These procedures represent an optimum in terms of economy of implementation (small number of living elements involved, like cells, tissues, vessels, etc.), speed and precision. They are also based on operations that come after processing by the retina and the visual areas, thus providing high-level semantic neural networks that define the symptomatology related to the observed medical reality. The extraction of semiotic characteristics of objects of medical interest that have been enhanced and contoured using biomimetic methods allows medical signs and symptoms to be organized into syndromes, thus facilitating the diagnostic process. The concept of biological information encoded in a genetic program that controls development forms a major part of the semiotic metaphor in biology. The development plan is seen as being analogous to a computing program, and the "semiotics of nature" studies the structural relations explored by molecular and evolutionary biology [137]. Y.L.
Kergosien [138] advocates a semiotics of nature in an epistemological sense for analysing interacting biological systems, in order to increase the precision of terms such as "signal" in biology or "symptom" in medicine, and to develop new themes of inquiry into the nature of their biological or medical signification. The Kergosien approach indeed allows for a concept of natural signification. The adaptation of an animal to a specific function is seen as the realization of the natural metaphor [137]. This is the case for retinotopically arranged neuronal sets that code for homogeneity features (brightness, colour, texture, etc.), oriented contours, and corners of an object. Simultaneous representation by colour neurons, complex model neurons (with oriented receptive fields), and hypercomplex model neurons (responding to corners) makes attention and recognition robust and reliable, in the framework of the emergent abilities of optimized complex systems [139][140][141]. The bio-inspired image processing methods also tend to use an information encoding that provides for optimum information storage and query, as done in mnemonic structures like the hippocampus. In general, these structures possess their own formats of information, encoded in periodic temporal neuronal activities, which we can mimic to optimize both compression and retrieval procedures [40].

Figure 8 (caption): Image attention processing. Top left: hippocampus-like neural network with lateral mixed action. Top right: from A to F, progressive attentional focus by locally synchronizing the periodic signal associated to each pixel [56]. Bottom right: desynchronization process between the periodic activities of the neurons X_i (i from 1 to n). doi:10.1371/journal.pone.0006010.g008

Figure 9 (caption): Computer assisted interventions. Left: use of the confinement tree for delimiting security regions (red) in an ultrasound image before computer-assisted puncture [131,132]. Right: zone chosen for introducing an external needle for puncturing a pericardial effusion [132]. doi:10.1371/journal.pone.0006010.g009

All these neural treatments can induce illusions and artefacts. But knowledge about their origin can be used to prevent such abnormalities in the low-level (contrasting and contouring) as well as in the high-level (semantic assignation and recognition) image processing steps. The neural treatments also need to avoid pathologic processing, due to a non-optimal number of neurons and/or interactions and to non-robust parameter values. For that precise purpose, a deep scientific knowledge of the physiology and pathology of the retina constitutes an indispensable inheritance.

Limits of the biomimetic approach

In order to be faster, the methods mimicking the natural process of vision need to be parallelized, as in real neuronal systems. But the attractors of the dynamical systems permitting the contrasting and contouring of images are highly dependent on their modality of implementation, particularly on their updating mode. In general, the fixed configurations obtained by simulating such systems are robust with respect to the mode of updating, but this is not the case for the periodic neural activity we have used in attentional focusing (Figure 8). Hence one must be very careful until the final step of algorithmic implementation. We will focus in the next section on the neural network techniques which are the closest to natural vision processing.

Definition of a formal neural network.
A formal deterministic neural network R of size n is defined by its state variables {x_i(t)}_{i=1,...,n}, where x_i(t) denotes the state of the neuron i at time t (equal to 1 if the neuron fires at this time and to 0 if not). Then the discrete iterative system ruling the change of states in the network is given by the following equation:

    x_i(t+1) = 1 if H_i(t) = Σ_{j ∈ V(i)} w_ij x_j(t) > h, and x_i(t+1) = 0 otherwise,

where V(i) is a neighbourhood of i, H_i(t) plays the role of the somatic electric potential, w_ij designates the synaptic weight representing the influence of the neuron j on the neuron i, and h is a firing threshold. The updating of the neuronal states can be operated:
- either sequentially, after having chosen a certain order for the neurons,
- or block-sequentially, by operating the updating in parallel in each sub-network of a partition of R and by afterwards activating these sub-networks sequentially,
- or in a massively parallel fashion, if only one sub-network exists.

Input in a neural network

If an input I_i(t) is sent to neuron i at time t, it is merged with the information coming from the neighbourhood V(i) in order to build the somatic potential H_i(t):

    H_i(t) = Σ_{j ∈ V(i)} w_ij x_j(t) + I_i(t).

A very simple way of generating such inputs is to choose, for each time interval E_k (supposed to be independent of the others) between two consecutive inputs equal to 1 (the kth and the (k+1)th), the truncated geometric distribution: Prob({E_k ≤ T_i}) = 0 and Prob({E_k = m}) = p_i (1 - p_i)^{m - T_i - 1} for m > T_i, where T_i and p_i denote respectively the refractory period and the spike occurrence frequency on the afferent fiber i bringing the electric input to the neuron i. The truncated geometric processes are independent or correlated between fibers. In Figure 5, we can see the activity of a formal neural network activated by a non-homogeneous input representing the initial image (top left); after iterating the neuronal firing, we obtain the mean asymptotic behaviour (bottom left). The coding is obtained by taking T_i and p_i proportional to the grey level of the initial image. The image on the top right represents the dynamics of the synaptic weights {w_{i_0 j}(t)}_{j ∈ V(i_0)}, which follow a Hebbian rule reinforcing the weight w_{i_0 j}(t) if i_0 and j had the same firing activity at time t, where F is a sigmoidal function of arc-tangent type. The initial distribution {w_{i_0 j}(0)}_{j ∈ V(i_0)} is chosen dog-like (i.e. a difference of Gaussian distributions centred at i_0, the negative Gaussian having the greatest variance, as shown in the red dog G in Figure 5 (top right)), for mimicking the lateral inhibition. The treated image is shown in grey level in Figure 5 (bottom right), from the initial to the treated asymptotic image. We see that the square having a medial activity is enhanced by the lateral inhibition expressed by the dog function, and its final level, after iterating the network until it reaches its asymptotic firing regime, is clearly augmented (see the orange square on the bottom left and the enhanced "mesa" on the bottom right). Such a simulation strongly suggests that an analogy between pixels and neurons can be made, allowing the transfer of neural filtering techniques to image processing [24,31,48].

Gradient enhancement by a neural network

Image enhancement procedure. We now present, in 4 steps, the essentials of a method that is easy to parallelize, based on the same principles as proposed in [31] (a toy implementation of the underlying thresholded network is sketched at the end of this subsection):

1) reduction of a 512×512 NMR image to a 256×256 image by averaging each block of 4 neighbouring pixels, in order to obtain the input image (cf. Figure 6 top left).
2) use of this image as the mean configuration of an input geometric random field transformed by a 256×256 uni-layer neural network implemented in parallel; this network has an internal evolution rule, realizing a treatment of the input signal very close to a cardinal sine convolution, mimicking the lateral inhibition and favouring the occurrence of a very steep gradient on the boundary of homogeneous (in grey level) objects of interest in the processed image. In Figure 6, the object of interest is a brain tumour, its homogeneity coming from the same clonal origin of all its tumour cells.

3) use of the gradient built by the neural network as the potential part of a mixed potential Hamiltonian differential system, whose Hamiltonian part is given by the initial grey level (before the action of the neural network).

4) obtaining the boundaries of homogeneous objects as limit cycles of the differential system, by simulating trajectories of the system in the different attraction basins.

Step 2 consists of defining the input from a geometric random field, i.e. a collection of geometric random processes such that, if p_i(t) denotes the probability of generating a spike on the afferent fiber i to the neuron i at time t, we have: p_i(t) = 0, if t - s_i ≤ T_i, where s_i is the time of the last 1 on the fiber i before time t and T_i denotes the refractory period, chosen as a constant equal to R; and p_i(t) = a_i sin⁺(ν_i(t - s_i - R)), if t - s_i > R, where sin⁺ denotes the positive part of the sine. In order to incorporate an adaptation learning effect, a Hebbian evolution of the w_ij's is chosen, based on the reinforcement of equal grey activities in the same neighbourhood, where the w_ij(0) values come from a dog (difference of Gaussians) distribution of j centred at i, for each i, for mimicking the lateral inhibition. This corresponds to the fact that w_ij(t) is just the non-centred covariance function between the p_i(s)'s and the p_j(s)'s; if ν_i - ν_j and R are small, w_ij(t), when t tends to infinity, tends to log((a_i a_j / 2) sin(ν_i - ν_j)/(ν_i - ν_j)).

Image coding

After normalization of the grey level g(i) of the pixel i between 0 and 1, we take a_i = g(i) and ν_i = λ g(i), and we start the procedure by iterating the deterministic neural network. It is easy to prove that the probability p_i of having 1 as output of the neuron i at time t, just before renormalization, is approximately proportional to a function p'_i whose behaviour, because of an approximate asymptotic formula, is similar to a convolution by a cardinal sine function; this has been used to make the gradient enhancement visible in Figure 6 (top middle). It is easy to verify that this convolution reinforces the "plateau" or "mesa" activities in grey level (or white if necessary). Such activities correspond, in medicine, to pathological objects to be considered as targets during the treatment (like a tumour, in which the same clone of cells gives a homogeneous response in absorbance or resonance) or to physiological objects (like a tissue made of cells having the same function) to be avoided during the treatment. Figure 6 shows the result of a gradient enhancement by the network for a brain tumour. Let us finally remark that we get the treated objects at the asymptotes of the network dynamics. We do not need a stop criterion after a few steps of processing, and the method is easy to parallelize [55,61].
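The toy implementation promised above follows: a runnable sketch of a thresholded lateral-inhibition network with pixels as formal neurons, dog-shaped weights and a massively parallel updating mode. All parameter values (kernel widths, amplitudes, threshold, initialization) are illustrative assumptions, not those of [31].

    import numpy as np
    from scipy.signal import convolve2d

    def dog_kernel(radius=3, s_exc=0.8, s_inh=2.0, a_exc=2.0, a_inh=1.0):
        """Difference-of-Gaussians weights: positive centre, wider
        negative surround, mimicking lateral inhibition."""
        ax = np.arange(-radius, radius + 1)
        xx, yy = np.meshgrid(ax, ax)
        r2 = xx ** 2 + yy ** 2
        return (a_exc * np.exp(-r2 / (2 * s_exc ** 2))
                - a_inh * np.exp(-r2 / (2 * s_inh ** 2)))

    def iterate_network(image, steps=10, h=0.5):
        """Iterate the threshold rule x_i(t+1) = [H_i(t) > h] in a
        massively parallel fashion until a (near) fixed point."""
        w = dog_kernel()
        x = (image > image.mean()).astype(float)   # initial firing states
        for _ in range(steps):
            potential = convolve2d(x, w, mode='same', boundary='symm')
            x = (potential > h).astype(float)      # threshold firing rule
        return x

Applied to a normalized grey-level image, the iteration sharpens the boundaries of homogeneous "mesa" regions, in the spirit of the enhancement shown in Figure 6.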
Continuous operators
The final aim of these methods is to offer a set of continuous operators adapted to the segmentation of grey singularities or grey peaks (0-dimensional objects like micro-calcifications), grey anticlines (1-dimensional objects like vessels) or grey ''mesas'' (2-dimensional objects like tumours or functional regions). The problem of the segmentation of more complicated objects (fractal objects like diffuse tumours affecting, for example, the conjunctive tissue) is open and demands that other, texture-based variables (e.g. the local fractal dimension or the wavelet coefficients) be taken into account instead of, or along with, the grey level.

Let us now consider a compact state set E included in R² and a temporal set T included in R₊ or N, depending on whether the continuous or the discrete version of time is used. Let K(E) denote the set of all compact subsets of E. If we provide K(E) with the Hausdorff topology (defined by the Hausdorff distance d between subsets), we can define a compact-set-valued (csv) flow Φ as a continuous application from K(E) × T to K(E) which is a semi-group. Since (K(E), d) is a metric space, which is compact if E is compact, we can apply the operators limit and basin as defined in [36,37] to the set-valued flow Φ, and hence define the notions of attractor and of stability basin. We will give some examples of csv flows whose attractors are objects to be contoured in image processing, or final shapes to be obtained at the end of a morphological development, these targets being often the same.

Potential flows. In snake contouring [69][70][71][72], the aim is to obtain the boundaries of an object of interest by progressively deforming the boundaries of an initial well-known set K(0) (e.g., a sphere) placed outside (respectively inside) the object, and whose deformation K(t) causes the decrease (respectively increase) of a potential function P [75] depending on S(K(t)), V(K(t)), ∂K(t) and g(x), which denote respectively the external area, the inner volume, the boundary, and the grey level at the point x of the compact K at iteration t. The gradient iterations of P correspond to a discrete potential flow. For obtaining the continuous version, it suffices to use a potential ''mutational'' equation [81][82][83]. We can also add spline-like terms, e.g. δ∫_{∂K(t)} C(x) dx, where C(x) = (∂²g/∂x₁²)(∂²g/∂x₂²) is the mean Gaussian curvature at x (in order to minimize the total variation of the local curvature, as for spline functions), plus a mean-square criterion forcing ∂K(t) to pass in the vicinity of points known a priori with fixed curvatures (in particular singular parabolic or saddle points, if their localization is known a priori).

Mixed potential Hamiltonian segmentation. The continuous modelling allows the stable evolution of differential operators such as the gradient or the Laplacian. Our segmentation consists of building a differential equation system whose stable manifold is the surface of the object we are looking for. Finding this manifold turns out to be a particular case of the surface intersection problem and provides an immediate analytical representation of the surface. The other major advantages of this method are that it performs segmentation and surface tracking simultaneously, that it describes complex structures in which branching problems can occur if the segmentation is purely local, and that it provides accurate and reliable results. Let us first consider the 2D problem.
The central idea of the method is based on the Thom–Sebastiani conjecture [35] concerning the differential system dx/dt = f(x,y), dy/dt = g(x,y). In the neighbourhood of a stable singularity or of a limit cycle of the corresponding velocity vector field, supposed to be continuous, let us suppose that we can decompose the system into two parts, a potential one and a Hamiltonian one, such that

dx/dt = −∂P/∂x + ∂H/∂y + R₁(x,y),  dy/dt = −∂P/∂y − ∂H/∂x + R₂(x,y),

where the residue R(x,y) = (R₁, R₂) tends to 0 when (x,y) tends to the stable singularity or to the limit cycle. Such a decomposition has been proven for a large class of Liénard systems [41][42][43][44]. The Thom–Sebastiani conjecture assumes that this result still holds for sufficiently regular systems. In the following we will systematically exploit this possibility of considering a contour as the limit cycle of a mixed potential Hamiltonian system. In fact, we consider now the boundary surrounding a 2D object with an approximately homogeneous grey level g, thus verifying g(x,y) = k. The corresponding curve is represented in parametric coordinates by (x(t), y(t)). The continuous modelling implies the existence of the first derivatives of g, so a solution should verify the following equation, obtained by differentiating g(x,y) = k:

(∂g/∂x) x'(t) + (∂g/∂y) y'(t) = 0.

A particular solution of this equation is x'(t) = ∂g/∂y, y'(t) = −∂g/∂x, but this system does not provide a stable solution: a perturbation (due to noise) moving the curve away from the initial contour line could not be corrected. That is why we add a component which brings the curve back to the contour line defined by g(x,y) = k, according to the steepest slope line of the function (g − k)². We thus obtain:

x'(t) = ∂g/∂y − β (g − k) ∂g/∂x,  y'(t) = −∂g/∂x − β (g − k) ∂g/∂y.

This system consists of two parts: the first one corresponds to an ''edge tracking'' component, and the second one is a kind of ''elastic force'' which allows noisy image processing; the β parameter balances these two terms. The system may be solved by numerical analysis methods with initial conditions, like the Runge–Kutta–Gear method. The parametric representation of the curve is then directly obtained.

This continuous method can be applied in 3 dimensions to look for particular features of the surface of an object of interest. Let us consider such a surface, defined by f(x,y,g) = constant and suitably parameterized. Our boundary tracking method can be implemented as follows: the algorithm starts with a point on the surface with a grey value h. For each slice of level h, the differential system is solved in order to obtain a closed curve. From some points of this curve, we follow the object surface until the next slice by building new 2D differential systems in the slice-level planes. The algorithm stops when all slices have been processed or when the object surface has been entirely described. This method allows us to find automatically all the components of a complex object in which branching problems may occur, and to determine how they are linked together. This possibility is one of the major advantages of the method, because surface reconstruction from a set of contours is a critical step for complex structures. Classically, interpolation between contours is performed by triangulation techniques or by creating intermediate contours with dynamic elastic interpolation, but these methods sometimes need interaction with the user. In our method, the surface modelling is performed in the segmentation step. This algorithm has been tested on MRI images for stereotaxy, before stimulation-needle introduction or brain tumour puncture [118][119][120][121][122][123][124][125][126][127][128][129].
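A minimal numerical sketch of this edge-tracking/elastic-force system is given below (our own illustration: the Gaussian grey level, β and k are assumed values, and SciPy's Runge–Kutta RK45 scheme stands in for the Runge–Kutta–Gear method cited above):

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, k = 2.0, 0.5                 # elastic weight and target grey level (assumed)

def g(x, y):                       # synthetic smooth grey level: a Gaussian peak
    return np.exp(-(x**2 + y**2))

def rhs(t, z):
    x, y = z
    gx, gy = -2 * x * g(x, y), -2 * y * g(x, y)      # analytic gradient of g
    # edge-tracking part (gy, -gx) plus elastic force -beta*(g - k)*grad(g)
    return [gy - beta * (g(x, y) - k) * gx,
            -gx - beta * (g(x, y) - k) * gy]

# Start slightly off the contour; the elastic term pulls the trajectory back
# onto the limit cycle g(x, y) = k, here a circle of radius sqrt(ln(1/k)).
sol = solve_ivp(rhs, (0.0, 40.0), [1.1, 0.0], rtol=1e-8, atol=1e-10)
xf, yf = sol.y[0, -1], sol.y[1, -1]
print(f"final grey level {g(xf, yf):.4f} (target {k}); radius {np.hypot(xf, yf):.4f}")
```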
The remarkable Gaussian line. Homogeneity is not always a stable characteristic of an anatomical structure. So we now present a differential system tracking the set where H(g) = 0, H being an operator similar to the Laplacian or Marr–Hildreth detectors. Let us define the remarkable Gaussian line of a peak as the set of points where the mean Gaussian curvature of the peak vanishes (Figure 10); its equation is given in [41]. If H' = |H|, let us consider the mixed potential Hamiltonian system [42][43][44] obtained by taking H' as potential part and H as Hamiltonian part:

dx/dt = −a ∂H'/∂x + b ∂H/∂y,  dy/dt = −a ∂H'/∂y − b ∂H/∂x.

We consider, in Figure 10 (bottom-right), the new grey function H(x,y) instead of the function g(x,y) at each pixel (x,y), and we display (bottom-left) the mixed potential Hamiltonian differential system above, of which the characteristic line is a limit cycle, called the Hamiltonian contour. Its first term is of steepest-descent, dissipative nature, and along the flow the trajectories converge to the zeros of H'(x,y). On the set of the zeros of H'(x,y), the second, Hamiltonian term of the differential system, which is of conservative type, becomes preponderant. The parameters a and b can be used to tune the speed of convergence of the differential system to the limit cycle. The usual Runge–Kutta–Gear discretization scheme ultimately yields an algorithm for the differential system which is quite easy to implement; on each pixel (i,j) (boundary effects being neglected), H(i,j) is evaluated by finite differences.

An important property of the remarkable Gaussian line is that, in the case of a Gaussian peak, it contours the projection of a volume equal to 2/3 of the total volume of the peak. This property remains valid, with a good approximation, in case of moderate kurtosis and skewness of the peak. An advantage of this technique is that we do not perform a direct segmentation of the grey level; thus the segmentation is much finer than the corresponding one performed by the watershed-lines method or by its variant with markers [103]. We only segment the upper part of the peak and then multiply by 3/2 the activity integrated inside the remarkable line. This approach is interesting because the lower part of the peak is often noisy. The method seems particularly efficient when the peaks are well separated. If they are close (see Figure 10, bottom-right), then we need to tune the parameters a and b and to start the trajectories inside the peaks. For finding a contour line inside, we can: 1) calculate the total variation V(h) = ∫_{C(h)} ‖∇g(x)‖ dx of the gradient norm ‖∇g‖ along a contour line C(h) of level h; 2) both decrease and increase h towards two limits h₁ < h and h₂ > h, in order to find an intermediate value of V(h) greater than the two values V(h₁) and V(h₂) calculated at the extremities h₁ and h₂; then C(h₁) and C(h₂) constitute an annulus whose intersection with the remarkable line is not empty; 3) choose the initial condition on C(h₂) for starting the simulation of the differential system. Eventually, we can notice that the remarkable Gaussian lines can serve for matching images or objects of interest, for example when comparing images to a reference coming from an atlas. They constitute a feature that is in general more robust than parabolic or saddle singularities, which are sensitive to perturbations causing local skewness of the grey peaks.
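The 2/3-volume property can be checked numerically. The sketch below is our own illustration and assumes a Laplacian-type detector H(g) = g_rr + g_r/r applied to a rotationally symmetric Gaussian peak; with that assumption the enclosed volume fraction comes out as 1 − e⁻¹ ≈ 0.63, close to the stated 2/3.

```python
import numpy as np

# Assumption: Laplacian-type detector H(g) = g_rr + g_r/r applied to the
# rotationally symmetric Gaussian peak g(r) = exp(-r^2 / (2 s^2)); H then
# vanishes at r* = s*sqrt(2), and we integrate the volume enclosed inside r*.
s, n = 1.0, 200_000
r = np.linspace(1e-6, 6 * s, n)
dr = r[1] - r[0]
g = np.exp(-r**2 / (2 * s**2))
H = g * (r**2 / s**4 - 2 / s**2)      # analytic radial Laplacian of g

r_star = r[np.argmin(np.abs(H))]      # radius of the remarkable line
inside = np.sum(2 * np.pi * r[r <= r_star] * g[r <= r_star]) * dr
total = np.sum(2 * np.pi * r * g) * dr
print(f"r* = {r_star:.3f} (expected {s * np.sqrt(2):.3f}); "
      f"enclosed volume fraction = {inside / total:.3f}")  # ~ 1 - 1/e ~ 0.632
```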
(Figure 10. Top-middle: level sets of the confinement tree in a brain tumour NMR slice [111]. Top-right: watershed tree [103]. Middle-left: level sets of the watershed tree [110]. Middle-right: watershed tree and landscape with different water levels. Bottom-left: successful (left) and failed (right) contour of the remarkable Gaussian line, in the case of one isolated and, respectively, two close grey-level peaks [41]. Bottom-right: 3D image of two close peaks [41]. doi:10.1371/journal.pone.0006010.g010)

Watershed contouring. The watershed line is a concept first defined by geographers in order to characterize the main features of a landscape: a drop of rain that reaches the ground will flow down to a sea or an ocean. In the case of France, the watershed line splits the country into two parts, the Atlantic zone and the Mediterranean zone. These zones are called 'catchment basins', and the oceans are their minima, i.e. the attraction basins of the gradient operator corresponding to the gravitational dynamics of the drop along the steepest gradient lines of the relief surface. They define a partition of this relief, and the boundaries of the catchment basins define, on the pixel plane, the watershed lines [105][106][107][108][109]. In regular cases these lines coincide with the crest lines surrounding the catchment basins. It is easy to understand the interest of this concept in image processing: grey-level images can be considered as relief structures, and the watershed lines are a good way to separate light (low grey level) zones from dark (high grey level) ones. It is particularly interesting to determine the watershed lines of the symmetrical reverse landscape obtained by considering the new grey level 1 − g, where g is the initial normalized grey level obtained after the contrasting step and after fixing the maximum of g at a normalized value equal to 1. The watershed lines verify variational principles: i) when progressively filling a catchment basin with water, its inner area passes through a series of inflexion points corresponding to the successive saddle points reached by the water; each inflexion point corresponds to a local maximum of the second derivative of the inner area; ii) for a given inner area, the watershed lines are those containing the maximum of water. The watershed line is computed on a discrete image by immersion simulation, locating it on the meeting points of several catchment basins (Figure 10). The first discrete algorithms computing watershed lines by immersion simulation were proposed in [105][106][107][108][109] with a discrete operator. In [103,110], the watershed line is computed on the reverse image, in order to have one and only one local maximum of the original image in each catchment basin of the reverse image; the resulting labelling (still not a partition) is done on the original image. We used the Vincent–Soille algorithm [105] on discrete images, with a linear complexity (about 7.25n, where n denotes the number of pixels in the image). It can also be used in 3 dimensions.
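For readers wanting to experiment, a marker-based version of this immersion idea can be sketched with scikit-image's watershed routine (used here instead of a re-implementation of the Vincent–Soille algorithm; the two-peak test image and all parameter values are our own assumptions):

```python
import numpy as np
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

rng = np.random.default_rng(2)

# Synthetic grey-level "landscape": two smooth bright peaks plus mild noise.
yy, xx = np.mgrid[0:128, 0:128]
img = (np.exp(-((xx - 40) ** 2 + (yy - 64) ** 2) / 200.0)
       + np.exp(-((xx - 90) ** 2 + (yy - 64) ** 2) / 200.0)
       + 0.02 * rng.standard_normal((128, 128)))
img /= img.max()                            # normalized grey level g in [0, 1]

# Flood the reverse landscape 1 - g, so that each bright peak of g sits at the
# bottom of its own catchment basin; markers are the local maxima of g.
coords = peak_local_max(img, min_distance=10, threshold_abs=0.3)
markers = np.zeros(img.shape, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)

labels = watershed(1.0 - img, markers)      # immersion-style flooding
print("number of catchment basins found:", labels.max())
```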
Reaction-diffusion contrasting. Several methods of image contrasting using linear or non-linear differential operators have been proposed [92][93][94][95][96][97][98][99]. These methods can be parallelized, as for the neural networks, and we will show in the following that there exists a deep relationship between the discrete neural network approach and the continuous differential operator approach.

1) The Catté–Lions–Morel–Coll non-linear diffusion operator. It is well known that the solution of the heat equation ∂u/∂t = kΔu = k div(grad u) is given by convolution with a Gaussian kernel of variance σ² = 2kt, when choosing the grey level as initial condition u(·,0). This property has suggested [94] the use of another, non-linear diffusion operator,

∂u/∂t = div( g(‖grad(G * u)‖) grad u ),

where G is a Gaussian kernel and g is a non-negative, non-increasing function on R₊ verifying g(0) = 1 and tending to 0 at infinity; in practice, we can choose for g a set function whose value is 1 on the interval [0,S] and 0 on ]S, +∞[: there is diffusion if and only if ‖grad(G * u)‖ ≤ S and, after a certain transient, a gradient remains only on the boundary of sufficiently discriminable objects. For example, Figure 11 presents images after some hundreds of iterations, showing the gradient on the boundary of brain structures. The end of the procedure, as for the heat operator (Figure 11, left (a)), shows that diffusion wins, giving a constant grey level at the asymptotic state. In order to improve the method by obtaining the contrasted image at the asymptotic state of the simulation, we must add a reaction term, so that the final expected image is the attractor of a differential reaction-diffusion operator, as for the iterative discrete neural network in Figure 11, left (c).

2) The non-isotropic reaction-diffusion operator. Searching for a continuous operator whose discrete finite-element scheme is a deterministic neural network similar to that presented above, a new reaction-diffusion operator has been proposed [93,95,96], with direct reference to the discrete neural network approach [48,49,52]. The deterministic neural network with threshold 0 recalled above provides a natural discretization of this operator, obtained by identifying x_i(t) with u(ih,t) and by remarking that the neural network system has the same asymptotic behaviour as the associated differential system when λ is sufficiently large. In [93], it is shown that, for adapted values of R, 1D objects homogeneous in grey level can be enhanced in a heterogeneous environment, in the same way as for a neural network system. In [96], and in Figure 11 (right), the same proof is given for 2D objects like the internal cavities of the heart, where a snakes-splines procedure is used after contrasting.

3) Proposal for a new image reaction-diffusion-chemotaxis operator. In order to have, as for the previous operator, the final treated image as the asymptote of a differential operator, we propose to consider the grey level u as a chemotactic substrate concentration consumed by animals whose concentration is denoted by v [84,85,144]. The principle of this method consists of initially locating a uniform concentration v(0) of animals on the initial grey-level image u(0), or on its boundary: the substrate u can diffuse with a term εΔu and is consumed with a saturation rate equal to −Kuv/(u+k); the animal concentration v can diffuse, attracted by the substrate, with the term DΔv, is submitted to a drift in the direction of the substrate peaks through the chemotactic term −χ div(v grad u), and increases (because of reproduction) with the term K'uv/(u+k'). Let us remark that the two first terms ruling the animal motion can be replaced, if we do not want to introduce a drift term, by an attraction-diffusion term like D(∂²v/∂x² · ∂u/∂x + ∂²v/∂y² · ∂u/∂y). The corresponding partial differential system is then given by

∂u/∂t = εΔu − Kuv/(u+k),  ∂v/∂t = DΔv − χ div(v grad u) + K'uv/(u+k'),

or by the following PDE:

∂u/∂t = εΔu − Kuv/(u+k),  ∂v/∂t = D(∂²v/∂x² · ∂u/∂x + ∂²v/∂y² · ∂u/∂y) + K'uv/(u+k').

In the two cases above, the asymptote of u is 0 and the asymptote of v gives the ''treated image''.
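A minimal explicit finite-difference sketch of the first reaction-diffusion-chemotaxis system above is given below (our own illustration, with periodic boundaries; all coefficient values are assumptions, chosen small enough for stability of the explicit scheme):

```python
import numpy as np

def lap(a):                          # 5-point Laplacian, periodic boundaries
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0)
            + np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

def div_v_grad_u(v, u):              # div(v grad u) by central differences
    uy, ux = np.gradient(u)
    return np.gradient(v * ux, axis=1) + np.gradient(v * uy, axis=0)

# Substrate u = initial grey level (a bright disc), animals v = uniform layer.
yy, xx = np.mgrid[0:96, 0:96]
mask = (xx - 48) ** 2 + (yy - 48) ** 2 < 200
u = mask.astype(float)
v = np.ones_like(u)

eps, D, chi = 0.05, 0.1, 0.5         # substrate diffusion, animal diffusion, chemotaxis
K, kk, Kp, kp = 1.0, 0.1, 0.5, 0.1   # consumption and reproduction constants (assumed)
dt = 0.02

for _ in range(400):
    du = eps * lap(u) - K * u * v / (u + kk)                    # diffusion - consumption
    dv = (D * lap(v) - chi * div_v_grad_u(v, u)
          + Kp * u * v / (u + kp))                              # diffusion - drift + growth
    u, v = u + dt * du, v + dt * dv

# u decays towards 0; the asymptotic v is read off as the "treated image".
print("max u:", round(float(u.max()), 4),
      "| mean v inside / outside the disc:",
      round(float(v[mask].mean()), 3), round(float(v[~mask].mean()), 3))
```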
The corresponding image processing leads to a contrast enhancement before segmentation: in Figure 7, we can see the initial image on the bottom-left and the contrasted one on the bottom-right. The contours have then easily been obtained by applying a snakes-splines procedure [71,72]. If we add to the second equation of the differential system a Dupin term like Kv/Δu, we encourage the animals to follow the Dupin lines, i.e. the inflexion curves, which is very suitable for the segmentation of grey anticlines (for example, in vessel segmentation).

Conclusion
The neuro-mimetic lateral inhibition mechanism and the set-valued, snakes-like flows allow the generation of various image processing methods (essentially contrast enhancement and contouring). We have given numerous applications of this methodological approach in image processing, essentially dedicated to medical imaging and surgical robotics. Further theoretical and numerical studies have to be completed in order to show the utility of these new tools in morphogenesis modelling, allowing the generation of artificial objects of biological and/or medical interest (like cells, tissues, organs) by using the same operators as for recognizing them in a real image. We conjecture that the spatial information about anatomical organs obtained from the biomimetic image-processing methods relates to the localization of morphogens, which results from the morphogenetic processes creating these organs by combining robust genetic regulatory networks [145,146], ruling their metabolic reactions and cell proliferation, with the classical diffusion [147] of morphogens inside their tissues. In particular, the main patterns observed during embryonic formation can be recovered in the biomimetic processing of images of the final adult organ.
Performance limitation of networked control systems with networked delay and two-channel noises constraints
ABSTRACT The performance limitation of networked control systems (NCSs) with networked delay and two-channel noise constraints is studied in this paper. The networked parameters considered are mainly the networked delay in the forward channel and the white Gaussian noise constraints existing in the forward and feedback channels. The expression of the tracking performance limitation is obtained by using the spectral decomposition technique and selecting the optimal one-parameter compensator structure. The obtained result shows that the tracking performance limitation of NCSs is related to the positions of the non-minimum phase zeros, the positions of the unstable poles, the networked delay, the two-channel noises and the input signal. It also shows that the networked delay and the white Gaussian noise influence the tracking performance limitation of NCSs. Finally, a simulation example is given to demonstrate the effectiveness of the theoretical results.

Introduction
With the development of internet and computer technology, traditional point-to-point control systems can no longer meet practical needs. Since the internet has become inextricably linked to daily life, using networks in traditional control systems is a clear development trend. Networked control systems (NCSs) emerged to meet this need and have since developed rapidly; they remain an important research issue (Almakhles, Swain, Nasiri, & Patel, 2017; Zhang, Shi, Wang, & Yu, 2017; Zhang, Yan, Yang, & Chen, 2011; Zhang, Han, & Yu, 2016). NCSs have many advantages, such as flexibility, low cost, simple installation and maintenance, and reduced weight and power requirements. In recent years, much research has investigated the stability analysis of NCSs with communication constraints, e.g. quantization (Fu & Xie, 2005; Xiao, Xie, & Fu, 2010; Yuan, Yuan, Wang, Guo, & Yang, 2017), time delay (Liu, 2010; Luan, Shi, & Liu, 2011), bandwidth (Rojas, Braslavsky, & Middleton, 2008) and packet loss (Chen, Gao, Shi, & Lu, 2016; Pang, Liu, Zhou, & Sun, 2016). The research progress of NCSs was summarized from several aspects in Mahmoud (2016). The stability of quantized feedback control systems based on event triggering was considered in Boukens, Boukabou, and Chadli (2017). Sufficient conditions ensuring the stability and dissipativity of the filtering error system, established via a mode-dependent periodic Lyapunov function, were proposed in Lu et al. (2018). The stability analysis of sampled systems by a new time-based discontinuous Lyapunov function was studied in Shao, Han, Zhao, and Zhang (2017). In NCSs, the network delay is time-varying or random (Gao, Jiang, & Pan, 2018; Su & Chesi, 2017; Tao, Wu, Su, Wu, & Zhang, 2018; Zhang, Wu, Shi, & Zhao, 2015). A new Lyapunov function, with stability results dependent on both the data packet dropouts and the time delay, was proposed in Tao, Lu, Wu, and Wu (2017). The stability of NCSs with time delay was studied in Liu, Zhang, and Xie (2017). The problem of an output-feedback delay-compensation controller for NCSs with random network delay was discussed in Zhang, Lam, and Xia (2014). Sometimes, however, authors study a deterministic time delay to reduce the difficulty of the problem. The stability of switched positive linear systems with constant time delay was studied in Zhao, Zhang, and Shi (2013).
The authors focused on the problems of robust stabilization and robust H∞ control for linear systems with a constant time delay in De Souza and Xi (1999). Nowadays, more and more researchers in the control community are interested in the performance study of control systems; see Wu, Zhan, Zhang, Jiang, and Han (2017); Wu, Zhou, Zhan, Yan, and Ge (2017); Zhan, Guan, Zhang, and Yuan (2013); Zhan, Wu, Jiang, and Jiang (2015) for details. The optimal tracking performance of single-input single-output (SISO) NCSs with bandwidth and network-induced delay constraints was studied in Zhan, Guan, Zhang, and Yuan (2014). The relationship between the bandwidth constraint and the optimal modified tracking performance of MIMO NCSs was studied in Sun, Wu, Zhan, and Han (2016). Guan, Chen, Feng, and Li (2013) investigated the tracking performance limitation for MIMO LTI systems with coloured channel noise and a limited-bandwidth channel. The above results consider only one-way channel constraints. However, the networked delay often appears, and it can result in performance degradation of control systems and even cause a system to become unstable. Communication channel constraints often exist in both the forward and feedback channels, and the tracking performance limitation of NCSs with two-way channel constraints is more difficult to study. It is therefore necessary and important to study the performance limitation of NCSs with networked delay and two-channel noise constraints.

The main contributions of this paper are as follows. This paper introduces a model for NCSs in which the networked delay exists in the forward channel while noise exists simultaneously in the forward and feedback channels. In this system, the relationship between the networked delay and the white noise constraints existing in the forward and feedback channels is investigated. An expression for the tracking performance limitation is then achieved by using the co-prime factorization and the spectral decomposition technique. The achieved results reveal that the tracking performance limitation of NCSs is affected by the non-minimum phase zeros, the unstable poles, the time delay, the white noise and the reference input signal. The results provide good guidance for the design of NCSs.

Problem statement and preliminaries
To discuss the tracking performance limitation of NCSs with networked delay and two-channel noise constraints, we consider the NCS model of Figure 1. In Figure 1, the signal r is a random reference signal with variance φ₁²; K denotes the one-parameter compensator; G denotes the plant model; τ is the networked delay; the signal y is the system output; and n₁, n₂ are white Gaussian noises with variances φ₂² and φ₃². For a given plant, the tracking error of the NCSs is defined as e = r − y, and the tracking error dynamics follow from Figure 1. The rational transfer function G admits a co-prime factorization G = N M⁻¹, where N, M ∈ RH∞ satisfy the Bezout identity X N + Y M = 1 for some X, Y ∈ RH∞. All the compensators that stabilize the system can be characterized by the Youla parameterization K = (X + MQ)/(Y − NQ), Q ∈ RH∞. It is well known that a non-minimum phase transfer function can be factorized into a minimum phase part and an all-pass factor: N = N_n B_z and M = M_m B_p, where N_n and M_m are the minimum phase parts and B_z, B_p are the all-pass factors. B_z includes all non-minimum phase zeros z_i ∈ C₊, i = 1, …, n, and B_p includes all the unstable poles of the given plant, p_j ∈ C₊, j = 1, …, m.
B_z and B_p can be represented as the Blaschke products

B_z(s) = ∏_{i=1}^{n} (z_i − s)/(z̄_i + s),  B_p(s) = ∏_{j=1}^{m} (p_j − s)/(p̄_j + s).

The tracking performance of the NCSs is defined as the mean-square tracking error J = E{‖e‖²}.

Performance limitations of the NCSs with the networked delay and two-channel noises
The tracking performance limitation of NCSs is measured by the minimal tracking error achievable over all linear stabilizing controllers (denoted by K), determined as J* = inf_{K∈K} J. Assuming the input signal is uncorrelated with the white Gaussian noises n₁, n₂, the tracking performance can be decomposed accordingly, and, from (3), (4), (5), (11) and (12), we obtain Theorem 3.1.

Theorem 3.1: Assume that a given plant has non-minimum phase (NMP) zeros z_i ∈ C₊, i = 1, …, n, and unstable poles p_j ∈ C₊, j = 1, …, m. Under the networked delay and two-channel noise constraints, the tracking performance limitation of the NCSs admits a closed-form expression in terms of these zeros and poles, the delay τ and the noise variances.

Proof: Substituting the co-prime factorizations (7), (8) into (13) and (14), and using (6), we define an auxiliary quantity as in (17). From (15) and (17), because B_p and e^{−τs} are all-pass factors, the norm is preserved, and a partial-fraction procedure can be applied. According to (9), we obtain M(p_j) = B_p(p_j) M_m(p_j) = 0. Because N_n and M_m are the minimum phase parts, we can choose a suitable Q ∈ RH∞ to cancel the free term, and, according to Toker, Chen, and Qiu (2002), the infimum is attained. This completes the proof.

Illustrative examples
In this section, the obtained result is illustrated by a classical example. The transfer function of the given plant has, for k > 0, an NMP zero located at z_i = k and an unstable pole located at p_j = 3. Supposing the networked delay is τ = 0.3 and the input signal has φ₁² = 1, with k = 1, according to Theorem 3.1 we have

J₁* = 2 + (2/3)e^1.8 + (4/3)e^1.8 φ₂² + e^0.6 φ₃².

The tracking performance limitations of the NCSs with different two-channel noises are shown in Figure 2. In Figure 2, the two-channel noises affect the tracking performance limitation of the NCSs: when the two-channel noises become larger, the tracking performance limitation of the NCSs becomes worse. The influence of the networked delay also exists in the NCSs, so a simulation of the influence of the networked delay on the NCSs is necessary. In Zhan, Guan, and Wu (2010), the performance limitation of networked control systems with network-induced delay is discussed; in this paper, we additionally consider the NMP zeros, the unstable poles and the input signal. The choice of data is the same as in the previous example: the networked delay is τ = 0.3, φ₁² = 1, φ₂² = 2 and φ₃² = 3. According to the Theorem, we obtain

J₁* = 2k + (2/3)e^1.8 · 2k/(3 − k) + (4/3)e^1.8 · (3 + k)/(3 − k) + 6k e^{0.6k}/(3 − k),

and, according to Zhan et al. (2010), we can obtain the corresponding noise-free limitation J₂*. The tracking performance limitation of the NCSs with different NMP zeros is shown in Figure 3. Figure 3 reveals that the tracking performance limitation of the NCSs depends on the NMP zeros and the unstable poles of the given plant: when the NMP zeros move closer to the unstable poles, the tracking performance limitation of the NCSs tends to infinity. By comparing J₁* and J₂*, we can see that noise existing in the communication channel makes the performance limitation of NCSs worse. In earlier research, few articles discuss noise in both the forward and feedback channels, so it is valuable to examine how the tracking performance limitation differs under the effect of noise in the forward versus the feedback channel.
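As a numerical companion to this comparison, the following sketch evaluates the k = 1 expression above for noise placed in either channel (our own illustration; the numbers inherit any uncertainty in the reconstructed formula):

```python
import numpy as np

# Tracking performance limitation for the worked example (tau = 0.3, k = 1,
# phi1^2 = 1): J* = 2 + (2/3)e^1.8 + (4/3)e^1.8 * phi2^2 + e^0.6 * phi3^2.
def J_star(phi2_sq, phi3_sq):
    base = 2 + (2 / 3) * np.exp(1.8)
    return base + (4 / 3) * np.exp(1.8) * phi2_sq + np.exp(0.6) * phi3_sq

for var in (0.0, 0.5, 1.0, 2.0):
    print(f"variance {var}: forward-only J* = {J_star(var, 0.0):.3f}, "
          f"feedback-only J* = {J_star(0.0, var):.3f}")

# The forward-channel coefficient (4/3)e^1.8 ~ 8.07 exceeds the feedback
# coefficient e^0.6 ~ 1.82, matching the observation from Figure 4 that
# forward-channel noise degrades the tracking performance more strongly.
```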
Assuming that the networked delay is τ = 0.3 and the input signal has φ₁² = 1, with k = 1, according to the Theorem:
Supposing there is no noise in the forward channel (φ₂² = 0), J₁* = 2 + (2/3)e^1.8 + e^0.6 φ₃².
Supposing there is no noise in the feedback channel (φ₃² = 0), J₂* = 2 + (2/3)e^1.8 + (4/3)e^1.8 φ₂².
When the noise is in the forward channel (φ₂² = 0.5), J₁* = 2 + (2/3)e^1.8 + (2/3)e^1.8 + e^0.6 φ₃².
When the noise is in the feedback channel (φ₃² = 0.5), J₂* = 2 + (2/3)e^1.8 + (4/3)e^1.8 φ₂² + (1/2)e^0.6.
Figure 4 shows that the tracking performance limitation of the NCSs is affected by the noise, whether the noise appears in the forward channel or in the feedback channel, and that the degrees of impact of noise in the forward and feedback channels are different: Figure 4 indicates that the tracking performance limitation is more strongly affected by the noise in the forward channel. From Figure 5, we can see that the tracking performance limitation curves intersect at some point (denoted P); before point P, noise in the forward channel has the greater impact on the tracking performance limitation of the NCSs, while after point P, noise in the feedback channel has the greater impact.

Conclusion
In this paper, we explore the tracking performance limitation of NCSs with two-channel noises. The proposed model takes into consideration the channel noise in the forward and feedback channels and the networked delay in the forward path. The analytical expression of the tracking performance limitation is derived by using the H₂-norm method and spectral factorizations. The obtained results show that the tracking performance limitation mainly depends on the positions of the non-minimum phase zeros, the positions of the unstable poles, the networked delay and the two-channel noises. An illustrative example is analysed to demonstrate the effectiveness of the proposed method. Possible future extensions of this work include the study of more general plants, such as multiple-input multiple-output nonlinear NCSs, more complex channel models, such as the fading channel, and more communication channel constraints, such as quantization effects, bandwidth, time-varying delay and signal-to-noise ratio.

Disclosure statement
No potential conflict of interest was reported by the authors.
The Coordinated Relationship between Investment Potential and Economic Development and Its Driving Mechanism: A Case Study of the African Region
In order to analyze the coordination relationship between investment potential and economic development and its driving mechanisms, this study integrated the entropy weight method, the coupling coordination degree model, exploratory spatial data analysis, the geographic detector, and the geographically weighted regression model. The developed approach was applied using data from 51 African countries from 2008 to 2016. The results showed that: (1) While the level of economic development on the African continent has increased steadily, the overall investment potential needs to be improved. The mean economic development index rose from 0.116 to 0.151, but the economic gap among countries remained highly evident. (2) Uncoordinated development and barely coordinated development were the dominant types of relationship between investment potential and economic development in African countries. The spatial distribution showed significant agglomeration characteristics; the sub-hot spot and sub-cold spot regions maintained a strong dependence on their hot spot and cold spot counterparts. The hot spot areas gradually formed an agglomeration in Southern Africa and a highly fragmented distribution in other areas. The cold spot areas formed a spatial distribution pattern of “one core and one belt”, with some countries in Western Africa forming the core, while some Central and East African countries constituted the belt. (3) The coordination relationship between investment potential and economic development was influenced mainly by factors including the economic base, residents' living standards, industrial construction level, information support level, and business friendliness. Using the geographically weighted regression coefficient distributions of the indicators, the driving mechanisms of the spatial distribution could be divided into five types: economic base driven, industry-driven, information application-driven, business convenience-driven, and consumer market-driven.

Introduction
In the era of economic globalization, international investment and trade have become more ubiquitous and profitable, becoming essential engines for stimulating global economic growth. Enhancing the competitiveness of marketable goods, promoting the development of industrial technology, reducing fund shortages in host countries, and optimizing the structure of foreign trade commodities are critical in boosting global trade [1]. With the growing trend of international investment liberalization and the exponential rise of transnational investments [2,3], the difficulties faced by transnational investors and host countries are becoming more and more complex. Transnational investors have limited understanding of the host country's economic conditions, trade risks, market operation uncertainties, and government regulatory risks [4], which could lead to difficulties in controlling investment costs and projecting prospects and profits, and could eventually result in investment losses.
Meanwhile, insufficient consideration is given by host countries to improving the domestic investment environment, making it challenging to formulate reasonable and attractive foreign investment policies [5]. As a result, critical opportunities for attracting investments and technical upgrades could be overlooked. Establishing a scientific investment potential evaluation index system thus becomes particularly important for determining investment orientation and avoiding investment risks [6]. Static analysis of investment potential based on the entropy weight method (EWM), the grey correlation degree model (GCDM), factor analysis (FA), and data envelopment analysis (DEA) has percolated into the mainstream of current research [7,8]. However, this static analysis largely ignores the impact of economic cycle changes, resulting in a lack of long-term reference for investment potential. Most scholars have used investment hotspots of Western Europe and North America as research objects and have paid little attention to the evolution of the investment potential of other regions, such as Africa, Latin America, and Southeast Asia, in the context of economic globalization. Some scholars have established investment potential evaluation systems using fundamental indicators such as GDP and population size. However, these assessment systems have limited capacity to capture the impact of resource development, the economic environment, the open environment, the entrepreneurial environment, and other development systems on investment potential. Establishing an evaluation index system that comprehensively reflects investment potential is therefore crucial for analyzing the evolution of the investment potential of underdeveloped regions.

Investment potential and economic development are linked by a coordination relation of mutual influence, mutual connection, and mutual restriction. The level of economic development of the host country provides an essential guarantee for the improvement of investment potential: it influences the level of government investment in infrastructure construction, the living conditions of communities, and various market activities, and it is directly related to the commercial space for transnational investment and operation. In the context of the relative stability of the international market environment, the global economy, and the political structure, investment potential is positively related to international investment in the country. This can directly influence the fixed capital accumulation of the host country, the choice of corporate layout, the modernization of production management concepts, and the improvement of the technological level of the host country, providing the host country with the driving force needed for economic development. In the era of global trade, the failure to establish a suitable investment environment can lead to a significant reduction in foreign capital investment, which is not conducive to the overall development of the domestic market economy and creates difficulties in guaranteeing a stable trend of economic growth. Deficiencies in investment policies, foreign capital utilization, and management levels result in insufficient conversion of investment potential into economic development, which subsequently has an adverse impact on the host country's economic development and on cross-border investment operations.
Studies on the synergistic relationship between investment potential and economic development have highlighted its reference value for both the host country and transnational investors. At present, only a limited number of studies have been conducted regarding the relationship between investment potential and economic development [9][10][11][12][13]. In contrast, the relationships between urbanization, economic development, the ecological environment, and other subsystems have been widely investigated using the coupling degree model (CDM) and the coordination model (CM). Researchers have become inclined to use analytical techniques such as trend surface analysis, the Markov chain model, and the standard deviation ellipse model to study the temporal and spatial evolution characteristics of coordination relations [14]. More recently, the use of GIS technology has provided new paths for analysis in this field [15]. However, the internal driving mechanism of the coordination relationship and its differentiation has been investigated only sparingly, creating difficulties in providing refined and targeted support for policy- and decision-makers. While scholars have done a lot of work analyzing and comparing the strength of the coordination relationship between regions, they have largely neglected the effect of the lag attribute in the subsystems, which impedes the necessary adjustments to national macroeconomic policies. Thus, more attention ought to be directed towards the classification and determination of coordination relationships and the driving mechanism of spatial distribution differences.

African countries were selected as the research subject in this study, as shown in Figure 1. Since the start of the 21st century, Africa has gradually become a hotspot for global investments [16,17]. In 2016, foreign direct investment (FDI) inflows in Africa reached US$59.4 billion. FDI has become one of the essential catalysts driving African growth and development. The efficient reduction of investment risks and the adoption of appropriate investment policies have become principal concerns for African countries in the new era. With these in mind, this study is focused on answering three key questions: First, what are the investment potential and economic trends among African countries? Second, what is the level of the coordination relationship between investment potential and economic development among African countries? And third, what are the significant factors affecting the differentiation in coordination degrees among countries? In order to answer these research questions, we formulated the following specific objectives for this study: (1) to identify the dynamic evolution trends of investment potential and economic development; (2) to specify the temporal and spatial classification attributes of the coordination relationship between investment potential and economic development; and (3) to explore the driving mechanism of the spatio-temporal heterogeneity of the coordination relationship. In this study, we integrated the entropy weight method (EWM), the coupling coordination degree model (CCDM), exploratory spatial data analysis (ESDA), and other methods to examine the evolution characteristics of the coordination relationship between investment potential and economic growth in African countries.
Combined with the geographic detector (GD), geographically weighted regression (GWR), and other econometric methods, the driving mechanism of the coordination relationship between investment potential and economic development was analyzed. The findings and conclusions of this study can be used as a reference for transnational investors and can help support African nations in establishing a clear coordination relationship between investment potential and economic development.

The Index System and Data Sources
Investment potential and economic development are complex systems with multiple connotations. The investment potential system emphasizes the benefits of capital investment, which directly affects fixed capital accumulation, the choice of corporate layout, and the technological advancement of the host country. In order to quantify and analyze investment potential, indicators have to be selected that adequately reflect the country's enterprise operation costs, the level of security for investments, the return capacity of capital investment, and the overall investment environment. Economic development provides market and commercial space for investment behaviour and plays a crucial role in improving people's living standards, upgrading infrastructure, and optimizing the industrial structure. Crucial indicators therefore have to be selected to reflect the economic strength, market vitality, and industrial modernization level of the host country. Based on The Global Competitiveness Report 2018, The World Investment Report 2018, and reports from other international institutions, we developed an evaluation system for investment potential and economic development, which included 10 subsystems: resource endowment environment, economic development environment, market health environment, entrepreneurial friendly environment, infrastructure environment, open environment, labor and employment environment, basic development level, industrial construction level, and the people's living standards. The evaluation system comprised 33 indicators, as listed in Table 1. The data inputs were derived from the World Bank Database, the African Statistical Yearbook, the Doing Business Report, and The World Investment Report. Due to missing information for some countries (i.e., South Sudan, Somalia, Libya, and Western Sahara), this study focused on 51 African countries, from 2008 (the year of the global financial crisis) to 2016, with a 4-year time interval.

Methods
The EWM was used to analyze the dynamic evolution of investment potential and economic growth. The CCDM and ESDA were then applied to examine the spatio-temporal evolution of the coordination relationship, while the GD was used to investigate the main driving factors affecting the coordination relationship. Finally, a GWR model was used to analyze the influence of the principal driving factors in the various regions.

Entropy Weight Method
The entropy method is a technique for determining the weight of each index and is often used in calculating index scores. The indicators are first standardized by the min-max method: for forward (benefit) indexes,

u_ij = (x_ij − x_min)/(x_max − x_min),

and for backward (cost) indexes,

u_ij = (x_max − x_ij)/(x_max − x_min),

where u_ij is the standardized value, x_max is the maximum value, x_min is the minimum value, and x_ij is the original value of indicator j for country i. The evaluation index is then obtained as the weighted sum

M_i = Σ_j w_j u_ij,

where M_i is the evaluation index and w_j is the entropy weight of indicator j. For more details on the operational steps, refer to Li et al. and Li et al. [18,19].
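A compact implementation of these steps might look as follows (our own sketch; the toy data matrix and the choice of which column is a backward index are illustrative assumptions):

```python
import numpy as np

def entropy_weights(X, backward=()):
    """Entropy weight method: min-max standardize each indicator column
    (reversed for backward/cost indicators), weight each indicator by
    1 - entropy, and return the weights and the evaluation index M_i."""
    n, m = X.shape
    U = np.empty_like(X, dtype=float)
    for j in range(m):
        col = X[:, j].astype(float)
        span = col.max() - col.min() or 1.0          # guard against constant columns
        U[:, j] = (col.max() - col) / span if j in backward else (col - col.min()) / span
    P = (U + 1e-12) / (U + 1e-12).sum(axis=0)        # shares p_ij (shift avoids log 0)
    e = -(P * np.log(P)).sum(axis=0) / np.log(n)     # entropy of each indicator
    w = (1 - e) / (1 - e).sum()                      # entropy weights w_j
    return w, U @ w                                  # weights and index M_i = sum_j w_j u_ij

# Toy example: 5 countries x 3 indicators; the third column is a cost index.
X = np.array([[3.0, 40, 2], [5, 55, 1], [2, 35, 4], [8, 70, 3], [6, 60, 2]])
w, M = entropy_weights(X, backward=(2,))
print("weights:", np.round(w, 3), "| evaluation index:", np.round(M, 3))
```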
Coupling Coordination Degree Model
The CCDM evaluates the degree of correlation between two or more systems and is often used in research on urbanization, ecologization, population growth, and innovation capacity building. Based on the coupling and coordination mechanisms between investment potential and economic development, we used the CCDM to analyze the coordination relationship between investment potential and economic development in Africa, as illustrated in Figure 2. The formulas used are as follows:

C = 2√(U·Q)/(U + Q),  T = αU + βQ,  D = √(C·T),

where D is the degree of coordination between the investment potential and economic development indices; U is the investment potential index [20]; Q is the economic development level index; C is the coupling degree between investment potential and economic development; T is the comprehensive coordination index of the two systems; and α, β are the undetermined coefficients. While investment potential serves as an essential catalyst for economic growth, it is not the only driving factor for economic development, and in this study we used α = 0.4 and β = 0.6 [20,21].

Exploratory Spatial Data Analysis
Using Moran's I index, we calculated the overall spatial agglomeration of the coordination between investment potential and economic development in Africa, using the equation

I = Σ_i Σ_j W_ij (x_i − x̄)(x_j − x̄) / (S² Σ_i Σ_j W_ij),

where W_ij is an element of a spatial weight matrix indicating whether i and j are contiguous and S² is the variance of the attribute values. The range of Moran's I is [−1, 1], such that values greater than zero represent positive correlations and values lower than zero represent negative correlations. The Getis–Ord Gi* index was used to identify the hot spots and cold spots in the spatial distribution of the coordination degree between investment potential and economic development, using the formula

Gi*(d) = Σ_j W_ij(d) x_j / Σ_j x_j ;

when the standardized Gi*(d) is positive, area i indicates a hot spot; when it is negative, area i indicates a cold spot.

Geographic Detector
Developed by Wang et al., the geographic detector is operating software for identifying driving factors [22]. It mainly includes factor detection, interaction detection, risk area detection, and ecological detection. The factor detector can reveal the influence of a driving factor on the coordination degree between investment potential and economic development. The formula is

T_D = 1 − (Σ_h N_h σ_h²)/(N σ²),

where D is the driving factor of the coordination between investment potential and economic development; T_D is the explanatory power of the driving factor on the coordination relationship between the two systems; N_h is the number of units of type h; N is the number of all countries; and σ_h² and σ² are the variances of the D values for class h and for all countries, respectively.

Geographically Weighted Regression Model
The spatial econometric model fully accounts for the autocorrelation of geographic elements and can effectively measure the spatial non-stationarity of the driving factors [23]. Based on the coordination degree and the corresponding data on investment potential and economic development from 2008 to 2016, we developed a regression model to analyze the driving factors that led to the spatial heterogeneity of the coordination relationship. The model established is as follows:

y_i = β₀(u_i, v_i) + Σ_j β_j(u_i, v_i) x_ij + ε_i,

where y_i is the coordination degree index; x_ij are the explanatory variables; (u_i, v_i) are the geographical position coordinates; β_j(u_i, v_i) is the corresponding coefficient function of geographic position for each region; and ε_i is the residual.
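The pipeline from subsystem indices to coordination degree can be sketched as follows (our own illustration; the two-system forms of C, T and D given above are assumed, with α = 0.4 and β = 0.6 as in the text, and the sample U, Q values are the 2008/2016 average figures quoted in the Results below):

```python
import numpy as np

def coordination_degree(U, Q, alpha=0.4, beta=0.6):
    """Coupling coordination degree for two subsystem indices, using the
    standard two-system forms assumed above:
    C = 2*sqrt(U*Q)/(U+Q), T = alpha*U + beta*Q, D = sqrt(C*T)."""
    U, Q = np.asarray(U, dtype=float), np.asarray(Q, dtype=float)
    C = 2.0 * np.sqrt(U * Q) / (U + Q)
    T = alpha * U + beta * Q
    return np.sqrt(C * T)

U = np.array([0.259, 0.252])   # investment potential averages, 2008 and 2016
Q = np.array([0.116, 0.151])   # economic development averages, 2008 and 2016
D = coordination_degree(U, Q)
print("coordination degree D:", np.round(D, 3))
# With the cut-offs used below, D < 0.2 is uncoordinated, 0.2 <= D < 0.4 is
# barely coordinated, and D >= 0.4 is coordinated development.
```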
Table 2 and Figure 3 summarize the progression of investment potential and economic growth among African countries. From 2008 to 2016, the average investment potential index decreased from 0.259 to 0.252, representing a decline of about 2.7%, while its corresponding coefficient of variation (CV) decreased from 0.437 to 0.386. The evaluation scores (ranging from 0 to 1), which reflect the investment potential of African countries, showed low values. At the same time, the investment potential was moderately stable, and the relative differences in investment potential among countries gradually contracted. The average economic development index increased from 0.116 to 0.151, while its corresponding CV decreased from 1.152 to 1.025. This suggests that the overall economic level of African economies has steadily increased. However, growth among African countries has been highly heterogeneous, and the gap between economies has become more evident. As shown in Figure 3, the exponential distribution curve of the investment potential is highly comparable with that of economic development. South Africa, Egypt, Seychelles, Mauritius, and Botswana were among the top countries in terms of both investment potential and economic growth. This indicates that the spatial distributions of the investment potential index and the economic development index are strongly related, which supports the coupling coordination mechanism between investment potential and economic growth. Note: AVG is the average of the evaluation system index; CV (coefficient of variation) is used to measure the extent of index differences and can be computed using the equation CV = SD/AVG, where SD is the standard deviation, which measures the degree of dispersion of the data set.

Classification of the Coordinated Relationship between Investment Potential and Economic Development
The CCDM was used to calculate the coordination degree between investment potential and economic development. Using 0.2 and 0.4 as cut-off points, the calculated values were divided into three categories: (1) uncoordinated development, (2) barely coordinated development, and (3) coordinated development. Using the lagging condition of the investment potential index and the economic development index, the values were further subdivided into three groups: relative lag of economic development (U(x) − Q(x) > 0.1), relative lag of investment potential (Q(x) − U(x) > 0.1), and relative balance between investment potential and economic development (0 < |Q(x) − U(x)| ≤ 0.1). We found that in 2008 a high percentage of countries (39.22%) were categorized as having uncoordinated development; this value steadily declined over the years and stood at 21.57% in 2016. This suggests that a generally coordinated relationship among African economies has started to develop. The barely coordinated category was the dominant grouping and increased further over time: for 2008, 2012 and 2016, the percentages of countries in this category were 50.98%, 52.94%, and 60.78%, respectively. This indicates that a considerable number of countries in Africa continued to have a weak coordinated relationship between investment potential and economic development (see Table 3). Note: RLOED, relative lag of economic development; RLOIP, relative lag of investment potential; RB, relative balance; "①~⑨" have the same meaning as in Figure 4.
As shown in Figure 4, countries with an uncoordinated development level where economic growth trailed investment potential included Mali, Niger, Guinea and other Western African countries, as well as Madagascar. Since these countries already have substantial investment attractiveness, they ought to focus on attracting foreign capital into primary industrial sectors, by highlighting the high probability of quick returns on investment and excellent economic benefits, and on enhancing the conversion of investment potential into economic growth in the future. Countries categorized as barely coordinated were dominated by those with a relative lag in economic development. This country type was concentrated in Eastern and Western Africa (e.g., Ethiopia, Uganda, Mauritania, Burkina Faso), with some sporadic distribution in Southern Africa. Among the countries with coordinated development, those with balanced investment potential and economic development increased in number in 2016, including Egypt, Nigeria, and Botswana. These countries are capitalizing on the economic advantages of their investment potential. In the future, these countries ought to direct foreign investment towards the industrial-technological innovation system, cultivate new growth points with scientific and technological innovation at the core, and continue to promote the positive role of foreign investment in economic infrastructure. Countries with coordinated development and a relative lag of investment potential expanded in number in 2016 and included Algeria, Equatorial Guinea, Mauritius, and Seychelles. The recommended path for these countries to achieve sustainable economic growth involves enhancing the guiding role of the government's financial resources, accelerating upgrades of infrastructure facilities, and creating a conducive, business-friendly environment. Overall, in each category of coordination relationship, the dominant subclass consisted of countries with a relative lag in economic development. Due to stark differences in lag determination between systems, different country types should adopt specific strategies for opening up and attracting investment.

Analysis of the Spatio-Temporal Pattern of the Coordinated Relationship between Investment Potential and Economic Development
For 2008, 2012, and 2016, the corresponding Moran's I indexes were 0.360, 0.232, and 0.237, respectively. The z-test values were 4.668, 3.081, and 3.152, and the results were statistically significant (p < 0.01). The results indicate that the spatial distribution of the coordination degree between investment potential and economic development is characterized by spatial agglomeration as a whole. The Getis–Ord Gi* indexes were calculated using ArcGIS software, and the obtained scores were divided into four categories using the natural breaks classification method: cold spot, sub-cold spot, sub-hot spot, and hot spot (see Figure 5). In 2008, the hot spots formed one agglomeration in Southern Africa (e.g., South Africa, Botswana, and Namibia) and another in Central Africa (e.g., the Republic of Congo and Cameroon). For cold spots, an agglomeration was formed by countries in Western Africa, including Mali, Senegal, Guinea, and others. In 2012, the hot spot agglomeration zone in Central Africa contracted, and Chad became an isolated cold spot with no contiguous country of the same type. By 2016, the pattern of a dual-core hot spot group had been broken.
With the overall decline in coordination degree, the hot spot region in Central Africa had vanished entirely and had been converted into a sub-hot spot area. The Southern African region became the only hot spot agglomeration on the continent. Algeria, Tunisia, and Egypt in Northern Africa remained hot spots from 2008 to 2016. Meanwhile, the cold spot region continued to spread in Central Africa, forming a spatial pattern of "one core and one belt". The "one core" area was composed of several Western African countries (e.g., Mali, Guinea, and Côte d'Ivoire), while the "one belt" comprised a number of Central and Eastern African countries (e.g., Chad, the Central African Republic, the Democratic Republic of the Congo, and Tanzania). Overall, the countries in the hot spot areas maintained the characteristics of centralized distribution with fragmented local distribution. The cold spot agglomeration gradually changed into a strip-shaped distribution, while the sub-hot spot and sub-cold spot areas were always distributed around the hot spots and cold spots, indicating a strong dependence on the shifts and evolution of the hot spot and cold spot agglomerations.

The Analysis of Driving Factors Based on the Geographic Detector
Without a doubt, the geographical location, infrastructure development, and macro-regional economic integration between countries can have a significant impact on the coordination relationship between investment potential and economic development, but obtaining commensurate indicators quantifying these parameters is highly problematic. The following parameters were chosen as indicators for analyzing the driving factors: economic base level, industrial construction level, degree of urbanization, information support level, level of business friendliness, residents' living standards, and government support. We discretized the various indicators into five categories using the natural breaks classification method. The factor detection module was then used to analyze the main driving factors influencing the coordination relationship, for the purpose of dimensionality reduction while resolving possible multicollinearity problems. The results are shown in Table 4. Our analysis showed that the top five indicators included the economic base, residents' living standards, industrial construction level, use of modern information technology, and level of business friendliness. The degree of urbanization and government support were shown to be weak parameters for the coordination relationship between investment potential and economic development in African countries. Over the study period, we found that the influence of the indicators changed significantly. In 2008, residents' living standards (0.714) was the leading variable, followed by the economic base (0.657) and the industrial construction level (0.486). In 2012, residents' living standards (0.772) remained the leading parameter, followed by the economic base level (0.768), which also increased substantially in interpretative strength; the other top variables declined to varying degrees. In 2016, the economic base level (0.812) overtook residents' living standards (0.732) as the leading indicator. The information support level (0.403) increased its interpretative strength, overtaking the industrial construction level (0.390) as the third top indicator.
Geographically Weighted Regression Analysis of Driving Factors

To further understand the spatial dimension of the indicators, we constructed a geographically weighted regression model using the five leading driving factors as independent variables and the coordination degrees for 2008, 2012, and 2016 as dependent variables. The GWR tool in ArcGIS software was employed for the regression model. As shown in Table 5, the regression model had R² between 0.712 and 0.723 and adjusted R² between 0.675 and 0.692, indicating that the coordination degree could be reasonably explained by the five independent variables. This confirms that the main driving factors obtained through geo-detection have a strong capacity to gauge the distribution of the coordination degree. Visualizing the resulting regression coefficients, as shown in Figure 6, we analyzed the spatial heterogeneity of the indicators. Based on the spatial distribution of the high-value regions of the regression coefficients, we divided the driving mechanisms of the coordination relationship into five groups: economic base-driven, industry-driven, information application-driven, business convenience-driven, and consumer market-driven.

1. The regression coefficient for the economic base decreased gradually but remained one of the most influential driving factors. In 2008, the economic base had a significant impact on the coordination degree for Ethiopia, Sudan, Egypt, Tunisia, Eritrea, and other countries in the northeast. From 2012 to 2016, the high-value distribution area shifted gradually to Southeastern Africa, and Madagascar became an economic base-driven country.

2. The spatial distribution of the regression coefficient for industrial construction showed significant changes. In 2008, the industrial construction level was a vital driving force for improving coordination in Algeria, Morocco, Tunisia, Mauritania, and Senegal. In 2012, many countries in Southern Africa became industry-driven. In 2016, the high-value area of the regression coefficient contracted to the north; in the end, only in Algeria, Morocco, and Tunisia was the coordination degree strongly driven by the industrial construction level.

3. The change in the regression coefficient for information support level can be characterized as an east-west configuration: the high-value areas are found on the eastern side of the continent, and the regression coefficient decreases gradually moving westwards. Ethiopia and Madagascar were found to be significantly affected by the information support level and are categorized as information application-driven economies.

4. From 2008 to 2016, the regression coefficients for business friendliness showed the morphological characteristic of decreasing from northwest to southeast; the areas with the highest coefficients were found in northwest Africa (e.g., Morocco, Mauritania, Senegal, and Cape Verde). Business friendliness is an essential driving factor for improving the coordination relations in these countries, which can be categorized as the business convenience-driven type. At the same time, Southeast African countries must actively pursue more spillover benefits from a business-conducive environment.

5. Over time, the regression coefficient for residents' living standards changed from negative to positive, and the positive effect of this variable substantially increased.
High regression coefficient areas for this variable gradually extended from Liberia, Côte d'Ivoire, Guinea, Guinea-Bissau, and Senegal into Morocco, Cape Verde, and Mauritania. Changes in residents' consumption capability, product demand level, and the consumer market play a vital role in promoting the coordination relationship, particularly in Africa's northwestern region, which can be considered consumer market-driven.

Evaluation of Investment Potential and Economic Development

The investment potential and economic development level of Africa were measured using the entropy weight method, which shows that the average value of investment potential in 2016 was only 0.252, while the average value of economic development was only 0.151. This suggests that the current investment potential and economic development in Africa are still at low levels, consistent with findings from previous research [24,25]. For most African countries, guiding capital flows into infrastructure construction and social services and promoting sustained development in technical training, science and engineering education, and technology research should be considered urgent national concerns. Countries with smaller economies, such as Seychelles, Mauritius, and Botswana, have relatively high rankings in the evaluation index comparison, contrary to the findings of Xie et al. and Jiang et al. [25,26]. This is mainly because the evaluation system used in this study employed a large number of mean and ratio indicators, such as per capita cultivated land area and per capita GNI (gross national income). The rankings in the subsystem evaluation index for many countries are comparable with those from international reports such as the Doing Business Report and the World Investment Report (in 2016, Seychelles, Mauritius, and Botswana ranked 99th, 20th, and 86th, respectively, in the World Investment Report). This supports the feasibility and rationale of the indicator selection approach.

The Evolution of the Coordinated Relationship between Investment Potential and Economic Development

Based on the coupling and coordination mechanisms between investment potential and economic development, we used the CCDM to analyze the coordination relationship between investment potential and economic development in Africa, extending the field of coordination relationship research [12,27,28]. We divided the coordination relationship into three levels based on the coordination degree of each country, from coordinated development to uncoordinated development. As the value of the coordination degree decreases, so does the degree of mutual promotion between investment potential and economic development. This study has shown that barely coordinated development and uncoordinated development are the main forms of coordination relationship in Africa. Based on the analysis of the evaluation indicators, this may be the result of inefficient government policies and management, shortages in technical skills and required competencies, and the high import dependence of many economies in Africa, which have hindered overall improvements in the business environment and economic development. This finding on the state of the coordination relationships in African countries helps explain the current lag in investment potential and economic development found in much of the region.
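For reference, the two computational building blocks behind these results, entropy weighting of each subsystem's indicator matrix and the coupling coordination degree model (CCDM), can be sketched as follows. The equal subsystem weights (alpha = beta = 0.5) and the min-max normalization are assumptions; the paper does not restate its exact choices here.

import numpy as np

def entropy_weights(X: np.ndarray) -> np.ndarray:
    """Entropy weights for an (n_countries, n_indicators) matrix of
    larger-is-better indicators."""
    Xn = (X - X.min(0)) / (X.max(0) - X.min(0) + 1e-12)  # min-max scaling
    P = Xn / (Xn.sum(0) + 1e-12)
    e = -(P * np.log(P + 1e-12)).sum(0) / np.log(len(X))  # entropy per indicator
    return (1.0 - e) / (1.0 - e).sum()

def coordination_degree(u1, u2, alpha=0.5, beta=0.5):
    """CCDM: coupling degree C and composite index T give coordination degree D."""
    C = 2.0 * np.sqrt(u1 * u2) / (u1 + u2)
    T = alpha * u1 + beta * u2
    return np.sqrt(C * T)

# u1 = investment-potential score and u2 = economic-development score of each
# country, each obtained as X @ entropy_weights(X) on its own indicator matrix;
# D is then binned into uncoordinated / barely coordinated / coordinated levels.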
The finding also supports the feasibility and scientific validity of using the CCDM to analyze the coordination relationship between investment potential and economic development. The analysis of the subsystems shows that the dominant coordination relationship subtype was a relative lag in economic development. This could be related to factors such as instability in local politics, the complex dynamics of international relations, and volatility in international exchange rate markets. Individual countries would need to adjust the equilibrium between investment potential and economic development to match the complexities of the international investment environment and satisfy the demands of economic development. The spatio-temporal pattern analysis suggests that the coordination degree has spatial agglomeration characteristics. This suggests that countries with high coordination degrees exert driving effects that can improve coordination relationships in the surrounding areas [29]. Finally, based on the analysis of the coordination relationship classification, adopting policies that effectively attract investment is crucial for many African countries in promoting the coordinated development of the national economy and the investment environment.

The Driving Mechanism of the Coordinated Relationship between Investment Potential and Economic Development

When using the geographic detector to measure the driving factors of the coordination relationship, we found that economic base level, residents' living standards, information support level, industrial construction level, and business friendliness are the leading factors influencing the coordination relationship, sorted by the value of their explanatory power. This suggests that in order to improve the coordination degree between investment potential and economic development, African countries would need to strengthen their economic base, use the "national wealth to benefit the masses", support the development of the industrial system, and promote the democratization of Internet use. In the geographically weighted regression analysis, we found significant spatial heterogeneity in the distribution of indicator influence. The regression coefficients decreased gradually from high-value areas into low-value regions, indicating that these parameters are shaped by spatial-neighbor effects and a distance attenuation mechanism in forming different driving mechanisms. Also, driving mechanisms in adjacent areas are highly similar. The driving factor analysis combining the geographic detector and the geographically weighted regression method provides more advantages in screening driving factors and detecting their spatial heterogeneity compared with previous approaches [30]. The indicator rankings generated by the different methods showed substantial similarity; for example, economic base level and residents' living standards were the most important parameters found using both geographically weighted regression and the geographic detector. This supports the feasibility and rationale of combining the methods to obtain a more comprehensive understanding of the driving mechanisms affecting investment potential and economic development.

Limitations of the Study

This study has some potential shortcomings. First, although the evaluation index system included a number of indicators, this does not guarantee that all significant variables have been considered in the indicator list.
Some indicators of development, such as inflation, environmental phenomena, and poverty rates, were not considered. Likewise, endogeneity between the investment potential and economic development evaluation systems could not be completely avoided. Second, because of the limited research conducted to date on the relationship between investment potential and economic development, our approach inevitably involved some subjectivity, from methodological choices to the analysis framework. Third, the driving effects of geographical location, infrastructure development, and economic integration between countries were not included in this study and should be explored in future work. The research scope can also be extended to reflect differences in the coordination relationship between investment potential and economic development within countries.

Conclusions

This study made use of data from 51 African countries, combining the entropy weight method, the coupling coordination degree model, exploratory spatial data analysis, the geographic detector, and the geographically weighted regression model to analyze the evolution and driving mechanisms of the coordination relationship between investment potential and economic development. The following conclusions are drawn.

1. We found that the spatial distributions of high-level countries have strong similarities in terms of investment potential and economic development. The overall investment potential of African countries was found to be weak, but internal differences in investment potential have gradually narrowed. The overall economic level is rising steadily, but the "economic gap" between countries is still very evident.

2. The coordinated relationship between investment potential and economic development can be divided into three categories: uncoordinated development, barely coordinated development, and coordinated development. Uncoordinated development and barely coordinated development were the most dominant types. By determining which subsystem lags, countries can adopt tailored strategies to attract foreign investment. The coordinated relationship between investment potential and economic development in African countries showed attributes of spatial agglomeration. Hot spot areas were characterized by condensed and continuous distributions overall, with fragmented distributions locally; a hot spot agglomeration was found in Southern Africa. For cold spot areas, a spatial distribution pattern of "one core and one belt" formed, with some Western African countries becoming part of the core area and some Central and Eastern African countries constituting the belt.

3. Economic base level, residents' living standards, industrial construction level, information support level, and business friendliness were the leading indicators in the relationship between investment potential and economic development. The distribution of regression coefficients showed distinct spatial heterogeneity. According to the distribution of regression coefficients across countries, the driving mechanism of the coordination relationship can be divided into five types: economic base-driven, industry-driven, information application-driven, business convenience-driven, and consumer market-driven.
Although this study has some shortcomings, such as constraints in the evaluation system, subjectivity in the methodological choices, and the absence of some driving-factor parameters, it serves as an essential reference for African countries in developing tailored strategies and policies to effectively attract inflows of foreign investment. In the context of economic globalization, African countries must actively optimize their investment potential, create a conducive business environment, and guide foreign investment towards areas suited to the actual condition of their natural resource endowments, industrial advantages, industrial layout, and foreign trade orientation. In particular, African countries must focus on improving levels of education and social security in order to make full use of Africa's huge demographic dividend and rapid urbanization in attracting foreign investment. Similarly, countries can also prioritize improving the utilization efficiency of foreign capital. Governments should implement effective domestic macroeconomic policies (e.g., low-inflation monetary policies, low-debt-growth fiscal policies) and export-oriented trade strategies that can be competitive in the global economy. Strengthening economic cooperation between countries and avoiding the convergence of industrial structures are crucial to creating a conducive environment for market competition and improving the level of foreign capital utilization. These changes can provide the continued external support needed for African integration and sustainable development.
Effects of White Space in Learning via the Web

This study measured the effect of specific white space features on learning from instructional Web materials. The study also measured learners' beliefs regarding Web-based instruction. Prior research indicated that small changes in the handling of presentation elements can affect learning. Achievement results from this study indicated that in on-line materials, when content and overall structure are sound, minor differences regarding table borders and vertical spacing in text do not hinder learning. Beliefs regarding Web-based instruction and instructors who use it did not differ significantly between treatment groups. Implications of the study and cautions regarding generalizing from the results are discussed.

Web-based and Web-supported instructional development is increasing rapidly. The Web promises access to immense amounts of information quickly and easily, and offers exciting options to teachers and learners, both at a distance and in conventional settings. Instructional Web sites are available on nearly any topic for any age. Unfortunately, many instructional sites include elements of visual, structural, or content design that can hinder learning. Sites that are poorly designed, visually or structurally, may diminish or negate the Web's potential benefits. Previous research in the realm of learning from computer-mediated visual presentations indicates that even where structure and content are sound, decisions regarding visual design can affect subjects' ability to learn from the material presented [1]. Two constructs related to this framework are Presentation Interference and Cognitive Overhead. Presentation Interference has been defined as any presentation-related factor that distracts the learner from the message content [2]. Examples include inappropriate color choices, inconsistent or incompatible screen transitions, inconsistent navigation devices, spelling and grammatical errors, etc. Cognitive Overhead [3] refers to the negative influence that distracting factors such as presentation interference have on learning. The premise is that increased cognitive overhead makes it more challenging and difficult for learners to focus on and process vital content. Distractions caused by relatively insignificant elements of an instructional presentation, for example, inconsistent placement of navigation icons, make learning more difficult because learners must reorient themselves to structural and screen elements when they should be attending to content. As with all other media, Web developers need to know that cluttered documents communicate less [4,5] and that design elements should always be planned [6]. A rich body of research has been conducted in the area of learning from instructional text with regard to legibility. For example, to help learners understand the structure of the content, developers should use white space differently around various levels of headings [7][8][9]. To differentiate paragraphs, vertical space is more effective than indention [8]. White space is an important component of readability that is generally considered to make text easier to read [10,11]. White space is any part of the page or screen that does not have text or graphical elements [12]. White space has at least six forms: 1) the margins, 2) the area surrounding the headings and between headings and corresponding paragraphs, 3) the space at the end of lines, 4) the leading, 5) the tracking, and 6) the space around images and graphic elements.
Appropriate use of white space forms provides for powerful visual design [9]. Among the benefits of white space are the following: appropriate use of white space facilitates contrast [13], simplicity, and balance in a document. White space can create tension between two design elements [14]. Empty space provides resting points within a page that may facilitate deeper processing. In printed instructional text, developers frequently have used boxes to extend the reader's comprehension of the main ideas, but research has not reported on their effectiveness [15]. Applied in isolation, generic guidelines can be of limited usefulness [2]. As with all other design choices, context, including topic, audience, setting, and medium, is important in determining appropriateness. Above all, the appropriateness of white space must be considered holistically, in relation to all other design choices and elements on or within the same page or document. As a design element [16,17], its use should be as thoughtful as any other. Recent research in this area utilizing modern media is minimal, and many text-specific guidelines lack true experimental and treatment designs, particularly with comparisons of actual practice versus professional standards (as opposed to two treatments that are merely different from each other). There is a need to re-examine the legibility of text, specifically spatial arrangement, in the context of instructional Web pages. Although both deal with large amounts of text, the context of the Web is very different from the context of a printed page, or even from Computer-Based Instruction (CBI) screens. For example, in a printed page white space generally has a positive connotation, although its use may be limited by the need to conform to specific numbers of pages. In a CBI program, the amount of text on a single screen can be greatly reduced and the amount of white space increased, because it costs virtually nothing to add additional screens, thereby making the information on any single screen appear easier to access. However, what would appear to be an optimal amount of white space in a CBI program would, in the context of a Web page, be considered wasteful and bad practice, because it would create the need for scrolling to get to all the information. Nielsen discusses the need to limit the amount of white space to just what is necessary for ease of usability [18]. Further, the presence of hyperlinks in Web pages allows users to think about, use, and pursue information differently than with a static printed page. This may result in differences regarding learners' perceptions, motivation, and effort when using the Web, thereby affecting their abilities to learn using Web-based text. The topic is important for developers interested in transferring curricula to the Web and in understanding how to develop on-line multimedia instruction that is visually sound and that should promote better learning. Research in the context of computer-generated presentations has indicated that small changes in presentation factors can affect achievement and beliefs [1]. In that study, an instructional presentation that was intentionally free of presentation interference resulted in higher achievement scores than did presentations with interference.
Further, although subjects who learned from visually well-designed presentations and those who learned from presentations containing presentation interference both reported strong beliefs regarding the beneficial nature of computer-generated presentations, including that they believed they had "learned a lot" from the treatment, the treatment groups viewing presentations containing interference learned significantly less as measured by the achievement test. The present study examined similar hypotheses in the context of information delivered via the Web. The purpose of this study was to determine whether small changes in white space-related factors (e.g., structural white space, visible table borders) affected learning from Web-based instructional materials as measured by achievement and beliefs. In particular, it measured the effects of white space in an intentionally presentation interference-free instructional Web presentation vs. similar lessons with inappropriate handling of white space.

Hypothesis 1: Subjects receiving treatment one, with white space and no visible table borders, would score higher on the achievement test than would subjects in treatments two or three, without white space structural cues and with visible table borders.

Hypothesis 2: Subjects viewing treatment one, the interference-free presentation, would report more positive beliefs regarding 1) the treatment site, 2) learning from the Web in general, and 3) instructors who include Internet use in their classes, than would those viewing presentations containing presentation interference.

METHOD

Subjects

Subjects were 47 undergraduates enrolled in first-year multimedia design and production courses at a small, public, four-year university in the southwestern United States during the spring 2000 semester. Participation was voluntary. Subjects were randomly assigned to three treatment groups. Following the treatments, subjects completed an immediate achievement test and belief questionnaire.

Treatments

A Web site presentation was developed and presented via Windows-based computers with high-resolution monitors and Netscape Communicator. The treatments presented types, incidence, and identification of skin cancer, and steps to reduce risk (adapted from [1]). The "control" version was designed according to known screen design research findings and prescriptive guidelines, was free of intentional interference, and included a great deal of white space, without visible tables, lines, or borders. The second version of the presentation was identical to the control version with one exception: the borders of the tables used to organize the information were visible. The result was borders very close to the text, without the "pixel padding" that would allow space between border and text. The third version was identical to the second with one exception: the structural white space was eliminated, resulting in more condensed text and, in some instances, text no longer directly beside related graphics. Across treatment groups, all factors remained constant except the handling of white space and visible table borders, on the premise that small changes in a page may affect learning. This premise was based on results from previous research in which small changes in screen design and presentation factors between treatments did result in significant differences across treatment groups.

Instrument

A two-part instrument was used to collect data.
The first part was a 25-item "short answer" and "fill in the blank" achievement test regarding the information presented. The second portion was a series of 12 seven-point bipolar probability items regarding subjects' beliefs. Questions 1-5 of the belief questionnaire referred to students' beliefs regarding the credibility of the site and how much they learned from it. Questions 7-9 referred to subjects' beliefs regarding Web-based information and instructors who use it. Question 10 referred to the subjects' own experiences with Web development, and questions 6 and 11-12 referred to subjects' beliefs about their behavior before and after visiting the site. Content validity was assured in the following manner: both test and treatments were developed using information and photographs distributed by the American Cancer Society and the Mayo Clinic. Posttest items were parallel to the instruction. Both the test and the content were evaluated by the researchers, two software development teachers, and one corporate instructional designer. Using a split-half procedure, the reliability coefficient for the posttest was 0.82.

Procedures

An experimental posttest-only control group design was used. Randomization of subjects into treatment groups was used to assure absence of bias. The independent variable was treatment. The dependent variables were 1) the percentage of correct responses on the follow-up test and 2) responses to the belief questionnaire. Data were collected during a four-week period to increase the number of participants and to more closely mimic students' uses of the Internet for learning. Subjects chose their own participation times from a variety of times offered, without the pressure of a contrived classroom atmosphere, and were randomly assigned to one of three treatment groups. Study participants learned from a Web-based lesson, then completed an achievement test and belief questionnaire. The treatment took subjects about 15 minutes to complete. All subjects completed the questionnaires immediately following the treatment and were allowed as much time to complete the posttest and questionnaire as they desired. Completing the questionnaire required 10-20 minutes.

Data Analysis

The number of points correctly answered on the posttest was converted to a percentage of the items possible. Data from the posttest were analyzed using one-way analysis of variance (ANOVA). Significance was set at the .05 level. SPSS was used to analyze the data. Responses to items on the belief questionnaire also were analyzed using ANOVA, with significance set at the .05 level.

RESULTS

The study considered two hypotheses. The first predicted that subjects receiving treatment one, with white space and no table borders, would score higher on the achievement test and that subjects in treatment three, without white space structural cues and with visible table borders, would score lower than subjects in both treatments one and two. The second hypothesis examined responses to a 10-item belief questionnaire regarding the treatment and the use of the Web for instruction in general. Analysis of variance was used to analyze data regarding each of the questions.

Achievement

All subjects completed a 25-item achievement test immediately following the treatment. Mean achievement scores for treatment groups 1, 2, and 3 were 77, 80, and 78, respectively (Table 1). Analysis of variance showed no significant difference between treatments, F(2, 44) = 0.231, p = .795 (Table 2).
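The two statistics reported for the posttest can be reproduced with standard tools (the study itself used SPSS). A minimal sketch with hypothetical placeholder data, since the raw scores are not published:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Split-half reliability with the Spearman-Brown correction: correlate scores
# on odd- vs. even-numbered items, then step up to full test length.
items = rng.binomial(1, 0.7, size=(47, 25))           # hypothetical item scores
odd, even = items[:, 0::2].sum(1), items[:, 1::2].sum(1)
r_half = np.corrcoef(odd, even)[0, 1]
reliability = 2 * r_half / (1 + r_half)               # cf. the reported 0.82

# One-way ANOVA on achievement percentages across the three treatment groups
# (47 subjects split 16/16/15 gives the reported df of (2, 44)).
g1 = rng.normal(77, 10, 16)                           # hypothetical % correct
g2 = rng.normal(80, 10, 16)
g3 = rng.normal(78, 10, 15)
F, p = stats.f_oneway(g1, g2, g3)
print(f"reliability = {reliability:.2f}, F(2, 44) = {F:.3f}, p = {p:.3f}")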
Because no significant difference was found between treatment groups, no further analyses were conducted for the achievement test.

Beliefs

All subjects completed a 12-item belief questionnaire. Responses were collected via a seven-point bipolar probability scale. For items 1-10, 7 = "Strongly Agree" and 1 = "Strongly Disagree;" for item 11, 7 = "Extremely Concerned" and 1 = "Not at all Concerned;" and for item 12, 7 = "Extremely Often" and 1 = "Not at All." There were no significant differences between treatment groups on any of the belief items (Table 3). Subjects in all groups thought the site was credible, with mean responses for item 1 ranging from 5.5-6.1 out of 7. Responses across all groups were also positive for item 2, "I learned a lot from this site" (5.1-5.6); item 3, "Learning from this site was easy" (5.5-6.1); and item 5, "I thought the site had about the right amount of information" (5.1-5.7). Responses to question 4, "Learning from this site required concentration," were neutral (4.0-4.7). For items 6, "I expect some of my behaviors to change as a result of viewing this presentation," and 8, "Instructors who use the Internet are usually better than those who do not," responses were uniformly neutral across treatment groups. Responses to item 7, "In general, Web-based information is very beneficial," ranged from 5.9-6.3. For item 9, "I trust and respect teachers who include Internet use in their classes more than I trust and respect those who do not," response means ranged from 3.8-3.9. Responses to item 10, "I have had lots of experience with Web production," ranged from 4.1-4.9. Responses to items 11 and 12 also did not vary significantly by treatment. Items 11 and 12 did not deal directly with presentation-related issues but focused on individuals' concerns and habits regarding their own health and skin care (item 12 asked how often subjects applied sunscreen before visiting the site and how often they expected to apply it afterward).

DISCUSSION

The present study measured the effect of specific white space features on learning from instructional Web materials. The study also measured learners' beliefs regarding Web-based instruction. The treatments used in this study were carefully developed to provide sound and accurate information. The overall structure and content were developed carefully and were consistent across all three treatments. The differences between treatments were limited to small differences in a visual component, specifically how white space was handled. Results indicated that in this case, in which the content and overall structure were sound, minor differences with regard to table borders and vertical spacing did not hinder learning. Hannafin and Hooper [19] have reported that when a learning task is perceived as more difficult, learners may compensate by applying more effort. Reading on-line text certainly demands more concentration than reading printed text. Internet users may compensate by trying harder or concentrating more when reading on-line text. This could work to minimize differences between treatment groups. What seems normal in a given environment also is important. In printed reading, condensed text without structural cues has been shown to decrease comprehension [16]. In the context of printed materials such as a book, poorly presented text as in treatment three would be very unusual and, therefore, noticeable and distracting.
However, poor text presentation, including condensed and unstructured text, is quite common in the on-line environment. Therefore, in the on-line context, the poor handling of text in treatment three may not have been very noticeable or distracting. Scrolling may have contributed to a decrease in the positive impact of the control treatment, and the lack of scrolling may have contributed to an increase in the appeal of treatment three. Scrolling is sometimes necessary in on-line communication, although most credible Internet development guidelines suggest scrolling should be minimized as much as possible [18,20,21]. While the overall appearance of the control treatment is clearly more appealing and consistent with screen design principles, some scrolling is required. In contrast, treatment three, which is poorest in terms of overall visual appeal and compliance with screen design guidelines, requires little or no scrolling. Another possible explanation for the finding of no significant difference is learner control, a powerful feature of Web-based instruction. Subjects had unlimited time and were allowed to read the materials as much as they wanted, moving on to more information or returning to earlier information, prior to the posttest. Personal control is an important motivating factor [22][23][24]. A high degree of learner control is the normal state in on-line learning, as opposed to the low or absent learner control of preprogrammed computer-generated presentations. Achievement results may have been different if the treatments had precluded learner control, for example, by controlling the amount of time each page was displayed. However, to do so would have been inauthentic, given the nature of the medium. An important attribute of the Web as a medium for learning is that learners generally can control both the amount of time they spend with each screen and whether or not they return to specific screens for review. For many learners, the Web also may have more intrinsic appeal than ordinary paper-based instructional materials. Regarding items on the belief questionnaire, the fact that no significant differences were found is not surprising, given the results of the achievement test. On average, subjects in all groups did well on the achievement test, and the content and navigation structures were identical for all treatments. While it may be tempting to generalize widely and to interpret the results as an indication that white space structural cues and visually appealing layout are not important components of learning from the Web, such is not necessarily the case. All other aspects of the treatments used in the study were carefully developed: content was well structured, clear and concise, and highly credible; text was presented using a highly readable font, size, and contrast; navigation was clear and logical; the graphics used were high quality, meaningful, and relevant. Generalization should be limited to situations in which other components are carefully designed and developed. Results do not necessarily indicate that white space handling does not matter, but rather that small differences in white space handling may not be critical when all other aspects are handled well. Follow-up studies should incorporate time-on-task measures, as well as qualitative techniques such as think-aloud sessions, observations, open-ended questions, and interviews, to provide deeper understanding of how learners experienced the treatments.
Future research should continue to explore differences in the impact of specific visual aspects of Web instruction. The on-line environment has its own norms, quirks, and rules. For example, underlined and italicized text have important and common uses in printed text. However, they should generally be avoided in on-line text, as their uses have different meanings or impact on-line. On-line, underlined text indicates a clickable link, and italicized text is difficult to read (even more so via the Web than in print). Suggested questions for future related research include the following. Do specific text and layout conventions that research has shown to impact learning in printed instructional materials have similar results when applied to on-line materials? Under what conditions do text and white space-related changes hinder or not hinder learning? What are the relationships between intrinsic motivation and cognitive overhead in Web-based instruction? What are the relationships between cognitive overhead and learner control in Web-based instruction? Research that addresses such questions should help define more effective uses of the Internet for instructional purposes.
Assessment of Stiffness and Strength Parameters for the Soft Soil Model of Clays of Cameroon

This paper focuses on the advanced modelling of soft Cameroon clays for the global prediction of the behaviour of geotechnical structures. A comprehensive set of experimental data on Cameroon subsoils from oedometer and triaxial tests is analyzed in this paper in order to determine the stiffness and strength parameters for the Soft Soil model. It is based on 71 soil samples taken from the construction sites of several major structures across the territory. In the first phase, the soil samples taken were analyzed in a geotechnical laboratory to obtain the physical and mechanical identification parameters specific to each soil type. The results obtained reveal that the analyzed soils are generally compressible clays. In the second phase, the constitutive law of the Soft Soil model was used to characterize these Cameroon soils. Its parameters were obtained after calibration against the laboratory test results obtained in the first phase. The results obtained in this article can be compared with the different models obtained for clay soils around the world. The parameters are of the same order of magnitude as those of other clays modelled around the world.

Interest and Need for Soil Modelling

In the practice of numerical modelling, not just any law of behaviour can be considered an acceptable approximation of any real behaviour, even after calibration of the parameters. The soil has a stress-strain relationship that is nonlinear and irreversible beyond a certain threshold. For numerical modelling to provide a realistic estimate of behaviour, it must use advanced behavioural laws adapted to each type of soil. In the context of this article, we have dealt with several types of compressible soils with different mechanical behaviours. In this paper, an advanced law of behaviour called the Soft Soil model, which describes the behaviour of compressible soils in a realistic way, will be highlighted. At present, several civil engineering and geotechnical works are under construction in Cameroon. The predictive evaluation of the behaviour (settlement, uplift, and stability) of these infrastructures is often carried out by empirical methods or conventional analytical methods. These do not always take into account the realistic behaviour of the material or the overall behaviour of the work, and they impose simplifying assumptions on the behaviour. Given these shortcomings in the study of structures, it is important to determine the parameters of advanced soil models, which are not currently available in Cameroon, in order to effectively predict the behaviour of geotechnical structures while considering the successive phases of construction. This global approach involves defining a specific constitutive law for each soil type and determining its parameters using classical geotechnical laboratory data and modern numerical techniques. The parameters of the advanced Soft Soil model will be determined from 71 soil samples taken from construction sites across Cameroon. These parameters will constitute a national database of exploitable compressible soil parameters that can be used in numerical modelling to predict the overall behaviour of geotechnical structures. The consolidation of compressible soils is a long-standing problem in geotechnics. The first rational approach to this problem, based on the principle of effective stress, was proposed by Terzaghi [1]. Since then, intensive research has been conducted in this area.
A logical extension of Terzaghi's one-dimensional consolidation theory to a 3D situation is due to the complete coupled poro-elastic formulation of Biot [2]. The two theories of Terzaghi and Biot postulate that soil behaviour is linearly elastic. For global problems, most researchers turn to numerical methods. Schiffman and Arya [3], using Terzaghi's one-dimensional (1D) consolidation model, conducted research using the finite difference method and the finite element method. Desai proposed a nonlinear model for the soil and implemented this model in a finite element computation program to deal with the one-dimensional consolidation problem. The Melanie model [4], developed at the LCPC to represent the behaviour of natural clays in finite element calculations, is an anisotropic elastoplastic model with hardening, which allows the resolution of consolidation problems. It is derived from the modified Cam-Clay model. Soft soils, which are normally consolidated, are known for their very high compressibility [5][6][7][8][9]. It is obvious that creep is important for problems that show a significant primary settlement: this is the case for road construction, foundations, dikes on compressible soils, or dams, where strong primary settlements are followed by creep settlements years later [8,[10][11][12]. In other cases, dams or buildings may initially be founded on over-consolidated soils; the primary settlements are then relatively small. In addition to foundation settlement problems, creep plays an important role in slope stability problems. Several natural slopes with a low safety factor show continuous displacements due to creep under constant mechanical or hydraulic conditions [13,14]. In recent years, several research studies have been carried out around the world to determine advanced soil parameters for feeding the advanced soil models implemented in geotechnical software for the calculation of geotechnical works under construction or being monitored. This is the case for the stiffness and strength parameters for the Hardening Soil and Soft Soil models of soft and stiff Bangkok clays investigated by Adachi and Oka and by Suched et al. [13,15]. These authors conducted several series of isotropically consolidated drained and undrained compression tests at the Asian Institute of Technology on soil samples from Bangkok, and then exploited the results of these tests to obtain the advanced parameters of these soils for the advanced calculation of geotechnical works in Bangkok. Nallathamby and Minna [16] modelled soft anisotropic soils and exploited the resulting advanced parameters in the calculation of the Murro experimental embankment in Finland. The results were compared with those measured during the test phase of the work. This Murro test embankment was constructed on a 23 m deep deposit of medium-sensitive clay near the town of Seinäjoki in Western Finland. The embankment has been monitored since it was built in 1993 and has been the subject of several studies. The almost normally consolidated clay is overlain by a 1.6 m thick overconsolidated dry crust, and the underlying thick clay layer is almost normally consolidated and relatively homogeneous. The groundwater table is estimated to be at 0.8 m below ground level. Murro clay is highly strain anisotropic and time dependent [16]. Many other studies have been carried out on clays [17][18][19].
A large number of C_c-w_n correlations (C_c is the compression index, and w_n is the natural water content) have been proposed by researchers for different soft clays around the world, but comparisons of these correlations and the reasons for differences between them are rarely reported. The C_c-w_n relationships of marine soft clays from eight coastal cities in China have been investigated by Gao and Chen [19]. It was found that the north coast clays have a larger slope of the C_c-w_n relationship (about 0.02) than the south coast clays (about 0.008).

Methodology

Generally, the linear elastic, perfectly plastic behaviour law with a Mohr-Coulomb (MC) failure criterion is used in geotechnical engineering calculations. This law defines the behaviour of the soil based on 5 parameters: Young's modulus (E), the angle of friction (φ), the cohesion (c), the dilatancy angle (ψ), and Poisson's ratio (ν). However, several studies [13,[20][21][22][23] have shown that this model does not represent the nonlinearity of the actual behaviour of the soil and imposes the same modulus in loading and unloading. In the case of a structure exposed to cyclic loadings, for example, the foundations of industrial buildings housing vibrating machinery, waves breaking on a dike, or traffic loads, unloading zones play a predominant role in determining its behaviour. This simplification therefore affects the prediction of the real behaviour of the structure in question. The choice of a realistic behaviour model for this study meets two requirements: (i) on the one hand, to better represent the behaviour of compressible soils in Cameroon compared with the Mohr-Coulomb model; (ii) on the other hand, to identify the parameters of the Soft Soil model from the results of triaxial or oedometric tests on the materials selected at several construction sites throughout the national territory. Several studies have shown that the Soft Soil model (SSM), dedicated to the realistic representation of the behaviour of compressible soils, is implemented in several calculation codes [24]. It will be applied here to 71 soil samples taken in the Central, South, and Littoral regions of Cameroon. From the classical geotechnical laboratory results on these soils, the parameters of this advanced model will be determined according to the relationships linking several geotechnical parameters, presented in detail later in this article.

Formulation of the Soft Soil Model (SSM)

In this section, we start from the classical formulation of the Soft Soil Creep model (SSCM) and proceed to the determination of the parameters of the Soft Soil model. The Soft Soil model makes it possible to take into account the work hardening of soft clays but not secondary consolidation; the latter manifests as the evolution of the axial strain in an oedometric test as a function of time after the end of primary consolidation. This deformation evolves according to the logarithm of time (at least for observable time scales).

Generalization of the Differential Law for 3D Creep

The 3D model is a generalization of the 1D model. We adopt the stress invariants p = σ_oct for the pressure and q = 3·τ_oct/√2 for the deviatoric stress.
With σ_oct and τ_oct being the normal octahedral stress and the octahedral shear stress, respectively, these invariants are used to define a new equivalent mean stress called p_eq (equation (1); the key relations of the model are collected in the block at the end of this section). Figure 1 shows that p_eq is constant on the ellipses in the p-q plane. These are in fact the ellipses of the modified Cam-Clay model of Roscoe and Burland [25]. The soil parameter M represents the slope of what is called the critical state line, as shown in Figure 1; it is computed from φ_cv, the angle of critical friction or constant-volume friction angle (equation (2)). Using the above definition of q, the equivalent pressure p_eq is constant over an ellipse in the principal stress space. To extend the 1D theory to the general 3D case, we now focus on the case of normal consolidation encountered in an oedometer. This value of M is a practical value calculated by default. Moreover, the finite element calculation code used for the analysis of geotechnical structures, PLAXIS [24,26], makes it possible to calculate an approximate value of K_0^NC (the coefficient of earth pressure at rest under normal consolidation conditions) from the value of M calculated with equation (2). In general, the value of K_0^NC calculated by the program is greater than that given by Jaky's formula (K_0^NC = 1 − sin φ). Alternatively, one can enter a value of K_0^NC and calculate the value of M using the Brinkgreve formula implemented in the PLAXIS calculation code [26] (equation (3)). The loading surfaces are ellipses with an associated flow rule (the plastic strain increment is normal to the ellipse), while at failure the flow is non-associated (which is why it is necessary to enter the dilatancy angle, possibly zero, which corresponds to plastic flow at constant volume). Under these conditions, we have σ'_2 = σ'_3 = K_0^NC · σ'_1, and from relation (1) we derive relations in which σ' = σ'_1 and p_p^eq is the generalized preconsolidation pressure, this parameter being proportional to its one-dimensional counterpart. For a known value of K_0^NC, p_eq can be computed from σ', and p_p^eq can likewise be calculated from σ_p. Instead of the parameters A, B, and C of the one-dimensional model, we now use the parameters κ* and λ*, defined by the relations given below, where ν_ur is the unloading/reloading Poisson's ratio.

Formulation of Elastic Deformations in 3D

The 1D model can be extended to obtain the 3D model, but until now this has not been done for the elastic deformations. In order to obtain a 3D model for elastic deformations, the elastic modulus E_ur is defined as stress dependent [24].

Parameters of the Soft Soil Model

As soon as the ultimate limit criterion f(σ', c, φ) = 0 is reached, the instantaneous plastic strain rate develops in accordance with the flow rule ε̇^p = λ ∂g/∂σ', with g = g(σ', ψ); the plastic flow parameters of the material are as follows: c' is the effective cohesion, φ' is the Mohr-Coulomb friction angle, and ψ is the dilatancy angle. For fine, cohesive soils, the dilatancy angle is usually small and can therefore often be taken as zero. The Soft Soil model therefore requires the following material constants.

Modified Swelling Index and Modified Compression Index

These parameters can be obtained from an isotropic compression test or an oedometric test. When the logarithm of the stress is plotted against the deformation, the curve can be approximated by two straight lines.
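The numbered equations referenced above did not survive extraction. The LaTeX block below collects the standard forms of these relations for the Soft Soil model, in the cohesionless case, as given in the PLAXIS material models literature [24,26]; it is a reconstruction consistent with the surrounding text, with equation numbering aligned to the references above as far as the text allows, not copied verbatim from the source.

\begin{align}
p^{eq} &= p' + \frac{q^2}{M^2\,p'}
  && \text{equivalent mean stress (modified Cam-Clay ellipse)} \tag{1}\\
M &= \frac{6\sin\varphi_{cv}}{3-\sin\varphi_{cv}}
  && \text{slope of the critical state line} \tag{2}\\
M &\approx 3.0 - 2.8\,K_0^{NC}
  && \text{Brinkgreve approximation used by PLAXIS} \tag{3}\\
\lambda^* &= \frac{\lambda}{1+e} = \frac{C_c}{2.3\,(1+e)}
  && \text{modified compression index} \tag{4}\\
\kappa^* &= \frac{\kappa}{1+e} \approx \frac{2\,C_s}{2.3\,(1+e)}
  && \text{modified swelling index (approximation)} \tag{5}\\
E_{ur} &= \frac{3\,(1-2\nu_{ur})\,p'}{\kappa^*}
  && \text{stress-dependent unloading/reloading modulus} \tag{6}
\end{align}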
The slope of the normal consolidation curve gives the modified compression index λ*, and the slope of the unloading (or swelling) curve can be used to calculate the modified swelling index κ*. It should be noted that there is a difference between the modified indices κ* and λ* and the original parameters of the Cam-Clay model, κ and λ.

Relationship with the Cam-Clay Model Parameters and with Classical Parameters

There is no exact relationship between the isotropic indices κ and κ* and the one-dimensional swelling index C_s, because the ratio of horizontal to vertical stress changes during one-dimensional unloading. As an approximation, it is assumed that the average stress state during unloading is an isotropic stress state, that is to say, the horizontal and vertical stresses are equal. This synthesis of the formulation of the Soft Soil model, an advanced behaviour model implemented in several computation codes, shows that it is a sufficiently simple model for its parameters to be determined from a classical geotechnical study (triaxial and oedometric tests). In this model, there is no fitting parameter without physical significance, as is often found in other models. The determination of its parameters can require optimization techniques. The user must focus on two choices: one is inherent to geotechnics in general, and the other to realistic numerical simulation. The determination of the geotechnical parameters to be entered in a calculation code is not different from the choice of "manual" calculation parameters for a settlement calculation, for example. Some of the parameters are different in their expressions but always remain connected to classical geotechnical constants. The behaviour models developed are distinguished mainly by the type of mechanical problem to be solved, on the soil or on structures in the soil, and by the number and type of parameters that characterize them. The least familiar parameter is presumably the dilatancy angle. In fact, the choice of the behaviour model depends on the problem posed: which behaviour model is to be used for which geotechnical problem? This question is not so simple, because there is no "universal" model capable of reproducing the behaviour of every type of geotechnical work.

Geological Setting of Cameroon

The geological history of Cameroon begins in the Archaean era, between 3.5 and 2.5 billion years (Ga) ago. Its different phases of development are illustrated by geological masses formed during successive orogenic cycles. It is characterized by the formation of cratons and mountain ranges and by subsequent extension phases involving the splitting of the continental crust. From the south to the north, we have (Figure 2): (1) The southern domain, bounded by the Congo Craton towards the south [27,28], whose formations were affected by four stages of ductile deformation, corresponding to alternating phases of E-W to NW-SE contraction (D1, D3) and N-S to NE-SW extension (D2) [27]. (2) The Cameroonian central domain, a broad area that extends between the Sanaga fault to the south and the Tibati-Banyo fault to the north. It consists of Archaean to Palaeoproterozoic high-grade gneisses intruded by widespread Pan-African syntectonic plutonic rocks of high-K calc-alkaline affinities [27]. Major strike-slip faults in this area seem to have guided the emplacement of plutonism, with orthogneissification of variable intensity. This area was subjected to an advanced general metamorphism, with layering composed of gneiss and amphibolite.
(3) The Cameroonian northern domain is characterized by subordinate 830 Ma old metavolcanic rocks of tholeiitic and alkaline affinities associated with metasediments known as the Poli series. This domain is characterized by three stages of deformation: an early phase D1 associated with medium-pressure, granulite-facies metamorphism [27]; a phase D2, dated 600-580 Ma, synchronous with an intense migmatization [27] and a granitization associated with amphibolite-facies metamorphism (600°C, 5-7 kb) and greenschist-facies metamorphism (550°C, 5 kb) [28]. For example, Yaoundé, Cameroon's capital, is made up of typical ferralitic soils at three levels, namely [28]: (i) a loose upper formation dominated by topsoil; (ii) a ferruginous, nodular, armored intermediate layer consisting of iron hydroxide and kaolinite clay accumulations from the hydrolysis of rock minerals; (iii) at the lower horizon, garnet-bearing gneisses with micas or biotite only, in which the structure of the parent rock (gneiss) is preserved at depth and more altered towards the surface. Minerals such as quartz, kaolinite, goethite, and hematite are encountered. All these formations belong to the old metamorphic basement. Depending on the type and scale of each project, cored boreholes and reconnaissance pits on construction sites have revealed the pedological profile of the soils of the central Cameroon region. The different terrain profiles generally encountered reveal not only the possible sequences of soil layers in the city of Yaoundé but also the nature of each layer. Across the city, we thus find reddish lateritic clays; lateritic cuirasses are observed as well. In the Littoral region, the sedimentary rocks form low-lying and gently undulating hills along the western side of the Dibamba River. The strata have experienced tropical weathering, which has resulted in the development of a lateritic residual soil profile. This type of tropical weathering of the rock mass leads to weathering profiles as presented by Fookes [29] and shown in Figure 3. (Figure 3 reproduces the standard weathering grade descriptions, from Grade VI, in which all rock material is converted to soil and the mass structure and material fabric are destroyed, down to fresh rock showing no visible sign of weathering.) The material viewed on site comprises weathering Grade VI material, as seen in the trial pit photograph from the site, with possibly weathering Grade V material encountered towards the base of the deeper rotary boreholes. Below this material, the stratum may then transition more rapidly to weathered parent rock material. This transition was not identified in the deepest exploratory boreholes, which went 30 m below ground level. In the exploratory holes observed, there was no clear evidence of overlying transported material, so the material may be weathered in situ.
The following section presents the nature of the materials investigated in this paper. The samples were collected "undisturbed" using a PVC corer 100 mm in diameter and 300 mm in length (Figure 4). The shear box used was 60 mm in width and 25 mm in height, which is recognizable as the small shear box apparatus of the British Standards Institution [30][31][32]. These investigations were conducted under controlled conditions (Table 1). Figure 5 presents the test pits, the soil samples, the triaxial shear cell, and the oedometric cell used for this paper. Tables 2 and 3 summarize the results of the identification, compressibility, and shear strength tests carried out on soil samples taken from 71 sites of projects under construction in Cameroon. According to the results of oedometric compressibility, the soils tested in the laboratory are generally compressible. The statistical significance of these laboratory data is presented in Tables 2 and 3.

Results and Discussion

In the previous sections, the parameters of identification, compression, and shear resistance were presented for the soils of Cameroon, together with the relations transforming the laboratory parameters into the parameters of the Soft Soil model for their use in a numerical calculation code, following [25], as presented in parts of this article. The statistical significance of these Soft Soil model parameters is presented in Table 4. The results obtained in this paper (clays of the Littoral, central, and southern regions) can be compared to the different models obtained for clay soils around the world. The parameters are of the same order of magnitude as those of other clays modelled around the world (Table 5): Haney clay [33]; Osaka clay [34]; the Cubzac-les-Ponts clayey embankment [35,36]; the site of the Flumet dam in Isère, France [14]; the site of Saint-Laurent-des-Eaux [14]; the Bangkok clays [12,15]; and the Murro test embankment clay in Western Finland [16], to name only those. The soft soils found on the African continent (tropical-zone soils) [37][38][39] have also been considered for this comparison: the settlement of a railway embankment on PVD-improved Karakore soft soil in Ethiopia [39]; the rheology of mechanical properties of soft soil in Nigeria [37]; and various problems on soft clayey soils in South Africa [38]. The results of monitoring a full-scale experimental embankment on the soft Douala clays (Littoral region) of Cameroon have recently been published in this domain in the Journal of Civil Engineering [40]. In the context of projects of great importance whose works are subjected to complex loading (vertical and horizontal components and moments applied to the foundation, dynamic loads, and cyclic loads), analytical calculation is not sufficient to predict the behaviour of such structures. It is therefore necessary to first perform advanced soil modelling based on the results of laboratory tests and then to carry out a numerical modelling of the structure, taking into account the realistic behaviour of the materials, the soil-structure interaction, and the different construction stages. For compressible soils, the Soft Soil model is well suited to this purpose.

Conclusion

In this paper, we determined the parameters of the advanced compressible-soil model on 71 soil samples from various locations in the country. The parameters of the Soft Soil model determined here now serve as a national database for compressible soils.
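As an illustration of how laboratory indices feed the model parameters stored in such a database, the following minimal sketch converts oedometric results into the modified Soft Soil indices. The conversion formulas used here, λ* = Cc/(2.3(1 + e0)) and κ* ≈ 2Cs/(2.3(1 + e0)), are commonly cited approximations and are stated as assumptions, since the exact relations of [25] are not reproduced in this text.

def soft_soil_indices(Cc, Cs, e0):
    """Return (lambda*, kappa*) from the compression index Cc,
    the swelling index Cs and the initial void ratio e0."""
    lam_star = Cc / (2.3 * (1.0 + e0))        # modified compression index
    kap_star = 2.0 * Cs / (2.3 * (1.0 + e0))  # modified swelling index (approximation)
    return lam_star, kap_star

# Plausible values for a compressible clay (illustrative only):
lam_star, kap_star = soft_soil_indices(Cc=0.45, Cs=0.06, e0=1.2)
print(f"lambda* = {lam_star:.4f}, kappa* = {kap_star:.4f}")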
These parameters can be used in projects to numerically predict the behaviour of geotechnical structures under construction or in operation. At present, the most efficient tool for predicting complex phenomena is modelling. It is therefore important to first determine the parameters of each model likely to describe the actual behaviour of the structure before using it in a computer code. The study meets a national, or even international, need: to have a database of exploitable soils for supplying calculation codes when carrying out major projects. More precise methods, such as numerical calculations, should be used when the soil-structure interaction has a dominant influence. Numerical methods have the capacity to take into account macroscopic heterogeneities of the soil (layers of different characteristics, or heterogeneity within a layer); the same is true of the heterogeneity caused by different loading levels at different points of the soil mass, in the case of a soil with nonlinear behaviour (variable rigidity). Numerical methods make it possible to take into account any loading geometry and the construction phasing or the progressive application of loading; they are also well suited to situations where it is necessary to study the interaction between neighbouring structures, that is to say, where one is dealing with one or more structure-soil-structure interaction problems. The use of the Soft Soil model for compressible soils involves longer computation times, since the stiffness matrix of the material is decomposed at each step of the calculation. For the analyzed soils, which are compressible, the Soft Soil model correctly describes their behaviour and makes it possible to optimize the design of the geotechnical structures that are supported by, or support, these compressible soils, thus demonstrating the importance of advanced soil modelling for predicting the behaviour of geotechnical structures. This study is a contribution to the advanced soil modelling of Cameroon; it will be extended to the case of stiff soils with the Hardening Soil model.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors confirm that there are no conflicts of interest associated with this publication.
5,906.6
2020-09-18T00:00:00.000
[ "Engineering", "Geology", "Environmental Science" ]
Towards real-time PGS range monitoring in proton therapy of prostate cancer Proton therapy of prostate cancer (PCPT) was linked with increased levels of gastrointestinal toxicity in its early use compared to intensity-modulated radiation therapy (IMRT). The higher radiation dose to the rectum by proton beams is mainly due to anatomical variations. Here, we demonstrate an approach to monitor rectal radiation exposure in PCPT based on prompt gamma spectroscopy (PGS). Endorectal balloons (ERBs) are used to stabilize prostate movement during radiotherapy. These ERBs are usually filled with water. However, other water solutions containing elements with higher atomic numbers, such as silicon, may enable the use of PGS to monitor the radiation exposure of the rectum. Protons hitting silicon atoms emit prompt gamma rays with a specific energy of 1.78 MeV, which can be used to monitor whether the ERB is being hit. In a binary approach, we search the silicon energy peaks for every irradiated prostate region. We demonstrate this technique for both single-spot irradiation and real treatment plans. Real-time feedback based on the ERB being hit column-wise is feasible and would allow clinicians to decide whether to adapt or continue treatment. This technique may be extended to other cancer types and organs at risk, such as the oesophagus. Range verification is one of the most important problems to be solved in particle therapy 1,2 . Offline positron emission tomography (offline PET) has verified range uncertainties of approximately 6 mm 3 . Offline PET scans are performed after irradiation, and the activated tissue is imaged. However, this technique suffers from low signal and biological washout over time. More recent results with in-beam PET have demonstrated the online capabilities of this technique 4 . Prompt gamma imaging (PGI) has emerged as an alternative that relies on the prompt nature of the gamma radiation emitted during particle therapy. Range verification can be accomplished in real time during treatment, thus providing a means to avoid unwanted irradiation of healthy tissues. Since 2006, several concepts based on imaging and non-imaging systems have been developed [5][6][7][8][9][10][11] . Eventually, two of them-the knife-edge slit camera and prompt gamma spectroscopy-reached the clinical phase 12,13 and are currently being used at proton facilities. Proton therapy for prostate cancer (PCPT) has been a reality since the 1990s 14,15 . Several clinical studies have estimated the toxicity of prostate cancer therapy with photons 16,17 , protons 14,[18][19][20][21][22][23] , and carbon ions 24,25 . At the outset, PCPT was considered to deliver less dose than photon radiation to normal tissues surrounding the prostate, such as the rectum and bladder [26][27][28] . PCPT had, however, a major setback, with two clinical studies reporting higher toxicity than conventional photon therapy 19,21 . Sheets et al. showed that although intensity-modulated radiation therapy (IMRT) delivered three times more radiation to the body, it presented 50% less gastrointestinal morbidity. Proton therapy-treated patients were more likely to receive a diagnosis of gastrointestinal morbidity and undergo gastrointestinal procedures. There were, however, no significant differences in urinary nonincontinence or incontinence diagnoses or procedures, erectile dysfunction, or hip fractures 21 . Kim et al.
also showed that proton therapy had the highest rate of grade 3/4 toxicity among radiotherapy modalities (20.1 per 1000 person-years) 19 . However, the authors pointed out that the sample size for the proton cohort was quite small because the study included patients diagnosed from 1992 to 2005, a period when proton therapy was in its relative infancy and only passively scattered proton therapy (PSPT) was available. In the meantime, intensity-modulated proton therapy (IMPT) was developed both for protons 29,30 and carbon ions 31 . More recent studies have demonstrated more favourable toxicity outcomes with proton therapy 20,22,23 .

Results

We started by irradiating different water solutions and mixtures with single-spot proton beams. Figure 1 shows the detectors, the targets, and the beam nozzle. Figure 1a shows an ERB filled with a water mixture to be irradiated with the lowest energy available (48 MeV). Afterwards, we increased the energy of our proton beam to an energy applicable in PCPT. Figure 1b shows two flasks of water in front of our target. To evaluate the prompt gamma attenuation in the patient, we placed two water flasks on each side of the target, i.e., in the path from the target to the detectors, as shown in Fig. 1c. Figure 1d shows a prostate phantom with a custom-made insert filled with a commercial silicone sealant. Two tungsten collimators were placed in a semi-collimation configuration in front of each detector in the beam direction to prevent scattered particles from the nozzle from hitting the detectors and to collimate the prompt gammas only from the most downstream region. These collimators had a strong impact in reducing the detector count rate, thus allowing higher beam intensities. In Fig. 2a, we show the energy spectra of several water solutions and mixtures irradiated by single-spot proton beams at the lowest energy. The mixture with silicon dioxide (SiO2) exhibits several differences from the other solutions. The solution of heptahydrate magnesium sulphate (MgSO4·7H2O), also known as Epsom salt, responds to higher temperatures with higher solubility. This is not observed in the SiO2 mixture. The addition of sodium hydroxide (NaOH) to the SiO2 mixture creates a solution of sodium metasilicate (Na2SiO3), but the quantity in grams of dissolved solute remains the same as that in the mixture. The limit for SiO2, either mixed or dissolved in 60 mL of water, is 40 g. Above that quantity, the viscosity increases, and the mixture or solution cannot flow inside the small-diameter tube between the syringe and the balloon. A commercial silicone sealant was also irradiated for the sake of comparison with the expected silicon gamma lines. Figure 2b shows the spectra of these targets with two water flasks placed in front of them. Due to the increased lateral spread and range straggling, the 1.78 MeV silicon gamma line is smeared out, and the nearby 1.635 MeV energy line resulting from oxygen irradiation becomes more prominent. The addition of NaOH creates a sodium line at 1.278 MeV, increases the oxygen and sodium lines at 1.635 MeV, and decreases the silicon line at 1.78 MeV. In view of these results, and due to the simplicity of operation and its harmlessness (lack of toxic effects), we decided to continue our studies with a mixture of water and SiO2. Figure 2c shows the spectra obtained with two water flasks placed on each side of the target. The prompt gamma attenuation effect is hardly visible.
All sequential effects were combined, thus mimicking a worst-case scenario of a target mostly made of water. In this case, the prompt gamma water lines compete strongly with the prompt gamma silicon lines. Figure 2d compares the energy spectra from the proton irradiation of a prostate phantom with either a silicone insert or an ERB filled with a water mixture of SiO2. The differences in the prominences of the peaks of interest are negligible. We then aimed to evaluate the cumulative effects of range straggling and prompt gamma attenuation in a prostate phantom with an inserted ERB filled with a mixture of water and SiO2. Therefore, we irradiated the prostate and the ERB with single-spot proton beams at different phantom positions. To reproduce a real treatment scenario within a rotating gantry, the phantom was rotated by 90° in the transaxial direction and irradiated by a horizontal beam. Figure 3a-c shows the phantom at three gantry angles: 0°, 90°, and 270°. Figure 3d-f shows the spectra resulting from the irradiation of the prostate and the ERB with single-spot beams in the three positions. A 1.78 MeV silicon line is present in the ERB irradiation and absent in the prostate irradiation. For the lateral beams, the closer the ERB is to the detector, the better the signal from the 1.78 MeV prompt gammas. Detector 1 collects a higher signal for the 90° angle, while detector 2 collects a higher signal for the 270° angle. To increase the signal at a 0° angle, we used the timing information of the arrival time of the protons provided by the scintillating fibres placed between the nozzle and the target 60 . This trigger was not further used in the treatment plans due to its strong impact on the statistics and due to the intensity constraints (increasing pile-up above 8 × 10^7 p/s). In the last setup, we considered real treatment-like plans. Figure 4 shows the computed tomography (CT) images and the plans of an anterior beam irradiating the prostate either conformally or overlapping with the ERB. In Fig. 4a-c, the sagittal views through the prostate and the ERB clearly show their structure and the spacing between the ERB and the prostate. The CT also shows the seminal vesicles, the bladder, and the small tube inside a larger tube that transports the solution or mixture from the syringe to the ERB. Figure 4d shows a coronal plan where the iso-energy layers (IELs) as well as the spots overlapping the prostate and the ERB are visible. While IEL 17 has all spots overlapping within the ERB, IEL 12 only has six central overlapping spots. Our goal was to determine at which IEL the protons hit the ERB with the overlapping anterior beam. However, since not every spot within each IEL overlapped with the ERB, we sorted the irradiation within each IEL by columns parallel to the ERB and attributed time stamps to each column. Figure 5 shows the prompt gamma spectra from the irradiation of the phantom at IELs 12, 13, and 14. IEL 12 is at the interface between the prostate and the ERB. Columns were detected from the first to the last, starting in beam-eye view (BEV) on the left for detector 1 and on the right for detector 2. While detector 1 detects the columns to the left in BEV with higher sensitivity, detector 2 has a higher count rate for columns to the right in BEV. In the AO plan, we also reordered each IEL of the plan in such a way that they were irradiated in columns parallel to the ERB from left to right in the BEV.
Figure 6a and b shows a photo of the prostate phantom at an angle of 279° and schematics of the irradiation of IEL 12 from column 1 to column 15. The plans with and without overlap with the ERB are shown in Fig. 6c and d. In Fig. 7a, we observe that the columns to the right overlapping with the ERB produce a 1.78 MeV prompt gamma line, while those to the left irradiate the prostate and therefore present no such line. Such tracking is possible with columns comprising less than 10^8 protons. For IEL 12, the protons start hitting the ERB at column 6, with 8.4 × 10^7 particles. In Fig. 7b, we confirm that the real plan without overlap with the ERB does not yield a 1.78 MeV energy line for the last columns to the right. For the sake of irradiation speed, the first depicted column aggregates several columns to the left in the prostate region. An independent measurement undertaken after one month with the same gantry angle demonstrates the existence of 1.78 MeV energy lines for the columns overlapping with the ERB (Fig. 7c). An additional measurement at a symmetric position of 81° shows 1.78 MeV energy lines for the columns to the left, closer to detector 1 (Fig. 7d). A peak analysis within the region of interest for the spectra presented in Fig. 7 is depicted in Fig. 8. The prominence and the width at half prominence are shown for the peaks of interest. The top four peaks that result from the irradiation of the ERB are indicative of the prompt gamma lines associated with the reaction between the protons and the silicon atoms.

Discussion

Prompt gamma spectroscopy (PGS) is currently one of the most promising techniques for particle range monitoring and for measurements of the elemental composition of irradiated targets in particle therapy 6,13,55,61 . This technique facilitates absolute range measurements with millimetre precision due to accurate knowledge of the nuclear reaction cross-sections between the irradiated particles and the types of atoms in the patient. Two PGI modalities, PGS and the knife-edge slit camera, have now reached the level of clinical prototypes 12,13 . The combination of in vivo range monitoring and adaptation methods has been proposed for the treatment of prostate cancer with either anterior beams 35 or anterior oblique (AO) beams 33 . An in vivo range verification system has already been commissioned 36 . This system is composed of a 4 by 3 array of silicon diodes attached by a self-adhesive surface to an ERB and presents a water-equivalent path length (WEPL) measurement accuracy on the order of 1 mm. In this paper, we propose a wireless solution that uses prompt gamma rays to monitor the interaction of protons within an ERB filled with a silicon dioxide water mixture and inserted in a prostate phantom. This concept aims to monitor the proton range in PCPT in real time. The irradiation of atomic nuclei within the human body by protons emits prompt gamma rays with characteristic energy lines 6,56 . The irradiation of carbon and oxygen atoms is followed by the emission of prompt gamma radiation with low and high energies (0.511 MeV, 0.718 MeV, 1.022 MeV, 1.635 MeV, 2.31 MeV, 2.8 MeV, 4.4 MeV, 5.2 MeV, and 6.1 MeV) 6,54 . Conversely, during the irradiation of metals, prompt gamma rays are emitted with lower energies (below 3 MeV) 54,55 .
This radiation exits the patient under proton bombardment and may be detected by scintillating crystals, e.g., CeBr3. The signals are digitally converted and processed to extract energy and time information. Metals usually not present within the human body are good candidates for ranging probes. Although not a metal, silicon dioxide has been shown to be a good choice due to the unique signature provided by the emission of a prompt gamma energy line at 1.78 MeV. This line is distinguishable from the remaining spectrum and can therefore provide binary information about the elemental composition of the material being hit. However, even with good dose confinement to the target, the patient is still exposed to a dose in the organ at risk (OAR) and very likely to prompt gammas emitted from the ERB. Therefore, a possible solution would be to set a threshold on the 1.78 MeV prompt gammas detected at a certain IEL and neighbouring IELs. This binary output might trigger a decision on whether to continue or stop/adapt the treatment, since an organ at risk may be endangered. Proton beam delivery with spot- or raster-pencil-beam scanning (PBS) is particularly suitable for such an approach. A synchronization between beam delivery and prompt gamma detection may allow real-time monitoring of the voxels being hit and simultaneous comparison to the prediction. A standard 2 Gy prostate treatment provides sufficient statistics for such monitoring. Due to the round shape of the rectum, an anterior beam requires column-wise delivery parallel to the rectum, so that the IEL column in which the nuclear reactions with silicon take place can be inferred. The range monitoring also requires detectors close to the irradiated column. Therefore, the right columns in the beam-eye view require detectors on the right side, and the left columns are better detected by detectors on the left side. The AO beams present an even more preferable solution, as the geometry allows the detectors to be placed closer to the ranging probe. All columns within IELs overlapping with the range probe are prone to be detected with higher sensitivity. In the case of a range probe located in the rectum or the oesophagus, the AO beams are especially suitable, as the detector may be positioned at right angles to the patient and close to the probe. Range monitoring by means of PGS is feasible in PCPT. Once the proton range is under control, one may use fields other than the commonly used bilateral opposing fields that are more robust to range uncertainties. The two AO beams may assume variable angles due to the flexibility provided by the method presented in this work 65 . The data acquisition is based on a module of a FlashCam FADC system, originally designed for the Cherenkov Telescope Array (CTA) 66 .

Peak analysis. The presence or absence of the silicon line could not be visually verified. Therefore, a simple method was developed to identify the presence of the 1.635 MeV and 1.78 MeV peaks within a region of interest. We subtracted the background from the peaks by fitting a straight line through their high- and low-energy values. The MATLAB function findpeaks was adapted to identify the peaks within a certain energy interval and to meet certain criteria. The parameters, such as the minimum peak height or prominence, the minimum peak width at half prominence, and the maximum and minimum distances between energy peaks, were adjusted after the spectra were properly calibrated.
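As a minimal sketch of the peak search just described, transposed from MATLAB's findpeaks to Python's scipy.signal.find_peaks (the spectrum arrays, the region of interest, and the numeric thresholds below are illustrative assumptions, not the values used in the experiment):

import numpy as np
from scipy.signal import find_peaks

def silicon_peak_present(energy_keV, counts, roi=(1550.0, 1900.0),
                         min_prominence=50.0, min_width_bins=3):
    """Return True if a peak near the 1.78 MeV silicon line lies in the ROI."""
    mask = (energy_keV >= roi[0]) & (energy_keV <= roi[1])
    e, c = energy_keV[mask], counts[mask]
    # Background subtraction: straight line through the ROI end points.
    baseline = np.interp(e, [e[0], e[-1]], [c[0], c[-1]])
    net = c - baseline
    peaks, _ = find_peaks(net, prominence=min_prominence, width=min_width_bins)
    # Keep only peaks within +-20 keV of the 1.78 MeV silicon line.
    return any(abs(e[p] - 1780.0) < 20.0 for p in peaks)

# Synthetic demo: flat background plus a Gaussian peak at 1.78 MeV.
e = np.linspace(1500.0, 2000.0, 501)
c = 200.0 + 400.0 * np.exp(-0.5 * ((e - 1780.0) / 8.0) ** 2)
print(silicon_peak_present(e, c))    # expected: True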
Other methods, such as that presented by Dal Bello et al. 55 , could also have been used.

Figure 8 caption: (a) The peaks correspond to columns 6 to 9 of IEL 12 (Fig. 7a). (b) The four top spectra presenting the peaks of interest correspond to columns 8 to 11 of IEL 13 (Fig. 7c). (c) The gantry angle of 81° also presents the peaks of interest for columns 7 to 9 of IEL 12 (Fig. 7d). The vertical scale has been adapted for visualization purposes.
4,176.2
2021-07-28T00:00:00.000
[ "Medicine", "Physics" ]
Construction of Cyclically Permutable Codes From Cyclic Codes

Cyclically permutable codes (CPCs) are important in communication networks, e.g., multiple-access collision channels without feedback and frequency-hopping spread-spectrum communication channels. A CPC is a block code of length n such that each codeword has full cyclic order n and all codewords are cyclically distinct. This study investigates the characteristics of finite fields to develop an efficient algorithm to find a CPC from a p-ary cyclic code, where p is a prime number. In this paper, the Galois field Fourier transform technique is used to generate a CPC of either primitive or non-primitive length.

Introduction

For the past few years, cyclically permutable codes and their applications in communication networks, e.g., the multiple-access collision channel without feedback (1), frequency-hopping spread-spectrum communication channels (2), (3), and digital watermarking (4), (5), have become increasingly important. A cyclic code (6) is defined as a linear block code such that any cyclic shift of every codeword yields another codeword. Gilbert (7) defined a cyclically permutable code (CPC) as a block code of length n such that each codeword has cyclic order n and all cyclic shifts of the codewords are distinct, i.e., no codeword in a CPC can be obtained by any cyclic shift of another codeword. Maracle and Wolverton (8) proposed an algorithm for constructing cyclically inequivalent subsets. However, existing p-ary cyclic codes usually do not enable this efficient construction.

A cyclically permutable code of length n can be obtained directly from a cyclic code by partitioning the cyclic code into cyclically equivalent subsets, each consisting of all cyclic shifts of a codeword, and then choosing any one codeword from those subsets of size n. However, the CPC must be constructed efficiently from the cyclic code. N. Q. A, L. Gyorfi, and J. L. Massey (9) proposed an encoding procedure for obtaining cyclically equivalent subsets and formed a CPC from a maximum-distance-separable (MDS) code, such as a Reed-Solomon (RS) code of length p − 1 or a generalized Berlekamp-Justesen (BJ) code of length p + 1, both with dimension k and over F_p. More precisely, they constructed a CPC with p^{k−1} codewords from an RS code or a CPC with p^{k−2} codewords from a BJ code.

The design of a difference family and several constructions of constant-weight CPCs are presented in (10), where the authors proposed combinatorial constructions of CPCs that have other coding applications. In (11), the authors proposed the use of an algebraic property of a binary cyclic code, namely the generator polynomial in the time domain, for an efficient and systematic construction of a CPC from this binary cyclic code.
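To make the partition-and-select construction concrete, here is a minimal sketch in Python: codewords are grouped into cyclic-equivalence classes, and one representative is kept from each class of full cyclic order n. The toy code below (ternary words of length 3 whose digit sum is 0 mod 3, a linear cyclic code over GF(3)) is an illustrative assumption, not one of the codes from (9).

from itertools import product

def cyclic_shifts(word):
    """Return the set of all cyclic shifts of a tuple-valued codeword."""
    n = len(word)
    return {word[i:] + word[:i] for i in range(n)}

def cpc_from_cyclic_code(codewords):
    """Partition a cyclic code into cyclic-equivalence classes and keep
    one representative per class of full cyclic order n (a CPC)."""
    seen, cpc = set(), []
    n = len(next(iter(codewords)))
    for w in codewords:
        if w in seen:
            continue
        orbit = cyclic_shifts(w)
        seen |= orbit
        if len(orbit) == n:      # full cyclic order n => usable in the CPC
            cpc.append(w)
    return cpc

code = [w for w in product(range(3), repeat=3) if sum(w) % 3 == 0]
print(cpc_from_cyclic_code(code))   # [(0, 1, 2), (0, 2, 1)]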
In this paper, we use the Galois field Fourier transform method to form a CPC from a p-ary cyclic code of length n, where n is a divisor of p^m − 1. Let α be an element with multiplicative order n. This study extends the results of (9) with two advantages. First, for a cyclic code of non-primitive length n = (p^m − 1)/s, s > 1, and dimension k, we can construct a CPC with approximately s·p^{k−m} codewords; the CPC constructed here has s times more codewords than the CPC constructed in (9). Second, letting α^i, i > 1, be a nonzero of an RS code of primitive length p − 1, and assuming that i and p − 1 are relatively prime, we can then construct a CPC with more than p^{k−1} codewords. The remainder of this paper is organized as follows. In Section II, we describe the Galois field Fourier transform for a p-ary cyclic code. Section III proposes an efficient construction of a CPC from cyclic codes and provides some CPC examples with constructive discussions. Finally, Section IV presents the conclusion.

Galois Field Fourier Transform Methods

For signal processing applications, there are several discrete Fourier transforms applied to complex fields (6). Fourier transforms also exist in the Galois field GF(q), which is important in the study and processing of cyclic codes. As opposed to (11), which used the time domain to find a CPC, this study proposes the use of the frequency domain as an efficient method to find many CPCs from cyclic codes. Cyclic codes are defined as codes whose codewords have certain specified spectral components equal to zero. The most important cyclic codes studied in this paper are the Reed-Solomon (RS) codes and the Bose-Chaudhuri-Hocquenghem (BCH) codes, which are used to find CPCs. In this section, we describe the construction of the Galois field Fourier transform (GFFT) used to define cyclic codes, and we use the GFFT of cyclic codes to find CPCs. The Galois field Fourier transform is a linear operator described by a matrix multiplication. Let v = (v_0, v_1, ..., v_{n−1}) be a vector of length n over F_{q^m}, with n | q^m − 1 for some positive integer m, and let α ∈ GF(q^m) have order n. The GFFT of v is the vector V = (V_0, V_1, ..., V_{n−1}) defined as

V_j = \sum_{i=0}^{n-1} v_i \alpha^{ij},  j = 0, 1, ..., n − 1.

We use v ↔ V to denote the Fourier transform relationship between v and V.
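A minimal sketch of the transform pair over a prime field (anticipating the inverse transform defined next): for GF(p) with p = 5, n = 4, and α = 2 (an element of multiplicative order 4, with 4 | p − 1), the GFFT and its inverse can be computed directly from the definitions. These numeric values are chosen only for illustration.

p, n, alpha = 5, 4, 2            # ord_5(2) = 4, and 4 divides 5 - 1

def gfft(v):
    """Forward GFFT: V_j = sum_i v_i * alpha^(i*j) mod p."""
    return [sum(v[i] * pow(alpha, i * j, p) for i in range(n)) % p
            for j in range(n)]

def igfft(V):
    """Inverse GFFT: v_i = n^-1 * sum_j V_j * alpha^(-i*j) mod p."""
    n_inv = pow(n, -1, p)        # multiplicative inverse of n modulo p
    return [(n_inv * sum(V[j] * pow(alpha, -i * j, p) for j in range(n))) % p
            for i in range(n)]

v = [1, 2, 0, 3]
V = gfft(v)                      # [1, 4, 1, 3]
assert igfft(V) == v             # the round trip recovers the time-domain vector
print(V)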
Similarly, the inverse GFFT of the vector V = (V_0, V_1, ..., V_{n−1}) can be shown to be

v_i = n^{-1} \sum_{j=0}^{n-1} V_j \alpha^{-ij},  i = 0, 1, ..., n − 1,

where n^{−1} is the multiplicative inverse of n in the field. The spectrum of the polynomial v(x) = v_0 + v_1 x + ... + v_{n−1} x^{n−1} is the GFFT of v = (v_0, v_1, ..., v_{n−1}); the vector v is in the time domain, and its corresponding transform V is in the frequency domain. Given a polynomial v(x), we can show that V_j = v(\alpha^j); that is, the j-th component of the GFFT of v is obtained by evaluating v(x) at x = \alpha^j. Similarly, the i-th time-domain component can be written as v_i = n^{-1} V(\alpha^{-i}). We can then use the zeros and the inverses of the nonzeros of a cyclic code to form the parity-check matrix H and the generator matrix G.

Construction of Cyclically Permutable Codes

CPCs are based on the characteristics of cyclic codes: the cyclically shifted codewords occupy the same subspace, and each CPC codeword belongs to one such subspace. Gilbert (7) defined a CPC as a binary block code whose codewords are cyclically distinct and have full order. This study proposes efficient methods that can be used to find CPCs, constructed from p-ary linear cyclic (n, k, d) codes, where n, k, and d are the block length, dimension, and minimum Hamming distance, respectively, with code digits in GF(p), p being a prime number. Moreover, this paper presents cyclic codes that can be used to find more CPCs compared with (6), and n can have both primitive and non-primitive lengths.

Conclusion and Discussion

Cyclic codes are block codes in which a cyclic shift of a codeword generates another codeword belonging to the same subspace. With cyclically permutable codes (CPCs), the codewords are cyclically distinct and have full cyclic order. Although it is important to determine CPCs effectively from cyclic codes, no general approach had thus far been proposed. In this paper, we studied the characteristics of finite fields and cyclic codes and proposed an efficient CPC construction procedure. The construction method proposed here is more efficient than the RS-based constructions proposed in (5) and (6). In the first case, when gcd(n, i) = 1, we can obtain more CPCs. Second, in (6) only the case where the number of CPC codewords comprises s = 1 multiples is covered, whereas we can obtain more than s ≥ 1 multiples of CPCs. Moreover, as opposed to (5), which used the time domain, we proposed the frequency domain as an efficient method to find CPCs from cyclic codes. We have shown the construction of CPCs based on the binary mapping of some p-ary linear cyclic codes, and we have noted that the codes can have primitive or non-primitive length. For instance, let k = 1 + r·m for a binary (n, k, d) BCH code, so that there are 2^k = 2·2^{m·r} codewords; substituting a nonzero inverse into (3) to generate the matrix, we find that \alpha^{-1} is primitive and has primitive length.
1,817.2
2015-09-03T00:00:00.000
[ "Computer Science", "Mathematics" ]
Self-Organizing Maps and Principal Component Analysis to Improve Classification Accuracy

The aim of this study is to improve the Kohonen Self-Organizing Map (SOM) using Principal Component Analysis (PCA). SOM is an algorithm commonly used to visualize and classify datasets, due to its ability to project large data into a smaller dimension. However, its performance decreases when the size of the problem becomes too big. Therefore, reducing the size of the data by removing irrelevant or redundant variables and selecting only the most significant ones according to certain criteria has become a requirement before any classification; this reduction should give the best performance according to a certain objective function. Many researchers have tried to solve this problem. This study presents a new approach to improve SOM based on PCA. The experimental analysis of real data from the UCI Machine Learning Repository shows an improvement of the proposed SOM compared to a traditional approach: more than 2% improvement in classification accuracy is observed.

INTRODUCTION

In recent years, data have expanded exponentially, and so have their characteristics; consequently, reducing the size of the data by removing irrelevant or redundant variables and selecting only the most significant ones according to some criterion has become a requirement before any classification, and this reduction should give the best performance according to some objective function (Devaraj et al., 2002; Dudoit et al., 2002; Narayanan et al., 2004). In general, the performance of a classifier decreases when the dimensionality of the problem becomes too large. Several approaches are used in classification (to name a few: the Hopfield network, K-means, and Support Vector Machines); some of them are inspired by biological neural networks. Among these, Kohonen Self-Organizing Maps (SOM) are popular and widely used for classification. SOM is one type of neural network commonly used for visualizing and classifying multidimensional data. It is applied in various areas: medicine, finance, ecology, engineering, law enforcement, and other fields (Ettaouil et al., 2012, 2013; Kohonen, 1998; Pavel and Olga, 2011). However, certain topological constraints of the SOM are fixed before the training phase, and the dimension of the neurons has a great effect on the classification performance, which we discuss in this study. The interesting question is which features should be used: given a set of features, how do we select an optimal subset such that the execution time for classifying the data decreases and the accuracy increases (Arauzo-Azofra et al., 2011)? One approach to solve this problem is feature selection, which consists of choosing a subset of input variables and deleting redundant or irrelevant entities from the original dataset. It is divided into three categories: filters, wrappers, and embedded or hybrid selectors (Blum and Langley, 1997; Ding and Peng, 2005). The filters extract features from the data without any learning involved, by ranking all features and choosing the top ones (Guyon and Elisseeff, 2003; Ruiz et al., 2012). Several widely used filters exist in the literature, such as Information Gain (IG) (Wang et al., 2006), Minimum Redundancy Maximum Relevance (mRMR) (Ding and Peng, 2005), and ReliefF (Kira and Rendell, 1992).
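For illustration, a filter reduces to scoring and ranking features; the sketch below uses feature variance as a stand-in score (IG, mRMR, and ReliefF would each supply their own scoring function, which is not implemented here).

import numpy as np

def filter_select(X, k):
    """Rank features by a score and keep the top k (filter approach).
    Variance is used here as an illustrative score only."""
    scores = X.var(axis=0)
    top = np.argsort(scores)[::-1][:k]
    return X[:, top], top

X = np.random.default_rng(3).random((100, 10))
X_red, kept = filter_select(X, k=4)
print(kept)          # indices of the 4 highest-scoring features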
The wrappers use a classification algorithm to evaluate which features are useful; that is, the features are selected taking the classification algorithm into account (Gheyas and Smith, 2010; Kohavi and John, 1997). The third family of feature selection approaches is embedded methods, which take advantage of both models by using their different evaluation criteria in different search stages (Guyon and Elisseeff, 2003; Maldonado et al., 2011; Mundra and Rajapakse, 2010). The second approach, called feature extraction, replaces the set of n features by a set of m features, each one a combination of the original features. A well-known dimensionality reduction technique is Principal Component Analysis (Abdi and Williams, 2010). PCA tries to find a linear subspace of lower dimensionality such that the largest variance of the original data is kept. Note, however, that the largest variance of the data does not necessarily represent the most discriminative information (Jolliffe, 1972). This research opts for the classification of real-world data from the UCI Machine Learning Repository using SOM and PCA. The accuracy rate is used to evaluate the algorithm. The aim of our study is to reduce the number of features and demonstrate the importance of feature selection for improving classification. The experimental analysis shows the speed-up of the proposed SOM training process in comparison to a classical approach.

PROPOSED MODEL

The proposed SOM-PCA is divided into two main steps. In the first, the network is trained by the classical SOM. The neurons resulting from the training phase are used as input for PCA, to transform them into a new set of vectors of lower dimension. So, the dataset is reduced to a smaller number of dimensions with low information loss. Figure 1 shows a flowchart of this model.

Self-organizing maps: The SOM often consists of a regular grid of map units. Each unit j is represented by a weight vector w_j ∈ R^d, where d is the input vector dimension. The units are connected to adjacent ones by a neighbourhood relation. The SOM is trained iteratively. At each training step t, a sample vector x(t) is randomly chosen from the input dataset, and a metric distance is computed for all weight vectors to find the reference vector that satisfies a minimum-distance or maximum-similarity criterion, following Eq. (1). The neuron with the weight vector most similar to the input pattern is called the Best Matching Unit (BMU):

c(t) = arg min_{1 ≤ j ≤ N} ||x(t) − w_j(t)||,  (1)

where N is the number of neurons in the map at instant t. The weights of the BMU and its neighbours are then adjusted towards the input pattern, following Eq. (2):

w_j(t + 1) = w_j(t) + h_{cj}(t) [x(t) − w_j(t)],  (2)

where one of the main parameters influencing the training process is the neighbourhood function h_{cj}(t) between the BMU c and unit j.

Feature selection using PCA: Principal Component Analysis (PCA) is a powerful statistical tool for reducing the dimensionality of multivariate datasets in many areas, such as image analysis, data compression, time series prediction, and the analysis of biological data, by finding a new set of variables (Abdi and Williams, 2010). The new set of variables, called Principal Components (PCs), has a dimension smaller than that of the original set and is ordered by the fraction of the total information each component retains. These PCs are chosen so that the first principal component has the greatest possible variance; the second component is computed under the constraint of being orthogonal to the first component and having the greatest possible inertia, and so on.
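Before detailing the PCA step, the SOM training loop of Eqs. (1) and (2) above can be sketched as follows. The Gaussian neighbourhood function and the linear decay schedules are common choices assumed here, and the data are random placeholders.

import numpy as np

rng = np.random.default_rng(0)
X = rng.random((500, 9))                 # toy dataset: 500 samples, 9 features
grid = np.array([(i, j) for i in range(5) for j in range(5)])  # 5x5 map
W = rng.random((25, 9))                  # one weight vector per map unit

n_steps, lr0, sigma0 = 2000, 0.5, 2.0
for t in range(n_steps):
    x = X[rng.integers(len(X))]          # random training sample
    bmu = np.argmin(np.linalg.norm(W - x, axis=1))   # Eq. (1): BMU search
    frac = t / n_steps
    lr = lr0 * (1.0 - frac)                          # decaying learning rate
    sigma = sigma0 * (1.0 - frac) + 0.5              # shrinking neighbourhood
    d2 = np.sum((grid - grid[bmu]) ** 2, axis=1)     # map distances to the BMU
    h = lr * np.exp(-d2 / (2.0 * sigma ** 2))        # Gaussian neighbourhood
    W += h[:, None] * (x - W)                        # Eq. (2): weight update

The resulting weight matrix W (one row per neuron) is what the PCA step described next operates on.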
In our study, we consider the use of PCA for extracting relevant features from the neuron vectors w_j, where w_j is the j-th weight vector among the n neurons resulting from the SOM training process, each having p features (dimensions). Therefore, we have a matrix W of size n × p. These vectors are now subjected to principal component analysis, to transform them into a new set of vectors with m derived dimensions (m < p); in this case, their information content is ranked and stored in the first m dimensions. So, the dataset is reduced to a smaller number of dimensions with low information loss. The transformation is based on the matrix computation

R = W_c E,

under the constraints that E^T C E = Λ is a diagonal matrix and E^T E = I is an identity matrix. Matrix R has the same dimensions as W_c and is related to it by the linear transformation E; R has the property that most of the information content is stored in its first dimensions, and E should be chosen so that R captures the largest variance of the input data. There are several ways of obtaining the solution to this problem. In this study, we construct E using the covariance method. Before calculating the covariance matrix, we need to centre the data in matrix W as follows:

W_c = W − 1_n u,

where 1_n is an n × 1 column vector of ones, and u is a vector of p dimensions that contains the empirical mean along each column j = 1, ..., p of W, defined as

u_j = (1/n) \sum_{i=1}^{n} w_{ij}.

The covariance matrix is now defined from the product of W_c with itself:

C = (1/(n − 1)) W_c^T W_c.

The eigenvalues of C for the given data should then be calculated. The m eigenvectors corresponding to the largest eigenvalues of C define a linear transformation from the p-dimensional space to an m-dimensional space in which the features are uncorrelated. An eigenvalue λ and eigenvector e of a matrix C are a scalar and a nonzero vector such that

C e = λ e.

Let λ_1 ≥ λ_2 ≥ ... ≥ λ_p be the set of eigenvalues of C, with e_1, e_2, ..., e_p their corresponding eigenvectors, called the principal axes. The problem in using PCA for dimensionality reduction is to define the number of principal components needed to get a good representation of the data. Different methods exist for predicting this value (Abdi and Williams, 2010; Jolliffe, 1972; King and Jackson, 1999), including Kaiser's stopping rule (Kaiser, 1960), which retains and interprets any component whose eigenvalue is greater than 1.00; the scree test (Cattell, 1966), which traces the eigenvalues in descending order of magnitude against their number and determines where they stabilize (D'Agostino and Russell, 2005); and the percentage of variance explained (Jolliffe, 1972; Shaharudin and Ahmad, 2017), which retains components that account for at least a given share of the total variance. In this study, the cumulative percentage of variance explained was used, following the equation

r = \sum_{i=1}^{m} λ_i / \sum_{i=1}^{p} λ_i.

The chosen subset of components represents a good estimate of the p-dimensional space if the ratio r is sufficiently large, usually at least 70%. This method is inexpensive in computation even when applied directly to the full data; moreover, if PCA is applied to the neurons, the computations are reduced enormously (Fig. 2).

DATASETS DESCRIPTION

The performance of the proposed SOM-PCA method was evaluated on a variety of real classification problems. The specification of these problems is listed in Table 1. All datasets are available from the UCI Machine Learning Repository.
Table 1 summarizes the number of features, instances, and classes for each dataset used in this study.

Wisconsin breast cancer: The dataset was collected by Dr. William H. Wolberg (1989-1991) at the University of Wisconsin-Madison Hospitals. It contains 699 instances, of which 458 (65.5%) are benign and 241 (34.5%) are malignant, characterized by nine features, which are used to predict benign or malignant disease. The data contain 16 instances with a single missing value.

Heart-Statlog: The dataset is based on data from the Cleveland Clinic Foundation and contains 270 instances belonging to two classes: the presence or absence of heart disease. It is described by 13 features.

Cardiotocography Data Set: The dataset consists of measurements of fetal heart rate (FHR) and uterine contraction (UC) features on cardiotocograms classified by expert obstetricians. 2126 fetal cardiotocograms (CTGs) were automatically processed and the respective diagnostic features measured. The CTGs were also classified by three expert obstetricians, and a consensus classification label was assigned to each of them. Classification was both with respect to a morphologic pattern (A, B, C, ...) and to a fetal state (N, S, P). Therefore, the dataset can be used either for 10-class or 3-class experiments; it is available in the UCI Machine Learning Repository.

RESULTS AND DISCUSSION

In order to show the efficiency of the proposed method, SOM-PCA was tested on a variety of real benchmark classification problems downloaded from the UCI Machine Learning Repository (a short description of each dataset is given in Table 1); it is evaluated in terms of accuracy and compared to the classical SOM. In our topology, the hidden layer consists of 25 neurons (rectangular 5×5 topology). The output layer consists of one neuron, which can be 0 or 1. The general architecture of the proposed network is shown in Fig. 1. A summary of the parameters used is given in Table 2. First, all datasets were prepared for classification: missing values were replaced by the median value (Acuña and Rodriguez, 2004), the data were normalized using min-max normalization (Sola and Sevilla, 1997; Jain and Bhandare, 2011), the datasets were split into two parts (70% for the training process and 30% for the testing process), and all weights were initialized to random numbers. Then the training process was run. When the training process was complete for the training data, the final weights of the network were saved, ready for the feature extraction procedure using the PCA algorithm, after which the test dataset was applied. To evaluate SOM-PCA, we used the classification accuracy, defined as

Accuracy = (TP + TN) / (TP + TN + FP + FN),

where TP (true positives) is the number of correctly classified positive cases, TN (true negatives) the number of correctly classified negative cases, FP (false positives) the number of negative cases incorrectly classified as positive, and FN (false negatives) the number of positive cases incorrectly classified as negative.

Table 3 shows the best results obtained for the accuracy of the classifier using PCA for feature reduction. These results are extracted from Figs. 3 to 5 on a percentage basis. In these figures, the horizontal axis represents the number of PCs, and the vertical axis represents the accuracy of classification (the grey curve) and the cumulative percentage of variance explained (the black curve), on a percentage basis.
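The two curves in these figures can be reproduced by sweeping the number of retained components. The sketch below uses random data and a nearest-centroid classifier as illustrative stand-ins for the actual SOM pipeline; both are assumptions, not the experimental setup.

import numpy as np

rng = np.random.default_rng(2)
X = rng.random((300, 9))                    # toy 2-class data, p = 9 features
y = rng.integers(0, 2, 300)

Xc = X - X.mean(axis=0)                     # centring, as in the PCA section
C = (Xc.T @ Xc) / (len(X) - 1)              # covariance matrix
vals, vecs = np.linalg.eigh(C)              # eigh: C is symmetric
order = np.argsort(vals)[::-1]
vals, vecs = vals[order], vecs[:, order]
cum = np.cumsum(vals) / vals.sum()          # black curve: cumulative variance

for m in range(1, X.shape[1] + 1):
    R = Xc @ vecs[:, :m]                    # data projected on the first m PCs
    mu0 = R[y == 0].mean(axis=0)            # nearest-centroid placeholder
    mu1 = R[y == 1].mean(axis=0)
    pred = (np.linalg.norm(R - mu1, axis=1)
            < np.linalg.norm(R - mu0, axis=1)).astype(int)
    acc = (pred == y).mean()                # grey curve: classification accuracy
    print(m, f"cum_var={cum[m - 1]:.2f}", f"acc={acc:.2f}")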
These figures demonstrate that, using the proposed method, the accuracy is almost unchanged and even increased; there is a clear, if slight, improvement in the classification rate. The maximum value is obtained when the cumulative percentage is between 75% and 95%, after which the accuracy begins to decrease. In other words, when the number of contributing variables increases, the classification rate decreases; therefore, we can keep only the variables whose cumulative percentage is less than 95%, as the remaining features have no effect on the classification rate. The rest of this section details the results for each dataset.

Breast cancer dataset: Figure 3 shows the cumulative sum of explained variance over different feature selections for the breast cancer dataset (black curve) and the accuracy obtained (grey curve). The black curve shows that most of the variance (79%) can be explained by the first two principal components. The third, fourth, and fifth principal components still bear some information (16%), while the remaining principal components can safely be dropped without losing too much information. Together, the first five principal components contain 95% of the information. Looking now at the grey curve, we can see that the accuracy is around 97% when using 5 features. On classifying the dataset with the original features, a classification accuracy of 95.85% is obtained. On applying the proposed method, the accuracy increases to 97.14%. The highest accuracy is reported for this dataset when the proposed SOM-PCA approach is employed with 5 components.

Heart-Statlog: Figure 4 shows the cumulative variance explained over different feature selections for Heart-Statlog; most of the variance can be explained by the first eight principal components, which contain 91% of the information. In contrast, the best accuracy, 81.48%, is obtained with the first three components. Compared to the 79% accuracy obtained by classifying the dataset with the original features, the accuracy with SOM-PCA increases slightly to 81.48%. The highest accuracy is reported for this dataset when the proposed SOM-PCA approach is employed with three components.

Cardiotocography dataset: From Fig. 5, the first five components account for 77% of the variance. The remaining components contribute gradually decreasing variance, and we assume this smaller variation is mostly unimportant. The accuracy is around 89% when using 5 features and keeps almost the same value along the remaining components. The accuracy obtained using all the original features is 79.93%, so, applying the proposed method, the accuracy increases significantly. The highest accuracy is reported for this dataset when the proposed SOM-PCA approach is employed with 5 components.

CONCLUSION

This study presents the results of the direct classification of a variety of datasets using the self-organizing map algorithm and proposes a novel approach based on self-organizing maps and principal component analysis to address the classification problem. The main innovation is to reduce the dimension of the neurons obtained after SOM training; the reduced dataset represents the map with high accuracy. From the numerical results, the improved method gives better accuracy and a lower training time, by reducing the dimension of the map and thus decreasing the memory size needed to store it.
The presented method considers datasets of low dimension and can be extended to treat data of high dimension. An improvement of up to 2% is obtained using SOM-PCA compared to the classical SOM. It can be concluded that this method can be a solution for problems where very few training samples exist and feature reduction is needed before applying unsupervised classifiers.
3,865.4
2018-05-15T00:00:00.000
[ "Computer Science", "Mathematics" ]
Change in Electric Contact Resistance of Low-Voltage Relays Affected by Fault Current

Contact resistance is an important maintenance parameter for electromagnetic switches, including low-voltage relays. The flow of significant current through electric contacts may influence the contact surface and thus the value of the electric contact resistance (ECR). The change in ECR is influenced not only by the value of the current but also by the current phase. Therefore, the impact of the switching short-circuit current's phase on the ECR was analyzed in this paper. Significant changes in the resistance after each switching cycle were observed. The ECR decreased significantly after each make operation, and a correlation with current amplitude, total let-through energy, and short-circuit time was not observed.

Introduction

Low-voltage relays are commonly used to connect circuits with moderate switching currents. They have found application mainly as executive elements in building automation systems [1,2], which are becoming more and more popular, as well as in programmable controllers [3,4]. The relays available on the market differ not only in technical parameters but also in construction and purpose. An analysis of the literature on relays shows a certain research gap in the field of low- and medium-current AC switches. Research presented in the referenced literature focuses either on different materials, like Ag-W, or on higher test currents, over 1 kA, and contact forces up to 50 N. It seldom touches upon the issues related to lower static contact forces, in the range of centinewtons, together with voltages up to 230 V AC and test currents reaching 300 A. In order to describe contact resistance, it is common to apply a one-point model with ellipsoidal equipotential surfaces [5]. There are also attempts to develop new models to describe the value of contact resistance for two conducting surfaces [6]. The microstructure of a contact surface is presented in Figure 1. The conducting contact area Ac is much smaller than the nominal (apparent) surface AC. The difference is significant, and the actual contact area may constitute about 5% of the apparent contact surface [7,8]. The contact resistance of electro-energetic switchgear is a significant operational parameter. It is important that the resistance over the maintenance period of a relay reach the smallest possible values and, at the same time, not change over time. Its value influences the acceptable working load of a relay, which is related to its heating. The value of contact resistance depends on the shape resistance and the resistance of the thin film layer [5]. The resistance of the thin film layer Rn is difficult to establish analytically, as it depends on many factors, sometimes random, including ambient temperature, humidity, and contact material. The shape resistance Rk mainly depends on the contact material and the clamping force of the contacts. The value of the relay's contact resistance is influenced by the material being used. The contact surface may be made of pure metals, including copper, silver, gold, platinum, palladium, tungsten, or molybdenum. However, alloys and sinters are more often used, such as silver-copper, silver-cadmium, silver-palladium, silver-cadmium oxide, silver-tungsten, silver-nickel, and silver-tin oxide [9]. The contact surface may be covered with an additional layer of material in order to enhance its properties (e.g., resistance to material transfer). Coatings made of tin, silver, or gold are also applied.
The coating of a contact point with a layer of tin leads to a minimal increase of contact resistance in comparison to a material that is not coated. The layer of silver causes the reverse effect of decreasing the value of the transition contact resistance [10]. Presently, the most common contact materials for low-voltage relays for alternating currents of average power are sinters of silver with nickel (AgNi), cadmium oxide (AgCdO), and tin oxide (AgSnO2). Materials that are made of silver-metal or silver-metal oxide typically present higher resistance to welding [11]. The frequency of occurrence and the force of the welding of contacts increase proportionally to the amperage of the electric arc, whereas the ignition time of the arc does not have the same impact [12]. The force of the welding of contacts does not show any dependence on the static clamping force between contact surfaces; however, it is dependent on the travelling pace of the moveable contact. These considerations apply mainly to contacts made of pure silver [12]. Some contact materials present a higher probability of welding than others. If a contact material is characterized by a high tendency to welding, then the joints will be strong. In this regard, pure silver has the worst properties, showing a higher tendency to the welding of contacts [13], and this is one of the reasons why it is not used as a contact material. Slightly better parameters are presented by AgCdO: it has a lower tendency to welding, but it is of higher resistance. Contact materials such as AgSnO2 and AgNi, used for the analyzed relays, are characterized by similar properties in terms of welding and the resistance to tearing apart [14]. In this case, the first one creates stronger welds, but it has a lower tendency for their occurrence. This article describes the influence of the short-circuit current phase on the change in electric contact resistance. The contact materials tested were AgNi, AgCdO, and AgSnO2. The AgSnO2 was tested in two variations. For the first one, contact rivets were made using the internal oxidation process and are referred to in the article as simply AgSnO2. For the second one, the rivet was designed to withstand higher inrush currents (up to 80 A for 20 ms) and is referred to in the paper as AgSnO2 P. In previous work [15], the experiments were carried out only for current switching at a phase equal to zero, and the AgCdO material was not tested. The results of this study are helpful in assessing contact materials used in low-voltage relays, as they indicate how the contact resistance may change while switching fault current. This is an important factor for the long-term exploitation of relays used in modern electrical installations.
State of the Art

Relays intended for switching electrical loads are prone to some disadvantageous switching phenomena. These phenomena may include the occurrence of overload currents and short-circuit currents that, as they flow through the relay contacts, can affect their surface condition and, consequently, the value of the contact resistance, which is an important operational parameter of the relay. This situation may also lead to a shortening of the relays' service life or, in extreme cases, to their complete destruction. Moreover, long-term exposure to higher temperature may also lead to a relay's degradation, affecting, for example, its contact resistance and opening and closing times [16,17]. A corrosion film results in an increase in contact resistance and thus a decline in contact performance.
The problems associated with low-voltage relay contacts are studied at several research centers around the world. The literature presents test results for contacts made of various materials [12,13,18]. These tests are carried out both under normal operating conditions and under conditions of specific exposures (e.g., short circuits). Morin et al. [13], Neuhaus et al. [12], and Doublet et al. [18], who independently undertook work on similar contact materials (AgNi, AgCdO, and AgSnO2), focused their research on direct-current circuits of low voltage and small current. As shown by Morin et al. [13], each make operation results in contact material transfer (high for AgCdO and lower for AgNi and AgSnO2) and a high welding tendency for AgCdO, lower for the latter two. According to Neuhaus et al. [12], the welding force is hardly influenced by the static contact force. Supply voltage values sufficiently higher than the minimum arc voltage cause stable bounce arcs lasting the total bounce period; this is the case in the presented research, as the supply voltage is higher than the minimum arc voltage. In turn, the publication by Doublet et al. [18] states that AgSnO2 gives the best performance under short arcs as compared to Ag and AgNi: it presents a low welding and erosion tendency for short arcs, with higher erosion for longer arcs. However, these studies focus only on low-voltage (< 50 V) DC circuits. In the literature related to relays, there are also papers about making alternating-current circuits at medium voltage and currents of several kA [19][20][21]. The operation of making significant currents may lead to contact bounces, and as the switched current increases, so does the mass loss of the contact rivet [11,21-23].

Materials Used and Their Characteristics

The tests considered the following contact materials: AgNi, AgCdO, as well as AgSnO2 (bimetal rivet) and AgSnO2 P (single metal rivet). Each of them was composed of 90% silver and a 10% addition of nickel, cadmium oxide, and tin oxide, respectively. Bimetal rivets are mainly made using powder metallurgy technology or, in the case of metal oxides, using so-called internal oxidation. Single metal rivets are mainly produced from wires made of a given contact material, and their shape is obtained through cold forging. Selected properties of the contact materials used in the tests are presented in Table 1.

Testing Circuit Diagram

The circuit diagram of the testing circuit is presented in Figure 2 and the test bench in Figure 3. The circuit is supplied directly from the power network at a low voltage of 230 VAC. The circuit is protected against short circuits and overloads with installation switches of rated operating current 16 A and characteristics B, C, or D, and also with a general-purpose gG 16 fuse. For each contact material, a single test was conducted for every protection device, which resulted in four contact resistance values for each contact material, before and after the test. A single test was executed for each tested relay, as each test had to be conducted on a new set of contacts.
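As a concrete illustration of the test plan just described, the following sketch enumerates the trials; the record layout and variable names are illustrative assumptions, not part of the original experiment records.

```python
from itertools import product

# Materials and protection devices as named in the paper; the dictionary
# layout itself is a hypothetical illustration of the 4 x 4 test matrix.
materials = ["AgNi", "AgCdO", "AgSnO2", "AgSnO2 P"]
protections = ["B16", "C16", "D16", "gG 16"]

# One fresh relay (new set of contacts) per material/protection pair,
# each trial yielding a contact resistance reading before and after the test.
trials = [
    {"material": m, "protection": p, "R_before_mohm": None, "R_after_mohm": None}
    for m, p in product(materials, protections)
]
assert len(trials) == 16  # sixteen test trials in total
```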
Four relays were tested for each contact material, one with each of the protection devices mentioned above, which resulted in sixteen test trials. Thanks to the synchronizing device used to close the relay at a selected phase of the supply voltage, repeatable testing conditions were obtained. For the short-circuit current phase, two extreme cases were selected: at the transition of the voltage through zero, and at the moment when the supply voltage reached its maximum value. Since the circuit was of a resistive nature (power factor ≈ 0.99), the current in the circuit was in phase with the supply voltage. The results obtained for switching at the zero of the supply voltage for contacts made of AgNi and AgSnO2 have already been presented in [15]. The expected short-circuit current was limited with a resistor Rlim (set to 0.729 Ω) to the value of 320 A (Im = 453 A). The average peak value of the short-circuit current for the tests switched at the voltage zero equaled 421 A, and for switching at the maximum of the voltage the average current amplitude reached 457 A. The average value reached during the tests could exceed Im because of supply voltage fluctuations, as the test stand was supplied directly from the public power network; however, the difference is less than 0.88% and can be omitted from the discussion. Because of the current-limiting action of the protection devices, the maximum current for switching at the voltage zero was lower than the amplitude of the expected current. The instantaneous value of the current in the circuit may be described using Equation (1). The test circuit has a very low inductance L, so the power factor is almost equal to one and the load is practically purely ohmic, that is, φ ≈ 0. The stand is powered from the public network, so the pulsation ω equals 314 rad·s−1. For the two selected cases, switching at the voltage zero (ψ = 0) and at the voltage maximum (ψ = π/2), Equation (1) can be simplified as follows.
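Equation (1) itself did not survive extraction; the following is a reconstruction, assuming the standard making-current expression for a series R-L circuit with supply voltage u(t) = Um sin(ωt + ψ), where ψ is the switching phase angle and φ the load phase angle:

```latex
i(t) = \frac{U_m}{Z}\left[\sin(\omega t + \psi - \varphi)
       - \sin(\psi - \varphi)\,e^{-\frac{R}{L}t}\right],
\qquad Z = \sqrt{R^{2} + (\omega L)^{2}},\quad
\varphi = \arctan\frac{\omega L}{R}
```

With φ ≈ 0, the coefficient sin(ψ − φ) of the aperiodic term vanishes for ψ = 0, and for ψ = π/2 the term decays almost instantly because the time constant L/R is negligible; the two cases thus reduce to i(t) ≈ (Um/R) sin(ωt) (Equation (2)) and i(t) ≈ (Um/R) cos(ωt) (Equation (3)).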
For the first case, ψ = 0, the exponential part of Equation (1) equals zero, and the result is presented in Equation (2). For the second case, ψ = π/2, the exponential part also vanishes, since the circuit time constant L/R is negligible, and the result is presented in Equation (3). The oscillogram of the current and of the voltage between the contacts was registered using a GW Instek GDS-3154 oscilloscope (GW Instek, Taiwan) with a current probe and a voltage probe. An example oscillogram presenting the current in the circuit and the voltage between the contacts of a relay at a switching phase of ψ = 90°, for the contact material AgSnO2 P, is shown in Figure 4. Since the current flow lasted less than 7 ms, the heating of the contact has been omitted from the discussion. The relays were switched into the circuit through a dedicated connecting socket.
The measurement of resistance was performed with Kelvin's 4-wire method using a meter for small resistances, an MI3252 (Metrel, Horjul, Slovenia). To eliminate a measurement error in the contact resistance, a correction value was introduced that accounted for the resistance of the current paths of both the socket itself and the relay. The measuring current was 10 A, with a reading accuracy of ±0.25% and a range of 2000 mΩ to 199,999 mΩ. The contact force for each contact material is presented in Table 2. Two different constructions of relays were used, one with the AgNi, AgSnO2, and AgSnO2 P materials, and the other with AgCdO. These two relay models had different closing mechanisms, which led to different contact forces.

Results and Discussion

The tests analyzed four models of relays with contacts made of the materials presented in Section 3.1. Each relay underwent a single connecting trial. Before and after the trial of making a short-circuit current, the resistance value was measured. The trials were performed for different, commonly used protection devices against short circuits and overloads. Table 3 presents the mean values of the short-circuit duration t_z, Joule's integral i²t, and the maximum short-circuit current i_m when switching a short-circuit current, depending on the applied protection. The shortest short-circuit durations (i.e., the shortest operating times of the protection devices) occurred when the circuit was switched at the moment of the supply-voltage maximum. The differences between the durations are small and result from the properties of the particular devices. The short-circuit time t_z was calculated in the manner presented in [24]: it is measured from the moment the contacts close to the moment the current is switched off by the protection device, and therefore represents the total time for which the contact conducts current. For the energy i²t transported through the particular switches during a short circuit switched at the voltage zero, the sequence of values is as follows (from the lowest to the highest): fuse gG 16, switch B16, C16, and D16. There is a visible (more than 1.5-fold) difference between the lowest and the highest value. Except for the gG 16 fuse, the value of Joule's integral is lower for trials switched at the maximum of the supply voltage than at the voltage zero. For the mean values of the maximum current, there are only small differences between the applied protection devices. There is no visible relation between the applied protection device, the energy transported during the short circuit, and the value of contact resistance after the trial.
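For clarity, the sketch below shows one plausible way to derive the three quantities reported in Table 3 from a sampled current record; the on/off threshold and the example waveform are assumptions, and the paper computes t_z following [24], which may differ in detail.

```python
import numpy as np

def short_circuit_metrics(t, i):
    """Return the conduction time t_z, Joule integral i^2*t, and peak current
    i_m for a sampled short-circuit current record (uniform sampling assumed)."""
    conducting = np.abs(i) > 0.01 * np.max(np.abs(i))  # crude on/off threshold
    idx = np.flatnonzero(conducting)
    t_z = t[idx[-1]] - t[idx[0]]          # time the contact conducts current
    i2t = np.trapz(i ** 2, t)             # let-through energy, A^2*s
    i_m = np.max(np.abs(i))               # maximum short-circuit current
    return t_z, i2t, i_m

# Example: a 50 Hz half-wave of 453 A amplitude cleared after about 7 ms
t = np.linspace(0.0, 0.007, 701)
print(short_circuit_metrics(t, 453.0 * np.sin(2 * np.pi * 50 * t)))
```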
The changes in the values of contact resistance before and after the trial, for the two above-mentioned cases of switching the testing circuit, are presented in Figure 5. Mean values of this resistance are presented in Table 4. First, there is the case of closing the relay contacts at the zero value of the supply voltage. The values of contact resistance before the trial are different for each of the analyzed materials. Comparing their mean values before the trial, the highest value characterizes the contacts made of AgSnO2 P, followed, in decreasing order, by AgSnO2, AgCdO, and AgNi. After the trial of switching the short-circuit current, the contact resistance changed: for each material, there was a significant decrease of its value. The mean value of contact resistance for all trials equals 0.3053 mΩ and does not show a significant difference between the particular materials. In none of these trials was welding of the contacts observed.

Table 4. Mean values of the contact resistance for relays switched on at the zero and at the maximum of the supply voltage, before and after the trial: Řb, mean value of contact resistance before the trial; Řa, mean value of contact resistance after the trial.

In the case of closing the contacts of a relay at the maximum value of the supply voltage, the value of contact resistance before the trial (for AgNi, AgSnO2, and AgSnO2 P) is lower than in the previous case, when the make operation occurred at the zero current value. The difference exists despite the fact that the relays were not specially selected (they were chosen randomly) and received no initial surface treatment. For contacts made of AgSnO2, the value of the contact resistance increased after one of the trials; it was the only such case. The most significant change in these trials was the occurrence of contact welding. After the test trial, the relay's coil was disconnected from the power supply and the contacts' position was tested with an ohmmeter. The low value of the contact resistance indicated that the contacts were welded, as they stayed in the closed position without the external force provided by the electromagnetic coil.
For contacts made of AgNi, welding occurred in three out of four trials, and for those made of AgCdO, welding occurred in every trial. Table 4 presents the mean values of the contact resistance calculated on the basis of the trials in which no welding of contacts occurred. Therefore, for contacts made of AgCdO there are no calculated values, and for AgNi the value refers to only a single measurement. The welding observed is related to contact bouncing that occurred during the test. A bounce appeared in seven out of eight test trials for AgNi, in five for AgSnO2, in six for AgSnO2 P, and in four for AgCdO. As AgNi and AgCdO are less resistant to contact welding, the phenomenon occurred in them, while the contacts made of AgSnO2 and AgSnO2 P remained resistant. Studies have shown [25] that contact welding occurs only when the switching phase is π/2 and a bounce occurs. However, some materials, like AgSnO2 and AgSnO2 P, are immune to contact welding at this current level, as compared to AgNi and AgCdO.

An example of a switching oscillogram with a registered contact bounce is presented in Figure 6. It can be observed that, in the period between around 45 µs and 1 ms, the mean value of the voltage between the relay contacts equaled approximately 20 V (i.e., the same as the voltage drop of an electric arc in air). The voltage (8 V) remaining after 1 ms was recorded after switching off the circuit, and it is believed to be due to the working mechanism of the differential probe used.

The question is the following: why do the same types of relays, for the same values of expected short-circuit current, behave differently for different switching phases of the current? The answer comes from the combination of two mechanisms: electro-dynamic forces [26], which act during the flow of currents of significant values, and the bounce, which results from the impact of the two contacts against each other. While switching the circuit at the zero phase of the voltage, the electro-dynamic force reaches its maximum value after 5 ms. Thus, these two mechanisms do not overlap in time, the resultant opening force of the contacts is lower than the clamping force, and there is no contact bounce. For the second case, the electro-dynamic force reaches its maximum synchronously with the peak of the short-circuit current, that is, just after switching on the circuit.
This leads to the overlapping of these two mechanisms in time and, as a consequence, to the contact bounce. During this bounce, at the ignition of an electric arc, the pressure of the plasma located between the relay contacts increases, which enhances their opening. The ignited electric arc leads to a strong, local heating of the contact surface, whose temperature may exceed the melting temperature of the material. Closing the contacts in such a case produces solid metallic welds, that is, the durable welding of the contacts. The calculated value of the electrodynamic force is in the range between 0.20 N and 0.25 N, with the lower value referring to AgNi and the higher value to AgCdO; for AgSnO2 and AgSnO2 P, the value equals 0.21 N. As this value is lower than the nominal contact force shown in Table 2, it is clear that, without the contact bounce occurring during the making process, the contacts would not have been welded.
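The paper cites [26] for the electrodynamic force calculation; as one common model, Holm's constriction repulsion force can be sketched as follows. The radii used are illustrative assumptions, so the result need not reproduce the 0.20-0.25 N range reported above.

```python
import math

MU_0 = 4e-7 * math.pi  # vacuum permeability, H/m

def holm_repulsion_force(current, r_contact, r_spot):
    """Holm's repulsion force at a contact constriction:
    F = (mu_0 / (4*pi)) * I^2 * ln(R/r), with R the contact (rivet) radius
    and r the radius of the conducting spot."""
    return MU_0 / (4 * math.pi) * current ** 2 * math.log(r_contact / r_spot)

# 453 A peak with an assumed rivet radius of 1.5 mm and spot radius of 10 um
print(holm_repulsion_force(453.0, 1.5e-3, 10e-6))  # ~0.10 N
```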
It is believed that, for welded contacts, the measurement of contact resistance is unjustified; thus, such trials are not recorded in Table 4. For contacts made of AgSnO2 and AgSnO2 P, no welding occurred even in the trials with contact bounce. For AgSnO2 P, the mean contact resistance after the trial decreased in the same way as in the trials switched at the zero value of the supply voltage. Only for contacts made of AgSnO2 was there a slight increase of the mean value of the contact resistance; this increase, however, was calculated on the basis of the single trial in which the contact resistance increased, while a decrease was observed in the remaining three trials. For the trials resulting in the welding of contacts, the transition resistance was also measured; its value is marginal in comparison with the other measured values.

Conclusions

The presented results indicate the influence of making a short-circuit current on the value of the contact resistance. This influence depends not only on the material the relay contacts are made of, but also on the switching phase of the short-circuit current. Switching on at the moment when the supply voltage crosses zero leads to a decrease of the contact resistance: the initial value of the contact resistance decreased significantly after the trial of switching on a short-circuit current at zero voltage. Switching on at the moment when the supply voltage reaches its maximum value often leads to the welding of contacts. The value of contact resistance changed for contacts which were not welded, but these changes were less apparent than in the first discussed case. It is worth mentioning that the protection devices applied in the testing circuit did not ensure a sufficient level of protection for the relay contacts; a protection device should ensure that the protected circuit and all its components remain functional after a short circuit. Contacts made of AgNi and AgCdO turned out to be prone to welding, whereas, in the range of short-circuit currents up to 320 A, the contacts made of AgSnO2 and AgSnO2 P were characterized by resistance to welding. These results are consistent with the data presented in the literature [10][11][12][16]. As the electrical contact resistance (ECR) is an important factor in determining overall relay lifetime reliability, knowledge of how it is influenced by short-circuit current becomes relevant. The ECR value is also a key factor at the design stage of a relay, as it influences, for example, its rated current. Future work should include tests with both higher and lower currents, together with research on material transfer and on contact rivet mass loss.

Author Contributions: Section 1 was prepared by A.K., G.D., and K.N. Section 2 was prepared by A.K. and G.D. Section 3 was prepared by A.K. Section 4 was prepared by A.K., G.D., and K.N. Section 5 was prepared by A.K. and G.D.

Funding: The research was funded from resources of the Ministry of Science and Higher Education for statutory activities No. 04/41/DSMK/4133, under the following task name: Switching and electrode processes in low voltage relays: the effect of switching on nominal and short-circuit currents.
Analyzing the effect of context of second language learning: Domestic intensive and semi-intensive courses vs. study abroad in Europe

This study examines the second language (L2) written and oral performance of three groups of Spanish-speaking university students after being exposed to English in different contexts. One group of learners was spending some time abroad (Erasmus students in the UK), and two groups were following classroom instruction in two different types of intensive courses in Spain: "intensive" and "semi-intensive". The learners' L2 written and oral production was analyzed at different time points through different measures of fluency, syntactic and lexical complexity, and accuracy. The main objective of this study was to compare the performance of the students abroad with each of the two intensive programmes. According to the results of the statistical analyses, after an equivalent period of exposure to the L2 in the two contexts, the students abroad outperformed the learners in the "at home semi-intensive" programme in the post-test in some of the variables under study, namely fluency and lexical complexity. Nevertheless, the students' written and oral performance after an intensive course at home and after the equivalent time abroad was similar.

Introduction

Context of learning is undoubtedly a factor that needs to be considered when examining second language acquisition. As Collentine (2009) suggests, "one of the most important variables that affects the nature and the extent to which learners acquire a second language (L2) is the context of learning, that is, whether the learning takes place within the society in which the L2 is productive or where the first language (L1) is productive" (p. 218). L2 learning contexts vary in terms of the quantity and quality of L2 input they provide, and the opportunities they offer for learners' output and interaction with native speakers. Moreover, contexts also determine the degree of explicitness/implicitness of the L2 knowledge that tends to be attained and whether automatization is fostered (DeKeyser, 2007). According to DeKeyser (2007), learning the L2 abroad provides more opportunities for practice in real-life situations and thus for the automatization of L2 skills. On the other hand, L2 classroom learning in the students' own country usually promotes the development of declarative knowledge to a larger extent (DeKeyser and Juffs, 2005; DeKeyser, 2009). The objective of this particular study is to analyze the effects on L2 proficiency of two types of contexts which provide different input for L2 learners, as well as different types of practice: a study abroad (SA) context and two types of "at home" (AH) programmes (intensive and semi-intensive). In the former, L2 learners, who have previously been exposed to classroom teaching in their home country, have the opportunity of regularly using the L2 for everyday interaction as well as of being exposed to an extensive amount of input in the L2. In the second context, however, students only interact with their teacher and their classroom peers, and the input these learners obtain is largely limited to the classroom hours and is, in many instances, not native-like. Even if, technically, we are considering two contexts (at home vs. abroad), our main interest is in the comparison of three different types of exposure to the L2, and that is why semi-intensive and intensive courses will be analyzed separately.
Although research on contexts of learning or SA is becoming more popular within the second language acquisition (SLA) literature, there are few studies that examine L2 learning abroad in Europe (Byram and Feng, 2009; Coleman, 1998; Dyson, 1988; Llanes and Muñoz, 2009; Papatsiba, 2005; Regan, 1995, 1998; Teichler, 2004), and even fewer studies that consider intensive courses when analysing L2 learning in a foreign language context (Serrano and Muñoz, 2007; Serrano, 2011). Nevertheless, intensive courses are noticeably quite comparable to the SA context, considering the concentration of exposure to the L2 at the learners' disposal. Our study aims to fill the gap in these areas by including the European SA context and two types of AH programmes that offer more intensive L2 practice than those traditionally considered as control groups in previous research on SA. The AH intensive courses under examination here offer 10 hours/week (semi-intensive) and 25 hours/week (intensive) of instruction, as opposed to the typical AH courses (2-4 hours/week). Freed et al. (2004) considered AH courses that offered approximately 17.5 hours of instruction a week. However, such courses were rightly classified as "immersion" courses, since the learners had the opportunity of practicing the L2 after finishing their classes. The students in the intensive programmes included in the present research went home after the instructional time, and not to a dormitory or residence area with other L2 learners; therefore, the exposure they received was restricted to the classroom. In this sense, the contexts included in the present study (European SA and two types of AH intensive programmes) have not been previously compared. Additionally, whereas most studies examining context of learning have only concentrated on one skill or a specific area within one skill, this particular study examines different areas of both written and oral production.

Literature Review

Even though there is a general belief that learning/practicing the L2 in the country where it is spoken leads to quicker and more remarkable language progress than L2 classroom learning, most empirical studies investigating the issue have failed to find such clear superiority for the SA context with respect to the AH context, except for a few areas, most notably oral fluency. Students abroad have been claimed to be significantly more fluent after the experience than their peers who stayed at home learning the L2 in the classroom (Freed, 1995; DeKeyser, 1991; Lafford, 2004; Möhle, 1984; Segalowitz and Freed, 2004). Similarly, students in the SA context have often been reported to significantly increase their vocabulary after their experience in the foreign country (DeKeyser, 1991; Dewey, 2008; Ife et al., 2000; Lennon, 1990; Llanes and Muñoz, 2009; Milton and Meara, 1995). The progress SA students make in other language areas has not generally been reported to be superior to that of AH students (Collentine, 2004; DeKeyser, 1991; Dewey, 2004; Díaz-Campos, 2004; Freed et al., 2003; Lennon, 1990; Mora, 2008). What many studies analyzing the effects of the SA experience on learners' L2 skills have claimed, however, is that most educators and researchers perceive that the majority of the students, after staying abroad, demonstrate a qualitative change in their L2 skills.
Nevertheless, the measures that have traditionally been used to analyze learners' progress tend to focus on features which are highly related to formal instruction: that may be the reason why many studies have found advantages for the AH context, according to Collentine (2004). He also thinks it is important that measures examining other types of language gains are developed in order to quantify the impression that "the SA learner can 'tell a story' a little better and can 'get their point across' more effectively" (Collentine, 2004, p. 245). It is true that there might be some L2 gains in the SA context that are hard to quantify, yet most, if not all, of the students staying abroad whose performance has been examined also received formal instruction, even in higher amounts than the students in the AH context. It is thus surprising that the SA students' results are not superior, or are in fact lower in many cases, with respect to their peers at home (Collentine, 2004). One explanation can be that the gains in fluency which are unarguably attributed to students in the SA context are made at the expense of growth in other areas, such as grammatical complexity or accuracy. The majority of the studies investigating L2 acquisition in a SA context have used comparison data from AH classroom learners. Although most comparison studies with learners in the SA context have been made with regular AH programmes, some research has compared the students' gains in SA, AH, and domestic intensive (or "immersion") courses. Freed et al. (2004) found that the students in the immersion context (seven weeks of French instruction during the summer, approximately 17.5 hours a week) improved their fluency more than their peers abroad. The learners in the AH programme did not make any significant gains according to the fluency measures used in that study. When examining the data obtained from the out-of-class contact questionnaire, it was evident that, thanks to the large number of extracurricular activities organized for the students in the immersion programme, those learners reported using the L2 more than their peers in the other two contexts. Another study comparing learners in SA and domestic immersion (Dewey, 2004) found no significant differences in reading comprehension in Japanese between the two contexts, except for self-assessment: the students in the SA context felt more confident of their reading abilities than those in the AH intensive programme. From these results Dewey (2004) concluded that a 9-week intensive summer course can produce gains in reading abilities, as determined by objective reading measures, comparable to an 11-12 week stay in Japan. The language gains made by learners receiving intensive instruction at home have been demonstrated to be superior not only to the gains experienced by some students abroad, but also to those attained by students in domestic programmes that do not offer concentrated hours of instruction (or "regular" L2 courses offering a maximum of 4 hours of instruction a week). Most studies comparing intensive and regular L2 programmes have included Canadian primary school learners of English with French as their native language. These studies clearly demonstrate that intensive L2 instruction promotes L2 acquisition more than regular instruction (Collins et al., 1999; Netten and Germain, 2004; Spada and Lightbown, 1989; White and Turner, 2005).
These findings have also been replicated in the case of Spanish-speaking adult learners of English at an intermediate proficiency level (Serrano, 2007; Serrano and Muñoz, 2007; Serrano, 2011). In general, it can be said that certain advantages have been attributed to contexts other than the typical L2 classroom programmes that offer long periods of instruction (usually from primary school until the end of high school) with minimum time concentration (2-4 hours every week). In the present study some of these "less typical" contexts of L2 learning are analyzed in order to shed some light on how context of learning affects L2 acquisition. Research on intensive instruction for adult learners is indeed necessary: many students learn the L2 in this context all over the world and little research has been done examining this type of programme. Likewise, the number of students in Europe who participate in stay abroad programmes under the Erasmus scheme is also noteworthy (e.g., according to the European Commission for Education and Training, during the academic years between 2004 and 2008, a total of 466,000 students, more than 150,000 per year, engaged in a SA experience thanks to the Erasmus scholarships).

Research Questions

The purpose of this study is to analyze the effect of context of acquisition on the written and oral performance of L2 learners. The study abroad context will be compared to two types of "at home" programmes with different degrees of concentration of L2 hours of instruction (intensive and semi-intensive programmes), always keeping the days of L2 exposure constant. More specifically, our research questions are the following:
1. Is the SA context more or less beneficial than an intensive course "at home" for the development of L2 written and oral production in terms of fluency, syntactic complexity, lexical complexity and accuracy?
2. Is the SA context more or less beneficial than a semi-intensive course "at home" for the development of L2 written and oral production in terms of fluency, syntactic complexity, lexical complexity and accuracy?

2. Method

Learning Contexts and Participants

A total of 131 participants from two different contexts were considered: at home EFL classroom learners receiving intensive instruction in Spain (N=106), and study abroad students from Spain learning English in the UK (N=25). Within the former group, two programmes offered at the language school of a university in Barcelona, Spain, were examined: intensive (N=69) and semi-intensive (N=37). These programmes will always be considered separately in the analysis, as the focus of this study is to examine how each of these programme types compares to the SA context. The methodology used in the semi-intensive and intensive programmes is highly similar: both programmes follow the same syllabus and books, and the students take the same exam at the end of the course. The intensive programme, however, can be considered more similar to an "immersion course", since the learners are in contact with English for a long period of time each day (5 hours). Most of the AH participants included in this research were university students falling within the 18-23 year-old range, who were taking English classes in order to obtain elective credits. The percentage of males (43.6%) and females (54.4%) is similar. All the students are comparable in terms of motivation and previous experience with English.
With respect to the instructors, each of the groups considered for this study had a different teacher. The participants in the SA context were 25 Spanish-speaking learners (7 males and 18 females) who were studying at the same university in the UK thanks to the Erasmus European exchange programme. The Erasmus programme is the most popular mobility programme for studying abroad within the European framework. Scholarships are awarded to European undergraduate students and offer the possibility to study in a European country for one semester or for a whole year, so that participants can improve their second language skills and get to know another culture. All these learners had received explicit instruction in the L2 in their home country. While in the UK, the majority of the students (76%) had a total of 8-12 hours a week of classes in English (including English language classes). Most of the SA participants stayed in houses with other students (60%), although 20% stayed in halls of residence, and another 20% with families. We do not have an independent indicator of these students' proficiency level before their stay abroad. Nevertheless, the pre-test scores of all the students (both in the AH and SA contexts) will be considered as covariates in the statistical tests, and the analyses of the students' performance in the post-test will thus control for initial L2 knowledge (see Section 2.3 for a clearer description of the statistical procedures).

Procedure and Instruments

In the case of the AH context, the same data collection procedure was followed for the two different types of programmes. One researcher was in charge of the data collection, although she received occasional help from three research assistants for approximately 25% of all the data that were collected. All the researchers followed the same instructions when implementing the tests. The students' written and oral production was elicited by means of a composition and an oral narrative. The students' performance was measured twice, once towards the beginning of the course and a second time towards the end of the course. The students took both the pre-test and the post-test during class time. In the case of the intensive course, the number of days of class between pre-test and post-test was 15, whereas for the semi-intensive course the corresponding period between pre-test and post-test was two months. The number of hours between the two test times was the same (80 hours) for both programmes. The topic of the composition was "My best friend" in the pre-test, and "Someone I admire" in the post-test. The students were given 15 minutes to write the composition and were asked to use approximately 150 words. For practical reasons, the oral task was performed by a subgroup of students chosen randomly (N=12 in the semi-intensive course; N=43 in the intensive programme). The students were recorded while they told a narrative on the basis of a series of pictures that presented two children and their mother preparing a picnic (Heaton, 1966). This test has been extensively used in a variety of projects including learners with different languages and in different age groups (Muñoz, 2006; Tavakoli and Foster, 2008). The SA students performed the same tasks as the AH students. In this case one researcher collected all the data. The pre-test was administered right before classes at the university started.
The researcher met with the students individually or in pairs at the university premises, where they completed the oral task first; it was recorded in a quiet room with only the researcher present. Then, the students performed the written task under the same conditions as the AH students. In order to compare the SA students with the AH intensive learners, the former performed the post-test 15 days after the pre-test, which was the same lapse of time between the administration of pre-test and post-test for the AH intensive group. To facilitate comparison with the AH semi-intensive programme, the SA learners wrote another composition on a similar topic ("My best friend in Southampton") and told the oral narrative again approximately two months after the pre-test, which is the time between both tests for the AH semi-intensive group (see Figure 1).

[FIGURE 1]

We are aware that we are only controlling for "days" of exposure and not "hours" when we compare the SA group with the two AH groups, which is what most studies on SA vs. AH have done. We can only be sure about the hours of exposure between pre-test and post-test for the AH groups (around 80), since those hours were mostly restricted to the classroom, but we do not have a detailed account of the number of hours of contact with English for the SA participants. Nevertheless, even if we had asked students to keep a record of the hours they were in contact with the English language every day/week, it would have been highly difficult to find a group of learners with the same number of hours of L2 practice per week in the same SA context during the same period of time, such that the number of hours coincided with the number of classroom hours for learners AH. Given the difficulty of a design that controls for hours of practice per week across contexts, we decided to control for days of exposure between pre-test and post-test among groups.

Data Analysis

The students' written and oral production was analyzed in terms of fluency, complexity (both syntactic and lexical) and accuracy. The same measures were adopted for both modes, except in the case of fluency. All these measures have been considered among the most reliable for analyzing students' written and oral production (Wolfe-Quintero et al., 1998). Written fluency was examined in terms of words per T-unit (W/T). Fluency in oral production was examined by means of syllables per minute (Syll/min), since this measure is generally considered more appropriate for oral fluency than W/T (Griffiths, 1991). For this particular study, the syllable count did not include false starts, repetitions, self-corrections, or words in the students' first language. Syntactic complexity was examined using the T-unit complexity ratio, clauses per T-unit (C/T). Lexical complexity was examined using Guiraud's Index of Lexical Richness: word types divided by the square root of the word tokens (Types/√Tokens). Finally, accuracy was examined by counting the errors per T-unit (Err/T).
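As an illustration of the lexical complexity and oral fluency measures just defined, the following is a minimal sketch; the tokenizer and the sample sentence are assumptions, and the study's syllable counts were produced at transcription time in CLAN.

```python
import math

def guiraud_index(tokens):
    """Guiraud's Index of Lexical Richness: word types / sqrt(word tokens)."""
    types = len(set(w.lower() for w in tokens))
    return types / math.sqrt(len(tokens))

def syllables_per_minute(syllable_count, seconds):
    """Oral fluency (Syll/min); the count excludes false starts, repetitions,
    self-corrections, and L1 words, which is handled during transcription."""
    return syllable_count / (seconds / 60.0)

sample = "my best friend is very kind and my friend likes music".split()
print(round(guiraud_index(sample), 2))  # 9 types / sqrt(11 tokens) = 2.71
print(syllables_per_minute(220, 120))   # 110.0 syllables per minute
```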
The data were transcribed and coded using CLAN (MacWhinney, 2000). Three different researchers were in charge of coding the data for the more objective measures (W/T, C/T, Syll/min). Inter-rater reliability was calculated by means of percentage agreement for these measures, reaching 98%. For accuracy, which is usually more problematic, two researchers were in charge of the coding; one of them coded 60% of the data and the other 40%. Inter-rater reliability was calculated on 14% of the data, reaching 96.5% agreement. After all the samples were coded, analyses were performed using the Statistical Package for the Social Sciences (SPSS). In order to compare the performance of the students in the contexts under analysis, different Multivariate Analysis of Covariance (MANCOVA) tests were performed. Separate MANCOVAs were executed for written and oral production because doing two separate tests ensures a higher number of students in the written production task. This task was performed by all the students in the AH context (N=69 in the intensive; N=37 in the semi-intensive programme), but only a percentage of those learners did the oral production task (N=43 in the former; N=12 in the latter). Consequently, if a single analysis had been done, only the students who did both tasks could have been included, and thus a considerable amount of data would not have been examined. In the written production task four variables were considered: fluency, as measured by words per T-unit (W/T); syntactic complexity, as measured by clauses per T-unit (C/T); lexical complexity, as measured by Guiraud's Index; and accuracy, as measured by errors per T-unit (Err/T). The scores in the post-test for those variables were entered as dependent variables in the MANCOVA, and those of the pre-test acted as covariates, in order to control for initial skill in the L2. Context of learning (SA and AH intensive in the first analysis, and SA and AH semi-intensive in the second) was the independent variable. Regarding the oral production task, the dependent variables and measures considered were the same, except for the oral fluency measure (syllables per minute [Syll/min] rather than W/T). As for the written production task, the scores in the pre-test in the measures of fluency, syntactic complexity, lexical complexity and accuracy were entered as covariates. The independent variable was also context of learning.

Results

The results of the analyses will be presented in two sections. Section 3.1 includes the SA context and the AH intensive programme, and Section 3.2 compares the SA context and the AH semi-intensive programme.

3.1. AH Intensive and SA

Table 1 presents the mean scores and the standard deviations for the written pre-test and post-test for the learners in the AH intensive programme and for the learners abroad after staying in the L2 country for 15 days. It must be noted that the number of students in the SA context is lower because one student could not do the post-test.

[TABLE 1]

The descriptive statistics show that the scores obtained by the learners in the SA context in the post-test were slightly higher than those obtained by the learners in the AH intensive programme. Nevertheless, the results of the MANCOVA, after controlling for pre-test performance, indicate that no differences existed between the learners in the AH intensive programme (N=69) and in the SA context (N=24) on the combined dependent variables: F(4, 84)=1.05, p=.388, Wilks' Lambda=.952, partial eta squared=.048. According to this result, the performance of the learners in the two groups on each dependent variable was comparable in the post-test. Table 2 presents the means and standard deviations for the scores in the oral production task for the learners in the intensive programme (N=43) and abroad (N=24).
As was the case for the written production task, the performance of the learners in the SA context was slightly superior to that of their peers in the AH intensive programme in the post-test in all the measures under analysis.

[TABLE 2]

The results of the MANCOVA were also similar to those of the written production task, in that no significant differences existed between the two contexts on the combined dependent variables: F(4, 58)=.196, p=.940, Wilks' Lambda=.987, partial eta squared=.013.

3.2. AH Semi-Intensive and SA

There were 37 students in the AH semi-intensive group and 25 in the SA group who performed the written production task. The descriptive statistics for the pre-test (beginning of the semi-intensive course for the AH context and beginning of the stay for the SA context) and the post-test (two months later for both groups) are presented in Table 3.

[TABLE 3]

Concerning the oral production task, 12 students were included in the AH semi-intensive programme, and 25 in the SA context. See Table 4 for the means and standard deviations.

[TABLE 4]

In this comparison, significant differences in favour of the SA group were found for some of the variables: F(1, 35)=4.32, p=.046, partial eta squared=.122. As was the case for the written production task, learners' oral syntactic complexity and accuracy after two months abroad or after receiving two months of instruction at home were comparable.

Discussion and Conclusion

According to the results presented in the previous section, it can indeed be claimed that context of learning has certain effects on the L2 development of written and oral production. These differences, however, are restricted to the comparison between the AH semi-intensive context on the one hand, and the SA context on the other. After two months abroad, the learners in the present study demonstrated a more advanced performance in terms of some variables of written and oral production than their peers spending the same period of time in a semi-intensive course AH. In contrast, the students' L2 written and oral production after spending 15 days abroad or the same period in an intensive course at home was similar. With respect to the comparison between the SA students and those in the AH semi-intensive programme, it can be said that the SA context seems to be more advantageous for the development of both written and oral production in terms of fluency and lexical complexity. These results are consistent with other studies that have attributed advantages to SA learners as opposed to AH "regular" (i.e. "non-intensive") learners in terms of oral fluency (Freed, 1995; DeKeyser, 1991; Lafford, 2004; Möhle, 1984; Segalowitz and Freed, 2004) and vocabulary (DeKeyser, 1991; Dewey, 2008; Foster, 2009; Freed, 1995; Ife et al., 2000; Milton and Meara, 1995; Segalowitz and Freed, 2004). In contrast with the learners in the AH semi-intensive context, the learners in the AH intensive programme do not appear to be at a disadvantage with respect to their peers abroad. After controlling for pre-test scores, there were no differences in the measures of written and oral production under study between the learners following an intensive course AH and the learners abroad. The study by Freed et al.
(2004), which also analyzed an intensive programme at home (or rather "domestic immersion") and a SA context with respect to oral fluency, also found that the SA context did not necessarily lead to greater fluency gains, contrary to the findings of studies comparing SA and typical AH courses (Dewey, 2008; Foster, 2009; Freed, 1995; Pérez-Vidal and Juan-Garau, 2009; Segalowitz and Freed, 2004). Students in the domestic immersion courses analyzed by Freed and associates in fact made more gains in oral fluency than those abroad. As was mentioned before, however, this immersion context provided learners with opportunities to practice the L2 outside the class, and the learners took advantage of those opportunities even more than the SA learners. In the present study, though, the learners in the AH intensive context lacked opportunities for L2 practice outside the class, and this could be the reason why the AH intensive context did not lead to more significant advantages than the SA context. The comparison between learners attending semi-intensive and intensive courses has not been performed in this particular study because we are analysing the effect of days of instruction or days of SA, and the days between pre-test and post-test for the students in the two programmes are quite different. Nevertheless, other studies that have compared the performance of adult learners in programmes with different degrees of time concentration have reported benefits for the more concentrated programme type (Serrano, 2007; Serrano and Muñoz, 2007). Other research analysing time distribution in the case of child L2 learners provides further evidence for the positive effect of concentrating the time of L2 instruction (Collins et al., 1999; Lapkin et al., 1998; Netten and Germain, 2004; Spada and Lightbown, 1989; White and Turner, 2005). Although the two AH programmes under study here were classified as "intensive" (indeed, the semi-intensive offers more concentrated instruction than typical L2 courses: 10 hours/week vs. 2-4 hours/week), probably 2.5 hours per session or 10 hours a week was not concentrated enough to be regarded as "intensive". More research should be done on intensive instruction in order to find out how long the sessions should be, or how many hours of exposure the students should have per session (or every week), for a specific L2 programme to be considered intensive. Taking into account the findings from this study and previous studies on SA and on the time factor, it can be claimed that both intensive classroom practice (as promoted in an AH intensive course) and real communicative practice outside the class (as encouraged in the SA context) generally provide a more suitable environment for L2 learning than regular, "drip-feed" (i.e., not concentrated) L2 instruction. DeKeyser (2007, 2010) suggests that mere communicative practice in real-life situations without an appropriate previous command of the L2 is not a guarantee of successful L2 learning abroad, which could be one of the reasons why the SA context has not been found to be systematically more beneficial than the AH context. On the other hand, L2 classroom practice that is not concentrated does not generally facilitate remembering or, even less so, "proceduralising" previously acquired declarative knowledge (using DeKeyser's terms: DeKeyser, 1997), as suggested by Serrano (2011).
The fact that this study did not find statistically significant differences between SA and intensive classroom learning suggests that both are potentially equally beneficial contexts for encouraging L2 development.

The findings from this particular study provide further empirical evidence for the effect of context on L2 learning. In the present case, the least advantageous context seems to be the AH semi-intensive, considering the results of the statistical analyses. Nevertheless, there is an issue that should be taken into account. At the time of the post-test, the AH learners had performed the written and the oral production tasks only once for this study (in the pre-test), whereas the SA students had already performed similar tasks twice (in the pre-test and in the post-test they performed 15 days later, which was compared to the performance of the intensive learners). There could have been a task repetition effect that favoured the SA participants. Nevertheless, the students in the AH semi-intensive course probably practiced similar types of writing throughout their L2 course more often than the SA participants, who were practicing the language in more meaningful and naturalistic contexts. Another limitation of the present study is that it only controlled for days, and not hours, of practice when comparing SA and AH; nevertheless, most studies comparing the two contexts cannot control for hours of exposure, for reasons that have already been mentioned in section 2.2.

In conclusion, this study has demonstrated that context of L2 learning has some effects on the development of the L2, with some advantages for contexts which provide opportunities for intensive language practice. This finding, together with previous findings in the SLA literature, emphasizes the role of the context of learning in L2 acquisition. More research should be done in order to examine which specific feature/s related to a given context is/are key to success in L2 learning, whether it is communicative interaction in real-life situations, classroom instruction, intensity of L2 exposure and practice, or a combination of different factors.

[TABLE 1: Written production — Intensive and SA]
7,277.2
2011-06-01T00:00:00.000
[ "Linguistics", "Education" ]
A Weighted Feature Fusion Model for Unsteady Aerodynamic Modeling at High Angles of Attack: Unsteady aerodynamic prediction at high angles of attack is of great importance to the design and development of advanced fighters. In this paper, a weighted feature fusion model (WFFM) that combines the state-space model and neural networks is proposed to build an unsteady aerodynamic model for the precise simulation and control of post-stall maneuvers. In the proposed model, the influences of the physical model on the neural networks are considered and adjusted by introducing a standardization layer and a new weighting method. A long short-term memory (LSTM) network is used to fuse two mappings: one from flight states to aerodynamic loads, and the other from low-fidelity data to high-fidelity data. Data from wind tunnel oscillation experiments at high angles of attack, using a new kind of wire-driven parallel robot and a traditional tail support, are used for verifying the proposed aerodynamic model. The output of the WFFM is also compared with predictions from other models, such as the state-space model, a single LSTM model, and a feature fusion model without a feature weighting layer. The results demonstrate improved accuracy of the proposed model in the interpolation and extrapolation tests. Furthermore, the WFFM is applied to the flight simulation of the F-16 with different control inputs. Compared with conventional models, the WFFM shows improved accuracy and better generalization capability.

Introduction

Despite the increasing prevalence of advanced beyond-visual-range missiles, close-range dogfights remain a critical aspect of aerial combat. High agility and maneuverability are still indispensable key features of next-generation fighter aircraft. The aerodynamic characteristics of aircraft at high angles of attack are highly nonlinear and unsteady due to phenomena such as flow separation and vortex shedding. The traditional database-based interpolation method is no longer sufficient for the precise simulation and control of post-stall maneuvers. Therefore, the establishment of accurate aerodynamic models is crucial for aircraft dynamics investigation, stability analysis, and control system design at high angles of attack. Past work has focused on mathematical models. These models establish a mathematical relationship between aerodynamic loads and flight states (such as velocity, angle of attack, and sideslip angle) based on unsteady flow phenomena and physical principles; examples include the dynamic derivative model [1,2], the state-space model [3,4], and the indicial function model [5,6]. While these models often possess explicit physical interpretations, their accuracy is constrained by the level of understanding of the physical phenomena and the degree of mathematical simplification involved.
In recent years, with the rapid advancement of artificial intelligence, machine learning models have found widespread application in unsteady aerodynamic modeling. Machine learning models, also known as black-box models, bypass the need to explicate complex physical mechanisms. With a sufficient amount of training data, these models possess powerful nonlinear fitting capabilities to establish the relationship between flight states and aerodynamic loads. Models such as support vector machines [7,8], random forests [9], and neural networks [10][11][12] have been employed for this purpose. Furthermore, recurrent neural networks improve accuracy by capturing the time lag effects of unsteady aerodynamics [13][14][15]. In addition to flight state data, the geometric representation of the airfoil has been used as an input feature for deep neural networks to obtain aerodynamic parameters [16,17]. While machine learning models can fit nonlinear relationships effectively, they cannot provide a reasonable explanation of how inputs are used to make predictions.

If the physical relationships between flight states and aerodynamic loads could be applied to model training, the model would possess both the interpretability of traditional mathematical models and the powerful nonlinear fitting capabilities of black-box models. Currently, there are two approaches to exploring the combination of traditional mathematical models and black-box models.

One approach involves using physics-informed neural networks (PINNs) [18]. The physics equations are incorporated into the loss function of a neural network to constrain the model during training, thereby ensuring that outputs follow known physical laws [19]. Zhao et al. [20] proposed an identification method for aerodynamic models using a physics neural network that incorporates the attitude dynamics of an aircraft. Li et al. [21] utilized a PINN model to predict the parameters of the state-space model using neural networks instead of predicting the aerodynamics directly. While this method enhances the neural network's extrapolation capabilities, it remains fundamentally a state-space model, with no significant improvement in interpolation accuracy.

Another approach is the fusion model, which attempts to integrate traditional mathematical models into machine learning models [22]. The combination of models with different accuracy is also known as the multi-fidelity method. The low-fidelity models' predictions, which are assumed to have trends similar to those of the high-fidelity models' predictions, are used to provide additional information. Wang [23] and Li et al. [24] combined dynamic derivative models with black-box models for unsteady aerodynamic prediction. These are used to compute low- and high-fidelity outputs, respectively. Finally, the least squares method is used to merge the outputs. The fusion model exhibited superior generality when compared to black-box models. However, the effectiveness of the least squares method depends on the correlation between the models and the assumptions about the error distribution. If the models are highly correlated, the least squares method may not provide a significant advantage. Zhang et al.
[25] present an innovative aerodynamic modeling method using heterogeneous data and physical feature embedding, significantly improving prediction accuracy while reducing training data needs. The complexity of implementation, reliance on high-quality data, need for further real-world validation, and demand for substantial computational resources may be potential challenges of this approach.

This paper proposes a weighted feature fusion model (WFFM) based on the state-space model and a long short-term memory (LSTM) network to predict nonlinear unsteady aerodynamics. The main contributions of this paper can be summarized as follows:
(1) An aerodynamic model architecture is proposed that combines the physics model and the black-box model, exhibiting high accuracy in both interpolation and extrapolation tests.
(2) A new method for weighting data is proposed. To reduce the impact of the state-space model error, a feature standardization layer and a weighting layer, implemented using a single neuron and an activation function, are introduced.
(3) Two mappings are established and fused by the LSTM. One is the mapping from flight states to aerodynamic loads, and the other is the mapping from low-fidelity data to high-fidelity data.
(4) To test the model, the proposed model is used to predict aerodynamic loads in high-angle-of-attack oscillations. Furthermore, the model is applied to a flight simulation of the F-16 with different control inputs to evaluate its generalization capability.

The paper is organized as follows. In Section 2, a brief introduction to the state-space method and the neural network approach in aerodynamic modeling is given, and the structure of the WFFM based on the state-space model and LSTM is discussed in detail. In Section 3.1, the WFFM is used to predict the pitching moment coefficient for high-angle-of-attack oscillation data obtained from wind tunnel experiments using a wire-driven parallel robot with eight wires (WDPR-8) and a traditional tail support, in order to verify the proposed aerodynamic model. The results are then compared with three other models. In Section 3.2, the WFFM is applied to flight simulation and tested with different control inputs to validate the robustness and generalization of the models. Finally, Section 4 presents conclusions.

State-Space Method

Aircraft exhibit unsteady characteristics at high angles of attack, with airflow separation being a primary cause of aerodynamic time delays. To address this issue, Goman et al. proposed a state-space modeling method by introducing internal state variables into traditional aerodynamic derivative models [26]. The internal state variable is defined as the nondimensional coordinate of the airflow separation point, formulated as x̄ = x/c. Here, x represents the distance between the position of the separation point and the leading edge of the airfoil, and c represents the chord length. The range of values of the airflow separation point x̄ is [0, 1]. Introducing the airflow separation point allows the state-space model to depend not only on the instantaneous state variables but also on the physical mechanisms of airflow separation and attachment. The aerodynamic force and moment coefficients can be expressed as C_i = C_is + C_id + C_iδ, where C_i represents the aerodynamic coefficients, C_is represents the static component of the aerodynamic coefficients, C_id represents the dynamic component of the aerodynamic coefficients, and C_iδ represents the effect of control surface deflection on the aerodynamic coefficients.
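As a concrete illustration of this kind of separation-point state-space model, the following is a minimal sketch in the spirit of the Goman–Khrabrov formulation referenced above. The time constants tau1 and tau2, the static separation curve x0_static, and the oscillation parameters are hypothetical placeholders, not values from the paper.

```python
import numpy as np

# Sketch of a separation-point state-space model: the separation point x relaxes
# toward its quasi-static location with time constant tau1, evaluated at an
# effective angle of attack delayed by tau2 * alpha_dot. All parameter values
# below are illustrative assumptions.

def x0_static(alpha):
    """Assumed static separation-point location vs. angle of attack (rad)."""
    return 0.5 * (1.0 - np.tanh((alpha - np.deg2rad(15.0)) / np.deg2rad(5.0)))

def simulate_separation_point(alpha_hist, dt, tau1=0.1, tau2=0.05):
    """Integrate tau1 * dx/dt + x = x0(alpha - tau2 * alpha_dot), forward Euler."""
    x = np.empty_like(alpha_hist)
    x[0] = x0_static(alpha_hist[0])
    alpha_dot = np.gradient(alpha_hist, dt)
    for k in range(1, len(alpha_hist)):
        x_target = x0_static(alpha_hist[k] - tau2 * alpha_dot[k])
        x[k] = x[k - 1] + dt * (x_target - x[k - 1]) / tau1
    return x

# Example: pitch oscillation alpha(t) = 10 deg + 5 deg * sin(2*pi*0.34*t)
t = np.arange(0.0, 10.0, 0.01)
alpha = np.deg2rad(10.0 + 5.0 * np.sin(2.0 * np.pi * 0.34 * t))
x_sep = simulate_separation_point(alpha, dt=0.01)  # values stay in [0, 1]
```

The hysteresis in x_sep over an oscillation cycle is what lets such a model capture aerodynamic time-delay effects that a purely static coefficient lookup cannot.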
In the state-space models, these coefficients are expanded in their Taylor series, using the first derivatives only and truncating higher-order terms, which may lead to insufficient accuracy. For example, considering the aerodynamic coefficient C_X along the body axis, its dynamic component C_Xd is typically approximated by the first five terms of its Taylor series expansion. All coefficients of the expansion terms in the dynamic aerodynamic coefficients are approximated using a quadratic polynomial model. For example, C_Xα can be expressed as C_Xα = k_Xα0 + k_Xα1 α + k_Xα2 α², where k_Xα0, k_Xα1, and k_Xα2 are unknown parameters within the model, which are determined using parameter identification techniques.

Neural Network Approach

Neural networks are powerful machine learning tools capable of learning complex nonlinear relationships from data. They have found widespread applications in fields such as image recognition, natural language processing, and time-series prediction. A neural network consists of multiple layers of neurons, including an input layer, one or more hidden layers, and an output layer. The neurons, connected by links, take in data, perform specific operations, and generate output through an activation function.

Aerodynamic loads can be regarded as a function of the instantaneous values of the aircraft's motion state variables [27]. In general, the aerodynamic force and moment coefficients C can be expressed as C = f(u), where u is a vector of flight states. Assuming the considered flight states include the angle of attack α, sideslip angle β, pitch rate q, altitude H, and Mach number Ma, then u = [α, β, q, H, Ma]^T. A typical neural network used to predict aerodynamic loads is shown in Figure 1. By training a neural network on an aerodynamic dataset, it is possible to fit the mapping between flight states and aerodynamic loads, thereby achieving aerodynamic force and moment coefficient prediction.
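A minimal sketch of this feedforward mapping C = f(u) in PyTorch follows; the layer sizes and the set of six output coefficients are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Feedforward network mapping instantaneous flight states to aerodynamic
# coefficients, as in Figure 1. Sizes are assumptions for illustration.

class AeroMLP(nn.Module):
    def __init__(self, n_states=5, n_coeffs=6, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_states, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_coeffs),  # e.g. [CX, CY, CZ, Cl, Cm, Cn]
        )

    def forward(self, u):
        # u: (batch, n_states) = [alpha, beta, q, H, Ma]
        return self.net(u)

model = AeroMLP()
u = torch.randn(32, 5)    # a batch of instantaneous flight states
coeffs = model(u)         # predicted force/moment coefficients
```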
The nonlinear and unsteady aerodynamics exhibit time lag effects. The unsteady aerodynamic forces and moments depend not only on the instantaneous states but also on their time histories. Consequently, the aerodynamic coefficients C can be further modeled as a function of the flight states over a continuous period: C_t = f(u_{t−n+1}, u_{t−n+2}, …, u_t), where u_{t−n+1}, u_{t−n+2}, …, u_t denote the flight states corresponding to the time steps from t − n + 1 to t.

A recurrent neural network (RNN) is a type of neural network which uses sequential or time-series data. In contrast to the feedforward neural network, an RNN has feedback connections that enable the network to remember previous inputs. Improved network structures such as Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRUs) have resolved the vanishing gradient problem of traditional RNNs. Therefore, by inputting a continuous sequence of flight states into an RNN, it is possible to extract temporal information and improve the accuracy of machine learning models in aerodynamic force and moment prediction. Figure 2 illustrates the process of using LSTM to predict aerodynamic loads based on the flight state history.

In summary, neural network models typically achieve high accuracy. However, the prediction process of neural networks is difficult to interpret, making their application in aerodynamic modeling challenging.
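As a concrete illustration of the sequence-to-coefficient mapping of Figure 2, the following sketch feeds a window of n consecutive flight states through an LSTM and predicts the coefficients at the final time step; the window length and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

# LSTM consuming a history of flight states and predicting the aerodynamic
# coefficients at time t, i.e. C_t = f(u_{t-n+1}, ..., u_t).

class AeroLSTM(nn.Module):
    def __init__(self, n_states=5, n_coeffs=6, hidden=100):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_states, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, n_coeffs)

    def forward(self, u_seq):            # u_seq: (batch, n, n_states)
        out, _ = self.lstm(u_seq)        # out: (batch, n, hidden)
        return self.head(out[:, -1, :])  # coefficients at the last time step

model = AeroLSTM()
u_seq = torch.randn(8, 20, 5)            # 8 windows of 20 time steps each
coeffs_t = model(u_seq)
```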
Weighted Feature Fusion Model

This paper introduces a weighted feature fusion model (WFFM) based on both the state-space model and LSTM. The design of the WFFM aims to overcome the difficulty of obtaining explicit physical mechanisms and high accuracy simultaneously.

State-space models possess explicit physical meanings and describe the physical characteristics of separated flows. However, their limited fitting capability on nonlinear problems leads to low prediction accuracy. The aerodynamic data obtained through this method are often referred to as low-fidelity data. In contrast, the aerodynamic data obtained through methods such as wind tunnel experiments are referred to as high-fidelity data. The key concept behind the WFFM is to introduce physical information from the low-fidelity model into the neural network model. By using high-fidelity data for training, it establishes a mapping from low-fidelity data to high-fidelity data. This method minimizes the impact of errors in the low-fidelity data on prediction accuracy. In this way, the WFFM maintains physical significance and reduces additional errors to improve the accuracy of predictions for aerodynamic forces and moments.

The structure of the WFFM is illustrated in Figure 3. The WFFM consists of four layers. The first layer is the state-space model layer, which takes the flight states u_t at time t as input and calculates the low-fidelity aerodynamic coefficients y_t^low using the state-space model: y_t^low = f_SS(u_t), where f_SS(·) is the state-space model described in Section 2.1.

The second layer is the feature standardization layer, which makes the distributions of each feature in the input data have zero means and unit variances. This step normalizes the range of the independent variables or features of the data, thus improving training convergence speed and prediction accuracy [28]. The standardization operation is defined as ū_t = (u_t − μ_U)/σ_U and ȳ_t^low = (y_t^low − μ_Y)/σ_Y, where U and Y represent the populations of flight state data and low-fidelity data, μ_U and μ_Y are the means of U and Y, respectively, σ_U and σ_Y are the standard deviations of U and Y, respectively, and ū_t and ȳ_t^low are the standardized data.

The third layer is the feature weighting layer. The process of weighting is used to assign different levels of importance to the various features in a dataset. Since the low-fidelity model output y_t^low exhibits trends similar to the high-fidelity data, but with different values, it should be assigned a reduced weight so that it plays a guiding role. The feature weighting layer is implemented using a single neuron and an activation function, where w and b are the weight vector and bias term for this neuron, adjusted automatically through backpropagation based on the error between the predicted output and the actual high-fidelity output [29]. The activation function sigmoid(x) = 1/(1 + e^{−x}) transforms any input from the range (−∞, ∞) to a value on the interval (0, 1). u_t^weighted and y_t^weighted represent the weighted data.

The fourth layer is the feature fusion layer, which consists of the LSTM model.
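The following is a minimal PyTorch sketch of the standardization and weighting layers just described. The exact weighting equation is not fully recoverable from this excerpt, so the gating form below (a single sigmoid neuron scaling the standardized low-fidelity feature) is an assumption consistent with the stated intent of giving the low-fidelity data a reduced, guiding role.

```python
import torch
import torch.nn as nn

# Sketch of WFFM layers 2 and 3. The gating form in FeatureWeighting is an
# assumption; the paper specifies only "a single neuron and an activation
# function" with sigmoid output in (0, 1).

class Standardize(nn.Module):
    def __init__(self, mean, std):
        super().__init__()
        self.register_buffer("mean", mean)  # per-feature means (mu_U or mu_Y)
        self.register_buffer("std", std)    # per-feature stds (sigma_U or sigma_Y)

    def forward(self, x):
        return (x - self.mean) / self.std

class FeatureWeighting(nn.Module):
    def __init__(self, n_features):
        super().__init__()
        self.neuron = nn.Linear(n_features, 1)  # weight vector w and bias b

    def forward(self, y_low_std):
        gate = torch.sigmoid(self.neuron(y_low_std))  # gate in (0, 1)
        return gate * y_low_std                       # weighted low-fidelity data

std_y = Standardize(torch.zeros(1), torch.ones(1))   # stats fitted on training data
weight_y = FeatureWeighting(n_features=1)
y_weighted = weight_y(std_y(torch.randn(32, 1)))
```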
In this layer, two mappings have been established. One is the mapping from flight states to aerodynamic loads, which is the same as in the black-box model. The other is the mapping from low-fidelity data to high-fidelity data, which includes additional physical information. The LSTM model, with its strong nonlinear fitting capabilities, is used to fuse the features: ŷ_t = f_LSTM(u_i^weighted, y_i^weighted; i = t − 2, t − 1, t) (11), where f_LSTM(·) represents the LSTM model, consisting of an LSTM sublayer with 100 hidden units, followed by two fully connected sublayers with 100 and 50 neurons, respectively, and a dropout layer. u_i^weighted and y_i^weighted compose the input vector for the LSTM model at time i (i = t − 2, t − 1, t), and ŷ_t represents the output of the WFFM.

Finally, the error between the model's output ŷ and the high-fidelity data y^high is calculated. Based on the chain rule, the backpropagation algorithm calculates the error gradient of the loss function with respect to each parameter of the network. The parameters are adjusted to minimize the difference between the actual output and the desired output. Mean squared error (MSE) is chosen as the loss function: MSE = (1/n) Σ_{i=1}^{n} (y_i^high − ŷ_i)², where n is the number of training samples.

Overall, the weighted feature fusion model predicts aerodynamic forces and moments guided by the state-space model. It leverages the state-space model's explanatory power for unsteady aerodynamic effects and the powerful nonlinear fitting capabilities of neural networks. In constructing the model, the low-fidelity data are used to indicate trends, while the high-fidelity data are used to correct these trends. This ensures both high-accuracy predictions and generalization of the model.

Experimental Data

In this section, the performance of the weighted feature fusion model in predicting unsteady aerodynamic loads is assessed and tested. In order to obtain the experimental data, we have proposed a new aircraft model suspension method, the Wire-Driven Parallel Robot with Eight Wires (WDPR-8) [30][31][32]. The pose of the aircraft model can be measured by extrospective sensors and dynamically controlled by adjusting the lengths of the cables. Force sensors are also used to monitor the cable tension in case of slackness. A prototype was developed to achieve arbitrary multi-degree-of-freedom (M-DOF) motion for an analog F-22 aircraft model (Lockheed Martin, Bethesda, MD, USA), as illustrated in Figure 4.
To measure aerodynamic forces and moments, a built-in six-component strain-gage balance is used in this experiment [31]. The output from the balance is voltage data, which requires processing to obtain aerodynamic forces and moments. Initially, the voltage data from the balance were subjected to a low-pass filter with a cutoff frequency set at five times the motion frequency. Subsequently, the voltage values collected in the no-wind condition were subtracted from those collected during the wind-on condition, in order to obtain the real incremental voltage caused by the aerodynamic forces. Then, the filtered voltage data were iteratively processed using the balance force signal calculation formula to derive the aerodynamic forces. Finally, the average values at corresponding points over ten cycles were calculated.

A comparative test in the wind tunnel was conducted to evaluate the proposed method, using both the WDPR-8 support and a traditional tail support [33]. As shown in Figure 5, the comparisons between the WDPR-8 and the tail support show good agreement in lift coefficients.

To obtain the longitudinal dynamic characteristics, experiments were conducted based on pitch oscillations, which can be described by the following equation: α(t) = α_0 + A_m sin(2πf t) (12), where α_0 is the initial angle of attack, A_m is the oscillation amplitude, and f is the oscillation frequency.
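The balance-signal pipeline described above (low-pass filtering at five times the motion frequency, wind-off subtraction, and averaging over ten cycles) can be sketched as follows. The sampling rate, filter order, and the placeholder standing in for the iterative balance-formula step are assumptions, not details from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Sketch of the balance voltage processing: low-pass filter at 5x the motion
# frequency, subtract the wind-off signal, then phase-average over ten cycles.

def process_balance_voltage(v_wind_on, v_wind_off, f_motion, fs, n_cycles=10):
    b, a = butter(4, 5.0 * f_motion, btype="low", fs=fs)  # cutoff = 5 x f_motion
    dv = filtfilt(b, a, v_wind_on) - filtfilt(b, a, v_wind_off)
    # Placeholder for the iterative balance force-calculation formula:
    forces = dv  # in practice, iterate the calibration-matrix equations here
    # Phase-average over n_cycles oscillation periods:
    n_per_cycle = int(round(fs / f_motion))
    forces = forces[: n_cycles * n_per_cycle].reshape(n_cycles, n_per_cycle)
    return forces.mean(axis=0)

fs = 1000.0                                  # assumed sampling rate, Hz
t = np.arange(0.0, 30.0, 1.0 / fs)           # 30 s covers ten 0.34 Hz cycles
v_on = np.sin(2 * np.pi * 0.34 * t) + 0.05 * np.random.randn(t.size)
v_off = 0.05 * np.random.randn(t.size)
avg_cycle = process_balance_voltage(v_on, v_off, f_motion=0.34, fs=fs)
```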
Pitch oscillation experiments are conducted on the WDPR-8. In the pitch oscillations, the reduced frequency κ is defined as κ = 2πf c_A / V_0, where the mean aerodynamic chord length of the aircraft model c_A is 0.2522 m and the freestream velocity in the test section V_0 is 20 m/s; the Reynolds number is approximately 3.414 × 10^5. With an oscillation amplitude of 5°, the reduced frequency κ is calculated to be 0.0269 and 0.0538, corresponding to oscillation frequencies of 0.34 Hz and 0.68 Hz, respectively. The experiments were conducted at initial angles of attack ranging from −10° to 30°, with increments of 10°. Figure 6 shows the results of single-degree-of-freedom pitch oscillation tests at various reduced frequencies.

Furthermore, high-angle-of-attack pull-up tests, frequency sweep tests, amplitude sweep tests, and multi-degree-of-freedom oscillation tests were also conducted. Figure 7 shows the pull-up of the angle of attack for the aircraft model with the WDPR-8 from 50° to 80°.

To train and validate the prediction performance of the proposed model at high angles of attack, additional training data from wind tunnel experiments performed by the Aviation Industry Corporation of China Aerodynamics Research Institute were also utilized [34]. The dataset includes results for the same aircraft model from both static experiments, with the angle of attack ranging from 0° to 80°, and dynamic wind tunnel experiments, which were based on large-amplitude pitch oscillations and can be described by Equation (12). Typical dynamic experiments used an oscillation amplitude of 40° at various oscillation frequencies, including 0.2, 0.4, 0.6, and 0.8 Hz.
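As a quick numerical check, the reduced-frequency values quoted above are reproduced by κ = 2πf c_A/V_0, the form consistent with the quoted numbers:

```python
import numpy as np

# Reproduce the reduced frequencies quoted in the text from
# kappa = 2*pi*f*c_A / V_0 with c_A = 0.2522 m and V_0 = 20 m/s.
c_A, V0 = 0.2522, 20.0
for f in (0.34, 0.68):
    kappa = 2.0 * np.pi * f * c_A / V0
    print(f"f = {f:.2f} Hz -> kappa = {kappa:.4f}")
# f = 0.34 Hz -> kappa = 0.0269
# f = 0.68 Hz -> kappa = 0.0539  (quoted as 0.0538 in the text)
```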
Data derived from the above experiments were divided into two subsets. Training sets were used to adjust the weights and parameters of the model. Test sets were used to evaluate the model's interpolation and extrapolation prediction performance. The performance of the model is evaluated using the mean squared error; a smaller MSE indicates better performance.

Model Training

In this experiment, the flight state vector u is chosen to consist of the angle of attack α, pitch rate q, oscillation frequency f, and oscillation amplitude A, denoted as u_t = [α_t, q_t, f_t, A_t]^T. The pitching moment coefficient C_m, as an example, is used as the model output, while other aerodynamic coefficients are not additionally displayed.

Figure 8 presents a comparison of the training curves for models with and without a feature standardization layer after 100 rounds of training. It is evident from Table 1 that data standardization improves both the training efficiency and the prediction accuracy of the model. As the training results (Figure 9) show, the outputs of the WFFM match the experimental data quite well, which exhibits the powerful nonlinear fitting capability of the WFFM.

Model Testing

The performance of the models in the interpolation test is shown in Figure 10, and the MSE of the results is shown in Table 2. In addition to the WFFM, the state-space (SS) model, the LSTM model, and the feature fusion model (FFM, without the feature weighting layer) were also employed for comparison. The results show that the pitch moment predictions from all models match the experimental data. The results presented in Table 2 show that the black-box model was better than the state-space model in the interpolation test. Furthermore, by combining the state-space model with the black-box model, the FFM and WFFM improve the prediction accuracy. It is also observed that the introduction of the feature weighting layer leads to more accurate results for the WFFM.

The models' prediction results and the MSE of the results in the extrapolation test are shown in Figure 11 and Table 3, respectively. It can be observed that the overall error for extrapolation is higher than for interpolation. The MSE of the SS model was slightly lower than that of the LSTM model, which is contrary to the results of the interpolation test. This is because a neural network cannot map the function in regions of the variable space where no training data are available. The FFM and WFFM obtain physical mechanisms by introducing a state-space model, which enables them to achieve higher-precision prediction results. Compared to the FFM, the WFFM exhibits an accuracy improvement of up to 50%, indicating that feature fusion effectively reduces additional errors.
In summary, the results demonstrated the outstanding performance of the WFFM in predicting unsteady aerodynamic loads at high angles of attack. Within the framework of combining the state-space model with a neural network, the FFM and the WFFM strengthen the extrapolation capabilities. The introduction of the feature weighting layer effectively reduces the additional error from the state-space model and improves the prediction accuracy. This method demonstrates its potential for practical application in aircraft design and control.

Flight Simulation Tests

Due to the lack of real flight data, simulated flight data are used for training and validating the proposed aerodynamic models. In this section, a non-linear F-16 flight dynamics model is used. This plant can simulate the response of the F-16 aircraft (Lockheed Martin, Bethesda, MD, USA) using the aerodynamic model and aerodynamic data described in a NASA report [35]. The structure of the flight simulation model is shown in Figure 12. Different control signals are input into the F-16 dynamic equations, which are coupled with various aerodynamic models. The resulting flight states are then compared to validate the accuracy of the aerodynamic models. The dynamic equations of this aircraft in the body-fixed reference axes comprise the standard force and moment equations,
where C_X, C_Y, and C_Z are the force coefficients on the X, Y, and Z axes, respectively, and C_l, C_m, and C_n are the rolling, pitching, and yawing moment coefficients, respectively. The force and moment coefficients, derived from experimental data obtained in a NASA Langley wind tunnel, are found by interpolating the data points for a given angle of attack, sideslip angle, and elevator deflection. The Euler angles were computed using quaternions to allow continuity of the attitude motions. Auxiliary outputs include the normal acceleration a_n and the lateral acceleration a_y.

To start the simulation process, it is essential to find steady-state flight conditions for the force and moment equilibriums. Since this paper focuses on longitudinal dynamic characteristics, the lateral-directional motion is considered to be de-coupled. We trim the F-16 for steady level flight with velocity V = 152 m/s and altitude H = 4572 m. After trimming, the angle of attack α, pitch angle θ, elevator deflection δ_e, and throttle setting δ_T are obtained. The trim state and control inputs are presented in Table 4.

To train and evaluate the aerodynamic models, various control commands are chosen to generate simulated data. For the performance comparison, the coefficient of determination R² is used as the evaluation criterion: R² = 1 − Σ_{i=1}^{n}(y_i − ŷ_i)² / Σ_{i=1}^{n}(y_i − ȳ)², where n is the total number of samples, y represents the actual values, ȳ is the mean of the actual values, and ŷ represents the predicted values. When evaluating the goodness-of-fit of predicted values against actual values, a higher R² value indicates better predictive performance.

Training Results for Sinusoidal Input

In the flight simulation, the flight state variables of the aerodynamic models are selected as the angle of attack α, pitch rate q, pitch angle θ, flight speed V, and elevator deflection δ_e, denoted as u_t = [α_t, q_t, θ_t, V_t, δ_e]^T, and the output is the vector of aerodynamic coefficients. To obtain flight states such as the angle of attack and pitch angle, the aerodynamic model was coupled with the flight dynamics model. By using various combinations of elevator inputs, we can obtain aerodynamic loads for different maneuvers.

For aerodynamic modeling, a sinusoidal control command on the elevators is designed to generate training data. The elevator input frequencies f for the training set were selected as 0.1, 0.2, 0.3, 0.4, and 0.5 Hz, and the amplitudes A were chosen as 1, 2, 3, 4, and 5 degrees. These different combinations of amplitudes and frequencies resulted in a total of 25 experiments. The simulation duration was set to 15 s. Figure 13 shows a sinusoidal control input with an amplitude of 3° and a frequency of 0.3 Hz. The WFFM's predictions, as shown in Figure 14, matched well with the F-16 model's response.
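The training-signal grid and the R² criterion described above are simple to reproduce; the sketch below generates the 25 sinusoidal elevator commands (5 frequencies × 5 amplitudes) over 15 s and includes an R² helper. The 50 Hz sample rate is an assumption.

```python
import numpy as np

# Sketch of the sinusoidal training inputs (25 frequency/amplitude combinations)
# and the coefficient of determination used for evaluation.

def r_squared(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

fs, duration = 50.0, 15.0                        # assumed sample rate; 15 s runs
t = np.arange(0.0, duration, 1.0 / fs)
training_inputs = {
    (A, f): A * np.sin(2.0 * np.pi * f * t)      # elevator deflection, deg
    for A in (1, 2, 3, 4, 5)                     # amplitudes, deg
    for f in (0.1, 0.2, 0.3, 0.4, 0.5)           # frequencies, Hz
}
print(len(training_inputs))                      # 25 experiments
```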
Testing Results for Sweep Input

To thoroughly assess the predictive capability of the aerodynamic models obtained in Section 3.2.2, two sets of sweep signals with variable frequency and variable amplitude are designed as control inputs. The frequency and the amplitude each vary over time, with a different amplitude law in the two sets of experiments. The maximum frequency f_m is set to 0.6 Hz, and the maximum amplitude A_m is set to 6°. The simulation duration is 10 s. Figure 15 displays the frequency and amplitude of the sweep control input variation over time, exhibiting a characteristic S-shaped curve. Figure 16 presents the sweep control input history.

Similarly, the aerodynamic model is incorporated into the flight dynamics models to observe the flight state variables. As shown in Table 5, the R² values for both the state-space model and the LSTM model are less than 0.9, indicating a relatively large error. As shown in Figures 17 and 18, the outputs of the LSTM model deviate significantly from the real flight states, which shows the limits of the black-box model in accurately predicting aerodynamics in flight simulation. The FFM shows minor differences in flight states compared to the real values, and its R² values are greater than 0.9. The WFFM achieves R² values exceeding 0.99 in both simulation tests. The comparisons between the WFFM and the F-16 model show good agreement in all states.
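The exact frequency and amplitude sweep laws were lost in extraction; the sketch below uses a smoothstep ramp as one plausible S-shaped variation from zero up to the stated maxima (f_m = 0.6 Hz, A_m = 6°) over the 10 s run, so the ramp shape itself is an assumption.

```python
import numpy as np

# Plausible sweep-input sketch: S-shaped (smoothstep) ramps for instantaneous
# frequency and amplitude, with the phase obtained by integrating the frequency.

def smoothstep(x):
    x = np.clip(x, 0.0, 1.0)
    return x * x * (3.0 - 2.0 * x)            # classic S-shaped ramp

fs, duration, f_m, A_m = 100.0, 10.0, 0.6, 6.0
t = np.arange(0.0, duration, 1.0 / fs)
f_t = f_m * smoothstep(t / duration)          # instantaneous frequency, Hz
A_t = A_m * smoothstep(t / duration)          # instantaneous amplitude, deg
phase = 2.0 * np.pi * np.cumsum(f_t) / fs     # integrate frequency for phase
sweep_input = A_t * np.sin(phase)             # elevator command, deg
```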
Testing Results for Doublet Input

To assess the generality of the models, a 4-deg doublet control input, as shown in Figure 19, is applied to the models established in Section 3.2.2. Since the models were trained under sinusoidal inputs, doublet inputs can further evaluate the adaptability and practicality of the established models.

From the simulation results shown in Figure 20 and Table 6, it can be observed that the FFM and WFFM exhibit higher precision compared to the SS and LSTM models. This suggests that embedding the physical model into the black-box model can effectively improve both the predictive accuracy and the robustness of the neural network model. However, the FFM still cannot perfectly fit the flight states of the F-16 model. Figure 20 shows that all the state curves from the WFFM match the F-16 model fairly well, and Table 6 shows that the WFFM maintains an R² value exceeding 0.99. It can be concluded that the results of the WFFM are not only more precise but also more general in comparison with the FFM.

In this section, the aerodynamic model's accuracy at a high angle of attack was also tested. By rapidly increasing the elevator deflection angle, the aircraft achieved a swift increase in both angle of attack and pitch angle within a short duration. During the maneuver, the aircraft was at a high angle of attack, resulting in significant changes in the airflow over the aircraft's surface, including flow separation, vortex formation, and complex vortex interactions. These effects lead to nonlinear changes in aerodynamic forces and moments, making them challenging to accurately predict with simple aerodynamic models.
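For reference, the 4-deg doublet input used in the test above can be sketched as a pair of opposite-sign pulses; the pulse timing below (1 s up, 1 s down, starting at t = 1 s) is an assumption, since only the 4-deg magnitude is stated in the text.

```python
import numpy as np

# Sketch of a 4-deg doublet elevator command like the one in Figure 19.
fs, duration = 100.0, 10.0
t = np.arange(0.0, duration, 1.0 / fs)
doublet = np.zeros_like(t)
doublet[(t >= 1.0) & (t < 2.0)] = +4.0   # deg, assumed pulse timing
doublet[(t >= 2.0) & (t < 3.0)] = -4.0   # deg
```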
In this test, the aerodynamic model obtained in Section 3.2.2 was further trained with elevator deflection increments of 5°, 8°, 10°, and 15°, and tested with a deflection increment of 13°. The test input, as shown in Figure 21, involved a step signal in which the elevator angle rapidly deflected from a trim angle of −2.24° to −15.24°. At this point, the pitch angle and angle of attack, as illustrated in Figure 22, rapidly increased to 65° and 43°, respectively. The R² value of the WFFM is 0.9912, which demonstrates that the WFFM can accurately predict the aerodynamics of the aircraft during high-angle-of-attack maneuvers.

Conclusions

In this paper, a novel aerodynamic model called the weighted feature fusion model was implemented. By comparing the results of the state-space model, LSTM model, FFM, and WFFM in high-angle-of-attack aerodynamic prediction and flight simulation tests, the main conclusions of this study are as follows:
(1) Compared to the black-box model, embedding the physics model with explicit physical meaning into the neural network improves both the interpolation and extrapolation capability of the model.
(2) Compared to the FFM, further limiting the error of the physical model by introducing a weighted coefficient layer improves the accuracy of aerodynamic prediction and simulation.
(3) In flight simulation, the flight states based on the WFFM's outputs are very close to the F-16 model's, indicating that it can replace existing aerodynamic models.
[Figure captions: Figure 1. A typical neural network used to predict aerodynamic loads. Figure 2. LSTM uses flight state history data to predict aerodynamic loads. Figure 3. Architecture of WFFM. Figure 6. Lift coefficients at various reduced frequencies. Figure 7. Pull-up maneuver of aircraft model with WDPR-8. Figure 8. Comparison of training progress with and without standardization. Figure 10. Comparison of prediction results of WFFM, FFM, SS, and LSTM for interpolation test. Figure 11. Comparison of prediction results of WFFM, FFM, SS, and LSTM for extrapolation test. Figure 12. Structure of the flight simulation. Figure 14. Response of WFFM and F-16 model to the sinusoidal input with A = 3° and f = 0.3 Hz: (a) angle of attack; (b) pitch angle; (c) pitch rate; (d) velocity. Figure 15. Frequency and amplitude of sweep control input variation over time: (a) frequency; (b) amplitude.]
[Figure captions: Figure 21. Step control input and elevator deflection with 13° increment. Figure 22. Response of WFFM to the high-angle-of-attack maneuver: (a) angle of attack; (b) pitch angle. Table captions: Table 1. The MSE of WFFM for training sets. Table 2. The MSE of WFFM for interpolation test. Table 3. The MSE of WFFM for extrapolation test. Table 5. The coefficient of determination for sweep input. Table 6. The coefficient of determination for doublet input.]
11,331.6
2024-04-25T00:00:00.000
[ "Engineering", "Computer Science" ]
Enhancing Lung Cancer Survival Prediction: 3D CNN Analysis of CT Images Using Novel GTV1-SliceNum Feature and PEN-BCE Loss Function

Lung cancer is a prevalent malignancy associated with a high mortality rate, with a 5-year relative survival rate of 23%. Traditional survival analysis methods, reliant on clinician judgment, may lack accuracy due to their subjective nature. Consequently, there is growing interest in leveraging AI-based systems for survival analysis using clinical data and medical imaging. The purpose of this study is to improve survival classification for lung cancer patients by utilizing a 3D-CNN architecture (ResNet-34) applied to CT images from the NSCLC-Radiomics dataset. Through comprehensive ablation studies, we evaluate the effectiveness of different features and methodologies in classification performance. Key contributions include the introduction of a novel feature (GTV1-SliceNum), the proposal of a novel loss function (PEN-BCE) accounting for false negatives and false positives, and the demonstration of their efficacy in classification. The experimental work yields results surpassing those of the existing literature, achieving a classification accuracy of 0.7434 and an ROC-AUC of 0.7768. The conclusions of this research indicate that the AI-driven approach significantly improves survival prediction for lung cancer patients, highlighting its potential for enhancing personalized treatment strategies and prognostic modeling.

Introduction

Lung cancer is very common worldwide and is one of the cancer types with the highest mortality rates. The symptoms of lung cancer usually appear in the later stages of the disease, and the cancer tends to spread (metastasize) to other organs and tissues, which makes the disease fatal. Estimated new cases of lung cancer are second only to prostate cancer in men and breast cancer in women, and estimated mortality rates from lung cancer are the highest in both men and women, at 21% [1].

With survival analysis, which allows us to estimate the time until an event occurs [2], the time of disease recurrence or the death of the patient can be predicted. This prediction is crucial for shaping the treatment processes of cancer patients, as it helps clinicians make informed decisions about treatment plans and monitoring strategies. For example, patients with a poor prognosis can be monitored more closely and benefit from more aggressive treatment and advanced care planning [3], while standard treatment protocols with regular monitoring can be applied to patients with a better prognosis. Survival analysis also allows us to understand the course and consequences of the disease by evaluating the prognosis of different types of cancer and examining the survival rates of cancer patients within a certain period from diagnosis, along with the factors affecting these rates.

Although the survival rate of lung cancer varies depending on the stage at diagnosis, the type of cancer, and the general health status of the patient, Siegel et al. reported the 5-year relative survival rate for lung and bronchus cancer between 2012 and 2018 as 23% [1]. Among recent deep learning approaches is a network called Lite-ProSENet, which takes clinical data and CT scans as input. The textural tower is responsible for modeling clinical data, while the visual tower is responsible for extracting features from CT scans. Comprehensive experiments were carried out in the study, and they showed that Lite-ProSENet outperformed the other studies considering the c-index metric [14].
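The abstract above introduces a PEN-BCE loss that accounts for false negatives and false positives. Its exact formulation is not given in this excerpt; the cost-weighted binary cross-entropy below is only a generic illustration of the idea, with hypothetical penalty weights w_fn and w_fp, and should not be read as the paper's actual loss.

```python
import torch

# Generic cost-weighted BCE sketch: w_fn scales the penalty on missed positives
# (false negatives), w_fp the penalty on false alarms (false positives). The
# weights and the form itself are illustrative assumptions.

def penalized_bce(p, y, w_fn=2.0, w_fp=1.0, eps=1e-7):
    """p: predicted probability in (0, 1); y: true label in {0, 1}."""
    p = p.clamp(eps, 1.0 - eps)
    loss = -(w_fn * y * torch.log(p)
             + w_fp * (1.0 - y) * torch.log(1.0 - p))
    return loss.mean()

p = torch.tensor([0.9, 0.2, 0.6])
y = torch.tensor([1.0, 0.0, 1.0])
print(penalized_bce(p, y))
```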
In the literature, survival analysis for lung cancer is widely treated as a survival classification problem, which focuses on classifying whether an event (death) will occur for an individual within a certain time interval. The classification problem is generally set up with 2 classes (1-year, 2-year, or 5-year cut-offs) or 3 classes (Class 1: ≤6 months, Class 2: 6-24 months, Class 3: ≥24 months; or Class 1: ≤36 months, Class 2: 36-60 months, Class 3: ≥60 months), using the determined threshold as the reference. Using Machine Learning (ML) or Convolutional Neural Network (CNN) models trained with various methods, the success of the models is evaluated with many classification metrics, especially accuracy (ACC) and area under the curve (AUC). Doppalapudi et al. addressed survival analysis on the lung section of the SEER dataset as both a classification and a regression problem. In their study, Artificial Neural Network (ANN), Recurrent Neural Network (RNN), Convolutional Neural Network (CNN), Random Forest (RF), Support Vector Machine (SVM), and Naïve Bayes models were compared with different metrics on the 3-class survival classification problem, and the authors emphasized that the ANN-based model was more successful than the other models, with the best accuracy result [15]. Lai et al. developed a multimodal deep neural network combining gene expression profiles and clinical data to accurately predict the 5-year overall survival of Non-Small Cell Lung Cancer (NSCLC) patients. In their study, survival status was estimated with 15 biomarkers combined with clinical data, and the results were compared with other well-known classifiers (K-Nearest Neighbors (KNN), RF, SVM) [16]. Tang et al. introduced a new capsule network called CapSurv, with a new loss function called survival loss, to perform survival analysis on whole-slide pathological images. In their study, semantic-level features extracted by VGG16 are used to train CapSurv to distinguish discriminative patches from whole-slide pathological images. The method was tested as a 1-year survival classification problem on two different datasets and showed that the proposed CapSurv model could improve prediction performance [17]. Paul et al. utilized transfer learning to extract deep features from CT images of lung cancer patients and then trained classifiers to predict short- and long-term survivors [18]. Han et al. proposed a new multi-branch spatiotemporal residual network (MS-ResNet) for disease-specific survival prediction by integrating CT images and clinical data. This model extracts deep features from CT images with an improved residual network; a feature selection algorithm then selects the most relevant subset of features from the clinical data; finally, the features are combined to leverage the two data types. Experiments showed that it outperforms other methods in the literature on the short-, medium-, and long-term survival classification problems [19]. Wang et al. proposed an unsupervised deep learning method (a residual convolutional autoencoder) to take advantage of unlabeled data in survival analysis and observed that deep-learning features gave better results than radiomic features in 1-year classification [20]. Parmar et al.
compared fourteen feature selection and twelve classification methods on their performance in predicting the overall survival of lung cancer patients, utilizing the Lung1 dataset that is also employed here. In their study, a total of 440 radiomic features were extracted from the patients' pre-treatment CT images, and it was demonstrated that the feature selection method based on the Wilcoxon test and the Random Forest classifier had the highest prognostic performance and stability [21]. To show that the tissue surrounding the tumor is also clinically important, Vial et al. estimated the 2-year survival classification from an annular region by extracting tissue features from the outer part of the tumor; radiomic features obtained from regions located outside but close to the tumor were shown to have prognostic value as well [22]. Braghetto et al. handled the survival analysis of lung cancer patients as a 2-year cut-off classification problem and compared the performance of radiomics- and deep-learning-based methods in survival prediction. The study included a CNN module providing direct feature extraction from CT images, a radiomic features module, and a module in which the obtained features are subjected to feature selection and dimensionality reduction. The deep-learning-based applications gave worse results than the radiomic ones due to the lack of data, inaccurate reconstruction with the Convolutional Auto-Encoder (CAE), and the poor synthetic data produced by the Generative Adversarial Network (GAN) [23,24].

In Table 1, we provide a comprehensive overview of the literature addressing the survival classification problem for lung cancer. Each study is documented with key details, including the reference, cancer type, dataset used, model employed (highlighting the best-performing model where applicable), classification type, and the corresponding performance metrics. This compilation offers insight into the diverse methodologies and performance outcomes achieved in survival classification research. In this study, our main purpose is to enhance lung cancer survival prediction by using modern Artificial Intelligence (AI)-based methodologies. Toward this aim, we conducted analyses for survival classification from CT images of lung cancer patients using a publicly available lung cancer database and performed ablation studies to assess the classification success. Along with this, we provide a comprehensive literature analysis of lung cancer survival analysis. The primary contributions of this study can be summarized as follows: • Introduction of a novel feature, termed GTV1-SliceNum, which considers the number of Gross Tumor Volume-1 (GTV-1) tumor-containing slices in patients' CT scans; its integration into the clinical data and its impact on classification success are demonstrated. • Proposal of a novel loss function (PEN-BCE) that accounts for false negatives and false positives, and demonstration of its effect on classification success. The remainder of this study is structured as follows: in Section 2, the proposed method is provided; experimental studies and ablation studies are presented in Section 3; findings are analyzed in Section 4, and conclusions are supplied in Section 5.
Materials and Methods Survival analysis is a statistical method used to analyze time-to-event data; it is often applied in medical research to study the time until an event of interest occurs, such as death or disease recurrence. In this study, we employed survival analysis to classify whether individuals would die within a certain time. Patients' lung CT images were evaluated using a three-dimensional convolutional neural network model to predict the survival time interval. In this context, we used a well-performing 3D CNN architecture, which yields outstanding performance on visual object detection in computer vision applications, modified appropriately for lifetime estimation. The core modifications can be summarized as two key alterations. The first is the integration of a novel loss function into the model: traditional computer vision settings typically benefit from ample and balanced sample sizes per class during training, but in the present study, despite leveraging the most extensive labeled dataset available in the domain, namely the Lung1 dataset, the class sample distributions remain imbalanced, so a custom loss function was devised to address this challenge. The second is the use of an additional feature in the clinical observations. Together, these two alterations result in improved performance in the prediction of life expectancy. The general block diagram of the method applied in this manuscript is given in Figure 1. Further elaboration on the methodology will follow the introduction of the data representation and descriptions in the next subsection.

Data Representation and Descriptions Lung cancer is common worldwide, and the most common type is Non-Small Cell Lung Cancer (NSCLC), accounting for 80-85% of cases [25]. Lung cancer can be identified through various methods, including imaging tests, biopsy, sputum cytology, blood tests, and molecular testing. Since our study focuses on predicting survival time from CT images of lung cancer patients, CT imaging data was the primary form of data used. Among the openly shared datasets available for this purpose, the NSCLC-Radiomics (Lung1) dataset emerges as the most suitable for the problem addressed here. NSCLC-Radiomics, also known as the Lung1 dataset, comprises CT imaging data from 422 NSCLC patients, publicly available at The Cancer Imaging Archive (TCIA) [26][27][28]. The dataset includes a CSV file containing the clinical data and, for each patient, three folders containing the CT slices, the segmentation images, and information about the segmentation images. Details of the clinical and CT image data are given in the following sections.

Tabular Data (Clinical Information) The clinical data are in a file named NSCLC Radiomics Lung1.clinical-version3-Oct2019.csv. This file includes the age of the patients, the T, N, and M stages of the cancer, the overall stage, histology, gender, survival time, and survival status. A brief description of the clinical data is given in Table 2. The survival times of the patients vary between 10 and 4454 days. In addition, 373 of the 422 patients are uncensored observations, and 49 are censored.
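To make the tabular description concrete, the following is a minimal sketch of loading and inspecting the clinical file with pandas. The column names "Survival.time" and "deadstatus.event" are assumptions inferred from the description above and should be checked against the actual CSV header.

```python
# Minimal sketch (assumed column names) for inspecting the Lung1 clinical file.
import pandas as pd

clinical = pd.read_csv("NSCLC Radiomics Lung1.clinical-version3-Oct2019.csv")

# Split uncensored follow-ups (event observed) from right-censored ones.
# "deadstatus.event": 1 = death observed, 0 = censored (assumed encoding).
uncensored = clinical[clinical["deadstatus.event"] == 1]
censored = clinical[clinical["deadstatus.event"] == 0]
print(len(uncensored), "uncensored /", len(censored), "censored")

# Survival times should span roughly 10 to 4454 days per the dataset description.
print(clinical["Survival.time"].min(), clinical["Survival.time"].max())
```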
Image Data (CT and RTSTRUCT Information) In the dataset, the CT images are kept in folders identified by the PatientID from the CSV file. Within each patient folder there are three subfolders: the CT slices, the segmentation images, and the Radiotherapy Structure Set (RTSTRUCT) file [29], which contains region-of-interest (ROI) information and is used to transfer patient structures and related data between devices in the radiotherapy department.

Computed tomography images are 3D images composed of many consecutively acquired 2D images of a patient. They can be taken in three different cross-sectional planes: axial, sagittal, and coronal. The tomography images of the lung patients used in this study were obtained in the axial plane. An illustration of how the coronal, sagittal, and axial planes are obtained from a patient is given in Figure 2.

The CT image data in the folders is stored in DICOM (Digital Imaging and Communications in Medicine) format, a standard protocol for the management and transmission of medical images and related data [30]. DICOM data contains metadata under different names; using this metadata, images in DICOM format can be preprocessed and details of the images can be obtained. Within the scope of this study, a web application developed by Innolitcs to help software developers, researchers, and radiologists navigate the DICOM standard was used to look up the meaning of the tag information in the DICOM data [31]. The CT images of patients in the Lung1 dataset contain different numbers of slices (75-297). The DICOM metadata described in Table 3 were used to analyze the available slices in this study; this information is crucial for the correct preprocessing of the images.

Table 3. DICOM metadata used in this study.
SOP Instance UID: represents the identification for each slice.
Pixel Array: represents the 512 × 512 pixel matrix of the image data.
Slice Position: represents the z-coordinates of the slices along the axial axis.
Rescale Intercept: intercept parameter used to transform the pixel matrix.
Rescale Slope: slope parameter used to transform the pixel matrix.
Slice Thickness: represents the distance between two consecutive slices in mm.
Pixel Spacing: represents the distance between pixels in the Pixel Array component.
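As a sketch of how the Table 3 metadata is typically used, the snippet below loads one patient's CT series with pydicom, orders the slices by their axial (z) position, and applies the slope/intercept transform to the stored pixel values. The folder layout in the call is hypothetical.

```python
# A sketch, assuming pydicom, of loading and ordering one patient's CT series.
import glob
import numpy as np
import pydicom

def load_ct_volume(folder):
    slices = [pydicom.dcmread(f) for f in glob.glob(f"{folder}/*.dcm")]
    # Slice Position: z-coordinate along the axial axis (third component).
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array.astype(np.float32) for s in slices])
    # Rescale Slope / Rescale Intercept transform the raw pixel matrix.
    slope = float(slices[0].RescaleSlope)
    intercept = float(slices[0].RescaleIntercept)
    return volume * slope + intercept, slices

volume, slices = load_ct_volume("LUNG1-001/CT")  # hypothetical folder layout
print(volume.shape)  # e.g. (134, 512, 512) for a 134-slice study
```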
There are segmented images with different labels (GTV, Spinal-Cord, Lung-Left, Lung-Right, Esophagus) for each patient in the dataset. Among these labels, GTV contains the location information for the gross tumor volume, Lung-Left for the left lung, Lung-Right for the right lung, Spinal-Cord for the spinal cord region, and Esophagus for the esophagus.

Each patient's file may contain different types and numbers of segmented images; neither is standard. For example, the number of CT images (number of slices) taken from patient LUNG1-001 is 134, with a total of 358 segmented images (139 labeled Left-Lung, 134 Right-Lung, 84 Spinal-Cord, and 21 GTV-1), while patient LUNG1-243 has 94 slices and a total of 327 segmented images (113 labeled Left-Lung, 101 Right-Lung, 94 Spinal-Cord, 6 GTV-2, and 13 GTV-1). Additionally, different types and/or numbers of segmented image data may be present in different slices of the same patient. For example, the different types of segmented data in CT slice number 28 of patient LUNG1-243 are given in Figure 3. As seen in Figure 3, segmented data with all of the different labels may not be present in a given slice of a patient. This is because, in the axial CT image, each slice represents a scan of a specific region of the lung. In the example, blue indicates the Lung-Left, green the Lung-Right, red the GTV-1 region, and black the Spinal-Cord. There is no GTV-2 image in the 28th CT slice of LUNG1-243.

It is not possible to directly access the available segmented images. To access them, the Radiotherapy Structure Set (RTSTRUCT) document in each patient's folder is used. Since the segmented image information of patient LUNG1-128 was not available, data from 421 patients were used in this study. The DICOM-format file located in the RTSTRUCT folder contains many metadata fields; those used within the scope of the study are described in Table 4.
Table 4. RTSTRUCT metadata used in this study.
Referenced SOP Instance UID: represents the identity of the slice to which the segmentation is applied.
Structure Set ROI Sequence: contains the ROI information for the current structure set.
ROI Contour Sequence: refers to the boundary sequences that define the ROI.
Contour Sequence: refers to boundary sequences.
Contour Image Sequence: contains the arrays of images containing the boundary.
ROI Name: the array containing the names of the segmentation sets for the slices.
Contour Data: the values that hold the boundary data of the segmented regions.

In the previous literature addressing problems similar to ours, only tumor regions labeled GTV-1 were utilized. The dataset contains a variable number of images with GTV-1 labeled tumors per patient, ranging from 2 to 97. Figure 4 shows CT slices from a patient (LUNG1-243) with the GTV-1 regions marked; there are 94 slices and 13 GTV-1 labeled tumor regions (Slice 21 through Slice 33) for this patient. For all patients in the dataset, the Slice Thickness is given as 3.0 mm and the Pixel Spacing as 0.977 mm. Upon examination of Figure 4, it is evident that GTV-1 information is not present in every slice. As a result of a detailed analysis of the Lung1 dataset, Braghetto stated that 5 patients were incorrectly segmented due to incorrect labeling of tumor regions, 62 patients due to interpolation of segmentation images in consecutive slices, and 3 patients due to the presence of more than one tumor in one image [23]. A representation of each error type is given in Figure 5.

Preprocessing Image Data In order to appropriately utilize the acquired data in the models, it is necessary to perform preprocessing steps first. These steps are presented in the following two subsections as processing of DICOM images and rectification of errors in the database.

Processing of DICOM Images and RTSTRUCT Data Within the scope of this study, the initial task involves locating the slices containing GTV-1 regions. This process follows the steps outlined in Braghetto's study [23], listed below.
1. By reading the RTSTRUCT file of each patient, the index of the segmentation image with the GTV-1 label is found (the "ROIName" property of each label ID in the "StructureSetROISequence" is inspected, and the index of the region with the "GTV-1" label is kept).
2. Using the index information, the number of slices containing cancerous cells (labeled GTV-1) is found (the "ContourSequence" information belonging to the GTV-1 index is used within the "ROIContourSequence").
3. The ID information of the segmentation image with the GTV-1 label is obtained (the "ReferencedSOPInstanceUID" of the 0th element of the "ContourImageSequence" is used for the segmentation image containing each cancerous area).
4. The ID information of the relevant patient's CT slices is obtained (the "SOPInstanceUID" information is used).
5. The ID information of the patient's CT slices and the ID information of the slices labeled GTV-1 have common elements; in this way, the slices containing GTV-1 for the patient are identified.

The region of interest (RoI) is then obtained by using the coordinate information of the tumor area in the slices labeled GTV-1. The following steps were taken to perform this task.
1. The borders in each segmentation image with the GTV-1 label were found (with the help of the "ContourData" information). "ContourData" expresses the information of the tumor regions in mm along the x, y, and z axes (for example, '−56.15', '−230.73', '−491.5').
2. The border of the tumor area obtained in mm is converted into pixels: a. Each image in the CT slices labeled GTV-1 has a reference point on the x, y, and z axes (obtained from "ImagePositionPatient"). b. The pixel spacing of each image on the x and y axes is found (from "PixelSpacing"). c. The coordinate transformation is performed using Equation (1):

x_pixel = (x_mm − x_0) / x_s and y_pixel = (y_mm − y_0) / y_s, (1)

where x_mm and y_mm represent the coordinate information in millimeters, x_s and y_s the pixel spacing of the image, x_0 and y_0 the position of the image reference frame, and x_pixel and y_pixel the position in pixels.

Then, the DICOM slices were converted to JPEG format for cropping the tumor regions and performing the other preprocessing steps. During the conversion, the values in the DICOM images were normalized into the 0-255 range using Equation (2), in which the dcm and jpeg expressions refer to the DICOM and JPEG pixel values, respectively.

Handling Incorrectly Segmented Images in the Dataset The improper images described in the previous section were re-examined, and solutions were developed for these incorrectly segmented images. For the error in Figure 5a, no solution was developed because it was not possible for the clinician to verify the error type. For the error type in Figure 5b, sudden changes in the number of pixels covering each GTV-1 region were used to detect the error, and the GTV in the affected slice was reconstructed by interpolating between the slice before and the slice after the error; Figure 6 shows the update in the slice where the error was found. For the error type in Figure 5c, it was determined whether the tumor was on the right or the left, and only one region was focused on.
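The sketch below illustrates the two conversions above. The mm-to-pixel mapping follows Equation (1); since the formula of Equation (2) is not reproduced in the text, a linear min-max rescaling into 0-255 is assumed, and the origin values in the example call are hypothetical.

```python
# A sketch of the coordinate transform (Equation (1)) and an assumed
# min-max normalization standing in for Equation (2).
import numpy as np

def mm_to_pixel(x_mm, y_mm, origin, spacing):
    # origin = (x0, y0) from ImagePositionPatient; spacing = (xs, ys) from PixelSpacing
    x_pixel = (x_mm - origin[0]) / spacing[0]
    y_pixel = (y_mm - origin[1]) / spacing[1]
    return int(round(x_pixel)), int(round(y_pixel))

def dcm_to_jpeg(dcm_pixels):
    # Assumed normalization: rescale the DICOM values linearly into 0-255.
    dcm_pixels = dcm_pixels.astype(np.float32)
    jpeg = 255.0 * (dcm_pixels - dcm_pixels.min()) / (dcm_pixels.max() - dcm_pixels.min())
    return jpeg.astype(np.uint8)

# Example with the ContourData sample above; the origin values are hypothetical.
col, row = mm_to_pixel(-56.15, -230.73, origin=(-249.5, -460.5), spacing=(0.977, 0.977))
```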
Ready-to-Use Input Images for the Model While performing the tests, the censored observations are discarded, and the input images of the remaining patients are converted to gray level, resized to 240 × 240, and stacked into 5 slices (240 × 240 × 5). When assembling the 5-slice images, the slice with the largest tumor area among the slices containing tumor regions is selected. To perform this, first, the contours of the tumor regions are determined in the CT slices of each patient containing GTV-1. Using the extreme points of the tumor perimeter, the area is found by drawing the minimum rectangle surrounding the tumor region. To preserve spatial information, the largest tumor slice is taken together with the two adjacent tumor-containing slices before it and the two after it. If the number of slices containing the tumor region is greater than or equal to 5, that slice and its four neighbors are kept; if it is less than 5, slices are oversampled (copied) until the number of tumor-containing slices is five. Finally, all input images are normalized between 0 and 1 so that the neural network gives more successful results. Figure 7 shows an example input image sent to the model.
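A minimal sketch of this slice-selection procedure is given below; it assumes the slices have already been converted to gray level and resized to 240 × 240, and that tumor areas per slice have been computed from the contours.

```python
# A sketch of assembling the 240 x 240 x 5 input: center on the slice with the
# largest GTV-1 area, take two neighbors on each side, oversample if needed.
import numpy as np

def build_input(tumor_slices, areas, depth=5):
    # tumor_slices: list of 240x240 arrays containing GTV-1; areas: tumor areas
    center = int(np.argmax(areas))
    half = depth // 2
    if len(tumor_slices) >= depth:
        start = min(max(center - half, 0), len(tumor_slices) - depth)
        picked = tumor_slices[start:start + depth]
    else:
        picked = list(tumor_slices)
        while len(picked) < depth:                      # oversample by copying
            picked.append(picked[len(picked) % len(tumor_slices)])
    stack = np.stack(picked, axis=-1).astype(np.float32)
    return stack / stack.max()                          # normalize to 0-1
```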
3D ResNet-34 Architecture In this study, 2-year survival classification was performed using only CT images with the 3D ResNet-34 network. ResNet architectures are deep neural network architectures that add extra shortcut connections to the model and vary between 18 and 152 layers, in order to eliminate the vanishing (zero) or exploding (large-value) gradient problems caused by the increasing number of layers in deep convolutional neural networks [32]. In the tests performed, the 5-slice 3D input images were fed to the 3D version of the ResNet-34 model.

A New Feature for Clinical Data: GTV1-SliceNum Tumor thickness is the measurement, in millimeters, of the perpendicular distance between the highest point of the tumor surface and the deepest point of the infiltrative front of the tumor [33]. There are many studies in the literature that reveal a significant relationship between tumor thickness and overall survival. In [34], it was noted that the median survival time was 24.2 months for a lung pleural thickness of less than or equal to 5.1 mm, and 17.7 months for a thickness exceeding this value. Hsu et al. divided NSCLC patients into three groups, taking into account operation notes (ONs) and pathology reports (PRs), and performed a 5-year survival analysis. According to the ONs and PRs, the survival results were 70.1% for Group 1 patients with tumors 3 cm or smaller (ON and PR); 49.1% for Group 2 patients with tumors larger than 3 cm (ON and PR); and 51.1% for Group 3 patients with tumors larger than 3 cm (ON) but 3 cm or smaller (PR) [35]. Gonzalez-Moles et al. showed that tumor thickness in tongue cancer has the greatest impact on survival: patients with a tumor thickness of less than or equal to 3 mm had a 5-year survival of 85.7%, versus 58.3% for a tumor thickness between 4 and 7 mm and 57% for >7 mm [36]. Another study emphasized that tumor thickness is significantly associated with survival in Merkel Cell Carcinoma (MCC): the 5-year disease-free survival was 18% for tumors >10 mm thick and 69% for tumors ≤10 mm thick, while the disease-specific 5-year survival was 74% for tumors >10 mm thick and 97% for tumors ≤10 mm thick [37].

The Slice Thickness of the CT image slices in the dataset is specified as 3.0 mm. Each patient has a variable number of slices (75-297) as well as a variable number of GTV-1 labeled slices (2-97). Therefore, the number of GTV-1 labeled slices may constitute a meaningful feature for survival classification.
For example, while one patient has only 2 slices with the GTV-1 label, another patient has 21 slices with the GTV-1 label, which, in a sense, indicates the tumor thickness. For each patient in the dataset, the region with the GTV-1 label in each slice is obtained through the RTSTRUCT tag called ROIName, and the total number of slices with the GTV-1 label for each patient is added to the clinical data as a new feature.

A feature importance score is a value that measures the contribution of each feature (or variable) in a machine learning model to its predictive performance. The calculated scores provide detailed information about the dataset and reveal which features are more dominant in the problem at hand. In this way, features with high scores can be selected, while features with low scores can be eliminated to simplify the model. Statistical correlation scores, coefficients calculated from models, and many other techniques are used to compute feature importance scores. The importance score of the added GTV1-SliceNum feature was tested with a Decision Tree and a Random Forest, two well-known machine learning methods, and the results are shown in Figure 8.
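The following is a sketch, assuming scikit-learn, of the importance test described above. The clinical column names are assumptions modeled on the Lung1 clinical file; categorical columns are label-encoded for simplicity, and missing values are assumed to have been handled beforehand.

```python
# A sketch of scoring GTV1-SliceNum against the other clinical features.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

features = ["age", "clinical.T.Stage", "clinical.N.Stage", "clinical.M.Stage",
            "Histology", "gender", "GTV1-SliceNum"]          # assumed names
X = clinical[features].apply(
    lambda c: pd.factorize(c)[0] if c.dtype == object else c)
y = (clinical["Survival.time"] > 730).astype(int)            # 2-year cut-off label

for model in (DecisionTreeClassifier(random_state=0),
              RandomForestClassifier(n_estimators=100, random_state=0)):
    model.fit(X, y)
    ranked = sorted(zip(features, model.feature_importances_), key=lambda t: -t[1])
    print(type(model).__name__, ranked)
```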
As depicted in Figure 8, the proposed GTV1-SliceNum feature has demonstrated its significance by ranking as the second most influential feature in survival classification. The proposed GTV1-SliceNum feature can be considered a special interpretation of the popular GTV concept, which is widely used in oncology as a prognostic factor. The GTV measure was highlighted as significant in a pivotal study, presented in [38], which involved stage III NSCLC patients and demonstrated its critical role in survival prediction and treatment planning.
A New Loss Function: Penalized Binary Cross Entropy (PEN-BCE) During the training of neural networks, the loss function is very important for learning model parameters and producing robust results. Loss functions are handled differently for classification and regression problems. A cross-entropy loss function, which measures the difference between the real class labels and the probabilities predicted by the model, is often preferred for classification.

The performance of the model's predictions between two classes is measured with the binary cross-entropy (BCE) loss function, which is specialized for binary classification problems. However, this loss function does not directly account for false positives (FPs) and false negatives (FNs), which provide critical information about how the model performs in real-world scenarios. To better adapt to real-world scenarios and classify imbalanced datasets, different loss functions can be used, such as weighted cross-entropy loss, focal loss (FL) [39], asymmetric loss (ASL) [40], and real-world-weight cross-entropy loss (RWWCE) [41].

The loss functions mentioned above do not fully address the FN and FP cases. The proposed Penalized Binary Cross-Entropy loss (PEN-BCE) provides a loss function that is more suitable for real-world scenarios by adding the deviations in the produced output probabilities as a penalty term for both FN and FP cases. To understand the PEN-BCE loss function, the binary cross-entropy loss given in Equation (3) should first be examined. In this loss function, N refers to the total number of training examples, y_i to the ground-truth label of the i-th example, and p_i to its predicted classification probability. The weighted BCE loss, built on Equation (3), includes an additional weight parameter w that emphasizes the importance of positive labels, as in Equation (4).

Unlike the weighted BCE loss, the focal loss was reshaped to down-weight easily classified examples in problems arising from imbalanced datasets, thus enabling training to focus on difficult examples [39]. To achieve this, a modulating factor (1 − p_t,i)^γ with focusing parameter γ (γ ≥ 0) was added to the cross-entropy loss, as given in Equation (5). Here, p_t,i depends on the label of the relevant training example (p_i if the label y_i is 1, otherwise 1 − p_i).

Ridnik et al., in their study, proposed a loss function that dynamically down-weights easy negative examples and hard-thresholds them [40]. The ASL loss function, detailed in Equation (6), combines the mechanisms of asymmetric focusing and probability shifting. The γ+ and γ− parameters in Equation (6) are the focusing parameters that adjust the focusing levels of positive and negative samples. Asymmetric focusing reduces the contribution of negative samples to the loss when their probabilities are low, and a probability-shift parameter (p_m,i = max(p_i − margin, 0)) adds a mechanism that hard-thresholds easy negative samples, completely removing negative samples from the loss when their probabilities are very low. The margin value is greater than 0 and is integrated only into the negative term of the equation to obtain the asymmetric probability-shifting focal loss.
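As a numerical sketch of the loss family reviewed above (Equations (3)-(5)), the snippet below computes binary cross-entropy, its weighted variant, and the focal loss with its modulating factor (1 − p_t)^γ for a pair of example predictions.

```python
# A numerical sketch of BCE, weighted BCE, and focal loss (per-sample values).
import numpy as np

def bce(y, p):
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def weighted_bce(y, p, w):
    # w emphasizes the importance of positive labels, as in Equation (4).
    return -(w * y * np.log(p) + (1 - y) * np.log(1 - p))

def focal(y, p, gamma=2.0):
    pt = np.where(y == 1, p, 1 - p)          # p_t per Equation (5)
    return -((1 - pt) ** gamma) * np.log(pt)

y, p = np.array([1.0, 0.0]), np.array([0.3, 0.3])
print(bce(y, p), weighted_bce(y, p, w=2.0), focal(y, p))
```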
Ho and Wookey define the RWWCE loss function, with separate weights for the cost of missing positive and negative samples, as given in Equation (7) [41]. In Equation (7), w_mcfn expresses the marginal cost of a false negative relative to a true positive, and w_mcfp the marginal cost of a false positive relative to a true negative.

The loss functions given above are all derived from BCE for real-world scenarios. However, they do not directly address or penalize the possibility of incorrect predictions. The proposed PEN-BCE loss function therefore builds on the loss functions in the literature to both penalize misclassifications caused by FPs and FNs and emphasize the effect of incorrectly estimated probabilities. To achieve this, PEN-BCE adds a penalty term to the BCE loss, as seen in Equation (8). In Equation (8), α and β represent the FN and FP weights, respectively, and the parameters p_FN and p_FP refer to the FN and FP probability threshold values. In addition to the standard binary cross-entropy term, the function includes extra terms for the FN and FP cases; thanks to these additional terms, the model can impose larger penalties on FNs and FPs. For y_i = 1, the function simplifies to

PEN-BCE = −log(p_i) + α · max(0, p_FN − p_i)².

Using this equation, the PEN-BCE loss values for p_i ranging between 0 and 1 can be plotted, as shown in Figure 9, where the original binary cross-entropy loss (blue dashed lines) and the PEN-BCE loss (for different p_FN values) are compared. PEN-BCE produces higher loss values than the original BCE function, especially at low prediction probabilities (when the p_i values are small). This indicates that the model aims to reduce FN predictions by giving them a larger penalty. Increasing the value of p_FN enlarges the penalty, directing the model more strongly towards minimizing false negatives. Additionally, increasing the value of α imposes more penalty on under-estimated probabilities, encouraging the model to further reduce such errors by penalizing false-negative predictions more stringently. As seen in the graph, as the value of α increases, the loss values grow substantially, especially at low probabilities.
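A sketch of PEN-BCE in Keras is given below. The y = 1 branch matches the simplification above (−log(p) + α·max(0, p_FN − p)²); the symmetric y = 0 penalty with β and p_FP is inferred from the description of Equation (8), so the exact form is an assumption. The default values follow the best configuration reported later (Table 10).

```python
# A sketch of the PEN-BCE loss as a Keras-compatible loss factory.
import tensorflow as tf

def pen_bce(alpha=1.0, beta=5.0, p_fn=0.50, p_fp=0.20):
    bce = tf.keras.losses.BinaryCrossentropy()
    def loss(y_true, y_pred):
        y_true = tf.cast(y_true, y_pred.dtype)
        base = bce(y_true, y_pred)
        # Penalize positives predicted below p_fn (false-negative region) ...
        fn_pen = alpha * y_true * tf.square(tf.maximum(0.0, p_fn - y_pred))
        # ... and negatives predicted above p_fp (false-positive region).
        fp_pen = beta * (1.0 - y_true) * tf.square(tf.maximum(0.0, y_pred - p_fp))
        return base + tf.reduce_mean(fn_pen + fp_pen)
    return loss

# usage: model.compile(optimizer=opt, loss=pen_bce(), metrics=["accuracy"])
```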
Results In this study, we conducted a 2-year survival classification analysis using the 3D ResNet-34 architecture on CT images of lung cancer patients, employing the NSCLC-Radiomics (Lung1) dataset. Subsequently, various ablation studies were conducted to assess their impact on classification efficacy, followed by a detailed analysis.

All experiments detailed in the subsequent sections were conducted on a computing system running Pop!_OS 22.04 LTS, powered by an AMD Ryzen 9 5980HS CPU with Radeon Graphics @ 3.30 GHz, 32 GB of LPDDR4X RAM, and an NVIDIA GeForce RTX 3080 eGPU. The system used CUDA 11.2 and cuDNN 8.1, operating within the Keras framework with a TensorFlow 2.8.0 backend.
During the experiments, the dataset was split into training and testing sets using an 85%-15% ratio. The Lung1 database comprises clinical data and CT images from 422 patients; however, due to errors in the segmentation file of patient LUNG1-128, the clinical data and CT slices of this patient were excluded, and tests were conducted on the remaining 421 patients. Of the 373 uncensored observations, 122 patients (32.7%) had a survival time exceeding 2 years, while 251 patients (67.3%) had a survival time of less than 2 years at the 2-year classification threshold. Maintaining class balance during the train-test split was prioritized; hence, the 2-year classification threshold in the randomly generated training set was adjusted to 30.9% and 69.1% for survival times exceeding and below 2 years, respectively, mirroring the proportions observed in the entire dataset.

Two-Year Survival Classification with 3D ResNet-34 Model In model training, sigmoid was used as the activation function of the output layer, and binary cross-entropy as the loss function. A 5-fold cross-validation procedure was used during training. The optimization method was Stochastic Gradient Descent (SGD), with the initial learning rate set to 2 × 10⁻⁵ and the weight decay parameter to 1 × 10⁻⁶. To ensure better convergence of SGD, Nesterov acceleration was used, with the momentum set to 0.9. The batch size was set to 16, and, if the validation loss remained constant for 25 epochs, the learning rate was reduced by a factor of 0.9. Additionally, early stopping was applied to prevent overfitting. Accuracy (ACC) and Area Under the Curve (AUC) metrics were used to evaluate the models, which were trained for 200 epochs.

The conducted tests involved the exclusion of censored observations, with the 5-slice 3D input images fed into the ResNet-34 model. The tests yielded an average test loss of 0.6380, with average test accuracy (ACC) and area under the receiver operating characteristic curve (AUC) values of 0.6377 and 0.7548, respectively. The ROC-AUC curve for the test is presented in Figure 10.
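A sketch of this training configuration in Keras is shown below. It assumes a compiled-ready 3D ResNet-34 `model` and prepared `x_train`/`y_train`/`x_val`/`y_val` arrays; the EarlyStopping patience is an assumption (not reported in the text), and the reported weight decay of 1 × 10⁻⁶ is mapped here onto Keras' `decay` argument, which is also an assumption.

```python
# A sketch of the training setup described above (TF 2.8 / Keras).
import tensorflow as tf

optimizer = tf.keras.optimizers.SGD(learning_rate=2e-5, momentum=0.9,
                                    nesterov=True, decay=1e-6)
model.compile(optimizer=optimizer, loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])

callbacks = [
    # Multiply the learning rate by 0.9 after 25 stagnant validation epochs.
    tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss",
                                         factor=0.9, patience=25),
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=50,  # assumed
                                     restore_best_weights=True),
]
model.fit(x_train, y_train, validation_data=(x_val, y_val),
          batch_size=16, epochs=200, callbacks=callbacks)
```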
Ablation Study An ablation study was conducted to understand the influence of the number of slices in the input images, data augmentation, censored observations, and the proposed loss function on the survival classification problem under consideration. The contributions of each individual component are elucidated in the following paragraphs.

Effect of Number of Slices: During the tests, the impact of varying the number of slices in the input image fed to the 3D CNN architecture on the classification outcome was examined. In this context, the efficacy of the 5-slice structure used in the experiment of the previous subsection was compared to that of a 4-slice structure, and the findings are presented in Table 5.

Effect of Data Augmentation: During these tests, data augmentation was performed by rotating the input images and shifting them horizontally and vertically; a sketch of such a pipeline is given below. The comparison of the augmented test and the test performed in Section 3.1 is given in Table 6.
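Since the exact transformation ranges are not reported, the values below are placeholders; the snippet assumes the `x_train`/`y_train` arrays from the previous sketch, with the 5 slices treated as the channels of one image.

```python
# A sketch of the augmentation described above (rotation + shifts).
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(rotation_range=15,        # assumed range
                               width_shift_range=0.1,    # assumed range
                               height_shift_range=0.1)   # assumed range
train_flow = augmenter.flow(x_train, y_train, batch_size=16)
```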
Effect of Censored Observations: The Lung1 dataset contains 11.4% censored observations. Katzman et al. stated that, when survival analysis is treated as a standard regression problem, right-censored data should be discarded [42]. Right-censoring arises in survival analysis, where the survival time of a group of individuals with respect to a specific event (such as death) is examined: some individuals may not experience the event during the observation period or may fail to report their outcomes. Such instances are termed right-censored because the event dates are censored from the right side (i.e., beyond the end of the observation period). Since survival analysis was treated as a survival classification problem in this study, extra tests were performed with the censored data included. In those tests, if a patient's follow-up exceeded 730 days (2 years), the patient's survival class was set to more than 2 years. To observe the effect of censored observations, the classification performance without censored observations (373 patients) and with censored observations (421 patients) was compared, as shown in Table 7.
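A sketch of the two labeling regimes compared in Table 7 is given below: without censoring, right-censored patients are discarded; with censoring, a follow-up longer than 730 days is enough to assign the "more than 2 years" class.

```python
# A sketch of deriving 2-year labels with and without censored observations.
import numpy as np

def two_year_labels(time_days, event, keep_censored=True):
    # event: 1 = death observed (uncensored), 0 = right-censored follow-up
    time_days, event = np.asarray(time_days), np.asarray(event)
    if keep_censored:
        keep = (event == 1) | (time_days > 730)   # censored but followed > 2 years
    else:
        keep = event == 1                         # uncensored observations only
    labels = (time_days[keep] > 730).astype(int)  # 1 = survived beyond 2 years
    return labels, keep
```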
Effect of the Input Image: In these experiments, the main motivation was to facilitate a more efficient process by utilizing, in addition to the original versions of the patients' CT slices, cropped regions of interest (ROIs) corresponding to the GTV-1 tumor areas. Within this scope, the initial step involves cropping the tumor regions from the images. This requires identifying the surroundings of the tumor regions within the CT slices containing GTV-1 tumors. Subsequently, the midpoint of the minimum rectangle surrounding the tumor region is determined using the endpoints of the tumor boundary. Finally, the tumor region is cropped to a size of 128 × 128 pixels with the midpoint of the rectangle as the center. Figure 11 illustrates the cropped tumor region alongside the CT slice containing the largest GTV-1 circumference for a patient (LUNG1-243). To assess the impact of the input image, 5-slice input images comprising only the GTV-1 RoI regions were provided to the model; the classification results are contrasted with those obtained in the experiment of the previous subsection and presented in Table 8.

Impact of Loss Function: While the binary cross-entropy loss function is commonly employed for binary classification tasks, it may prove inadequate for directly addressing real-world scenarios and imbalanced datasets, as it does not explicitly consider the false positives and false negatives produced by the model. To address this limitation, the proposed PEN-BCE loss function incorporates penalty parameters that specifically target FP and FN occurrences alongside the BCE loss. Consequently, the influence of the proposed loss function on classification success was evaluated by comparing it with the test conducted in the previous subsection; the outcomes are presented in Table 9 and Table 10. The hyper-parameter combinations were selected empirically, and, as indicated in Table 10, the configuration yielding the highest accuracy (ACC) and area under the curve (AUC) was attained with α = 1.0, β = 5.0, p_FN = 0.50, and p_FP = 0.20.

Discussion The study's findings, resulting from the range of tests conducted, are presented below. 1. The newly introduced feature (GTV1-SliceNum) holds significant importance in survival classification, as evidenced by its correlation with the number of tumor slices and survival duration, akin to the relationship observed between tumor thickness and overall survival. Moreover, the number of slices forwarded to the 3D ResNet-34 model influences the classification success. 2. Notably, superior outcomes are achieved when censored data are discarded. This observation aligns with the assertion made by Katzman et al. [42] that right-censored data should be excluded when treating survival problems as standard regression tasks. 3. Classification performance tends to decrease when only the GTV-1 tumor regions, designated as regions of interest, are used in the input image. This may stem from the fact that, beyond the tumor itself, surrounding tissues or other structures relevant to the tumor could also bear significance in survival prediction. Moreover, while convolutional neural networks (CNNs) excel at automatically extracting features from input images, training the model solely on a restricted region might hinder its ability to grasp broader patterns comprehensively. 4. The AUC metric yields more reliable results than the accuracy metric. This is attributed to the AUC metric's superior behavior on imbalanced datasets, as it mitigates the shortcomings of accuracy; specifically, accuracy can inaccurately depict model performance by favoring the larger class even when the model's predictive ability for the smaller class is poor. 5. The proposed novel loss function (PEN-BCE) enhances classification performance and adeptly manages false positive (FP) and false negative (FN) cases.
The comparative analysis of the conducted tests with the studies listed in Table 1 is presented in Table 11. While our study provides valuable insights into the field of survival classification, it is not without limitations. The reliance on a single dataset and the inherent complexities of medical image analysis pose challenges that warrant further exploration in future research endeavors. In conclusion, this study contributes to a deeper understanding of survival classification in lung cancer patients and offers practical implications for clinical decision-making. By addressing the identified gaps and leveraging innovative methodologies, future research can continue to advance the field toward more accurate and personalized prognostic models.

Conclusions

In this study, we aimed to address various aspects of survival classification in lung cancer patients using advanced image analysis techniques and novel methodologies. Through a comprehensive analysis of the Lung1 dataset, several key findings emerged. The primary conclusion of our study is that the integration of imaging features and a novel loss function significantly improves the performance of survival predictions for lung cancer patients. Our investigation revealed the importance of incorporating detailed features, such as the number of tumor slices, and of utilizing the surrounding tissues in the input image for improved classification accuracy. This finding mirrors the relationship between tumor thickness and overall survival and emphasizes how critical a detailed examination of tumor structure is in survival prediction. In the clinic, tumor sizes are usually assessed by systems such as TNM staging, but the new quantitative feature revealed by our study may contribute to the development of more accurate and personalized prognostic models. The 3D CNN architecture used in this study can automatically extract a wide range of features from the CT images; these features can capture complex patterns, textures, and spatial relationships within the tumor and surrounding tissues. Furthermore, the effectiveness of the proposed PEN-BCE loss function in handling false positive and false negative cases was demonstrated, leading to enhanced classification performance. This is of great importance for improving model performance, especially considering that misdiagnoses can have serious consequences in the medical field. Notably, our results surpassed those reported in previous studies, underscoring the significance of our approach in advancing the state of the art in survival classification for lung cancer, demonstrating the potential of AI-based approaches, and providing an important basis for future research in this field. The predictive models and post-treatment monitoring pathways used in current clinical practice are generally based on standard clinical parameters and imaging techniques. This study demonstrates how effective image analysis and innovative methodologies can be in clinical applications and can make significant contributions to the development of clinical decision support systems and the creation of more personalized treatment strategies.
For future research, testing the model on larger and more diverse datasets will increase the generalizability of the findings. Moreover, applying similar methodologies to different tumor types and other types of cancer could expand the overall performance and scope of application of the model. In addition, the development of more integrated and comprehensive models for post-treatment monitoring pathways and long-term follow-up of patients may provide more accurate and reliable results in survival analysis.

Figure 1. Block diagram of the proposed methodology. (a) Original CT slice images in the Lung1 database. (b) Pre-processing steps, including detection of slices containing GTV-1, finding the slice with the largest tumor area, detection of the neighboring slices of this slice, and preparation of the 5-slice input image in accordance with the CNN model. (c) Training the 3D ResNet-34 model with BCE and PEN-BCE losses for survival classification. (d) The 2-year cut-off survival classification results and evaluation with the ROC-AUC performance metric.

The label GTV-1 carries location information for the gross tumor volume, Lung-Left for the left lung, Lung-Right for the right lung, Spinal-Cord for the spinal cord region, and Esophagus for the esophagus. Each patient's file may contain different types and numbers of segmented images; the type and number of segmented images are not standard. For example, the number of CT images (number of slices) taken from patient LUNG1-001 is 134, with a total of 358 segmented images labeled 139 Left-Lung, 134 Right-Lung, 84 Spinal-Cord, and 21 GTV-1, whereas patient LUNG1-243 has 94 slices and a total of 327 segmented images, 113 of which are labeled Left-Lung, 101 Right-Lung, 94 Spinal-Cord, 6 GTV-2, and 13 GTV-1. Additionally, different types and/or numbers of segmented image data may be present in different slices of the same patient; for example, the different types of segmented data in CT slice number 28 of patient LUNG1-243 are given in Figure 3.

Figure 3. Different types of segmentation data contained in a CT slice (28) of a patient (LUNG1-243).

Figure 4. CT slices from patient LUNG1-243 and representation of tumor areas with GTV-1 labeling.

Figure 6. Illustration of the incorrectly segmented slice caused by interpolation in sequential segmentation in slices of an example patient (LUNG1-127). (a) Original version of Slice 43, Slice 44, Slice 45; (b) interpolation of Slice 44 based on Slice 43 and Slice 45.

Figure 7. An example input image for the model.

Figure 8. Importance score of the GTV1-SliceNum feature in classification for the Lung1 dataset. (Left) Decision Tree; (right) Random Forest.

Figure 9. Change of the BCE and PEN-BCE loss functions according to the estimated probability. (Left) α = 1; (right) α = 5.
Figure 12. Comparison between BCE and PEN-BCE. (Left) Test loss; (right) test ROC-AUC.

Table 1. Summary of the literature studies addressing the survival classification problem for lung cancer.

• Proposal of a new loss function, Penalized Binary Cross-Entropy Loss (PEN-BCE), designed to account for false negative (FN) and false positive (FP) values; the effect of this loss function on classification performance is elucidated.
• Surpassing benchmarks established in the existing literature. In this study, performance is evaluated with the ACC and AUC metrics. The 2-year survival classification results have been obtained as 74.34% and 77.68%, respectively, both of which exceed the benchmarks established by methods found in the literature.

Table 2. Clinical data and descriptions in the dataset.
Table 3. DICOM tags and descriptions used.
Table 4. RTSTRUCT tags and descriptions used.
Table 5. Effect of number of slices utilized.
Table 6. Effect of data augmentation.
Table 7. Effect of censored data.
Table 8. Effect of input image.
Table 9. Impact of loss function: binary cross-entropy (BCE) loss (original test) vs. penalized binary CE loss (PEN-BCE) (α = 1.0, β = 5.0, pFN = 0.5, pFP = 0.2).
Table 10. Classification performance of some PEN-BCE hyper-parameters.
Table 11. Comparison of the proposed methods that use classification in a similar manner to our study by using the Lung1 dataset.
15,573.6
2024-06-01T00:00:00.000
[ "Medicine", "Computer Science", "Engineering" ]
ON THE THERMAL BUCKLING BEHAVIOR OF LAMINATED HYBRID COMPOSITE PLATES DUE TO SQUARE / CIRCULAR CUT-OUTS Cut-outs such as circular, rectangular, elliptical and triangular ones are generally used in composite structures as access ports for mechanical and electrical systems, for damage inspection, to serve as doors and windows, and sometimes to reduce the overall weight of the structure. In this paper the effects of cut-outs on the thermal buckling behavior of hybrid composite plates with cross-ply and angle-ply laminates are presented. The effects of eccentric cut-out size for different plate aspect ratios and boundary conditions on the thermal buckling behavior of the cross-ply and angle-ply laminated hybrid composite plates are also investigated. Finite element analysis is performed to calculate thermal buckling temperatures for Kevlar/Epoxy, Boron/Epoxy and E-glass/Epoxy. Several outcomes and behavioral characteristics are discussed, including the effects of cut-out size, shape, plate aspect ratio and boundary conditions. Key Words: Thermal buckling, Hybrid composite plates, Cut-out, Finite element INTRODUCTION Fiber-reinforced composite structures are used in aerospace, marine and automotive applications due to their light weight and directional properties. During the operational life of such vehicles, high temperatures are experienced throughout the structure. As a result of this environmental condition, thermal buckling can occur without the application of mechanical loads, and the thermal stability of composite laminates is sometimes one of the factors governing their design. There are many publications on the thermal buckling of composite plates. Murphy and Ferreira [1] presented the results of a thermal buckling analysis of clamped rectangular plates based on energy considerations. Shariyat [2] worked on the thermal buckling analysis of rectangular composite multilayered plates under uniform temperature rise using a layer-wise plate theory. Kabir et al. [3] presented an analytical solution for the thermal buckling response of moderately thick, symmetric angle-ply laminated rectangular plates clamped along all edges. Li et al.
[4] investigated the axisymmetric vibrations of a statically buckled polar orthotropic circular plate due to uniform temperature rise. Laura and Rossit [5] worked on the thermal bending of thin, anisotropic, clamped elliptic plates; their study deals with the exact analytical solution of thermal bending. Kalyan and Bhaskar [6] studied the buckling of rectangular orthotropic plates subjected to non-uniform compressive loads using the Galerkin method. Lee [7] derived governing buckling equations from the variational principle and used a finite element method to analyze the thermal buckling of laminated composites within a layer-wise theory. Lee and Lee [8] investigated the behavior of thermally post-buckled anisotropic plates; their finite element model is based on the first-order shear deformable plate theory and the von Karman strain-displacement relation to account for large deflections. Prabhu and Dhanaraj [9] researched the thermal buckling of symmetric cross-ply and symmetric angle-ply laminated composite plates using the finite element method based on the Reissner-Mindlin first-order shear deformation theory. Huang and Tauchert [10] investigated the buckling behavior of moderately thick symmetric angle-ply laminates having clamped edges, subjected to a uniform temperature rise. Nath and Shukla [11] investigated the buckling and post-buckling of moderately thick angle-ply laminated composite rectangular plates subjected to a combined in-plane mechanical load and a temperature gradient across the thickness. Shiau et al. [12] studied the thermal buckling behavior of composite laminated plates by means of the finite element method; the thermal buckling mode shapes of cross-ply and angle-ply laminates with various E1/E2 ratios, aspect ratios, fiber angles, stacking sequences and boundary conditions were studied in detail. Jones [13] worked on deriving simple solutions to the most fundamental thermal buckling problems for uniformly heated unidirectional and symmetric cross-ply laminated fiber-reinforced composite rectangular plates that are restrained in-plane at their edges in a single direction on two of the four edges, but are free to rotate on all edges. Barton [14] presented an approximate closed-form solution to compute the thermal buckling response of symmetric angle-ply laminates that are clamped along one edge and free along the other edge; the results are compared with Rayleigh-Ritz solutions. Aydogdu [15] researched the thermal buckling of rectangular cross-ply laminated beams subjected to different sets of boundary conditions by applying the Ritz method. Yapici [16] studied the thermal buckling of symmetric and antisymmetric angle-ply laminated hybrid composite plates with an inclined crack subjected to a uniform temperature rise. Avci et al. [17] performed the thermal buckling analysis of symmetric and antisymmetric cross-ply laminated hybrid composite plates with an inclined crack subjected to a uniform temperature rise. Avci et al. [18] extended their work to the thermal buckling of symmetric and antisymmetric laminated composite plates with clamped and simply supported edges and containing a hole. Then, Avci et al.
[19] studied the thermal buckling of symmetric and antisymmetric cross-ply laminated hybrid composite plates with a hole subjected to a uniform temperature rise for different boundary conditions. Sahin [20] worked on the thermal buckling of symmetric and antisymmetric laminated hybrid composite plates with a hole subjected to a uniform temperature rise for different boundary conditions. Akbulut and Sayman [21] used the finite element method to investigate the buckling behavior of laminated composite plates with central square openings for various boundary conditions and stacking sequences. Erklig and Yeter [22] studied the effects of different cut-outs on the mechanical buckling behavior of plates made of polymer matrix composites; circular, rectangular, square, elliptical and triangular cut-outs were used in the experimental and finite element analyses.

In this paper the effects of rectangular and circular cut-outs on the thermal buckling behavior of hybrid composite plates with cross-ply and angle-ply laminates are investigated based on the first-order shear deformation theory. This study also examines the effect of eccentric cut-out size for different plate aspect ratios and boundary conditions on the thermal buckling behavior of the cross-ply and angle-ply laminated hybrid composite plates. The finite element method is used to calculate critical thermal buckling temperatures for Kevlar/Epoxy, Boron/Epoxy and E-glass/Epoxy.

FINITE ELEMENT FORMULATION

The first-order shear deformation theory, used in the analysis, assumes a linear variation of the in-plane displacement fields u and v through the depth of the plate. The transverse displacement w(x,y) is assumed to be constant throughout the thickness of the plate. The displacement field of a rectangular shear-deformable plate can be expressed as in equation (1). From the large displacement theory, the strain-displacement relations can be written as in equations (2a) and (2b), where ( ),x and ( ),y represent partial differentiation with respect to x and y. The relationship between the stress resultants and the strain terms for the laminated plate may be written as in equation (3), and the shear resultants may be written similarly, where the stretching, stretching-bending and bending stiffnesses are defined in terms of the transformed plane-stress reduced stiffness matrix of the kth lamina, which is a function of the ply angles, and of the lamina thickness (Figure 1). The thermal load vector is given by the corresponding expression for the thermally induced force resultants. Following the procedure given in Reference [23] (equating the first variation of the total potential energy to zero), the governing equation of the problem may be written as in equations (4) and (5), where [K] and [K_G] are the linear and geometric stiffness matrices, respectively. The lowest eigenvalue (λ) gives the buckling temperature Tc.

Figure 1. Geometry of laminated composite plate.

FINITE ELEMENT SIMULATION

Eigenvalue buckling analysis is performed using a finite element analysis program. The finite element analysis of the composite laminate with a square cut-out is performed in ANSYS 11.0 with a first-order shear deformation element (SHELL91).
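The display equations of the formulation above were not recovered in this version of the text. Under the stated first-order shear deformation assumptions, equation (1) and the governing eigenvalue problem (4)-(5) presumably take the standard forms below; the symbols θx, θy (section rotations), {δ} (nodal displacement vector) and ΔT_ref (reference temperature rise) are assumptions introduced for illustration.

```latex
% Hedged reconstruction of the FSDT displacement field (the paper's Eq. (1)):
u(x,y,z) = u_0(x,y) + z\,\theta_x(x,y), \qquad
v(x,y,z) = v_0(x,y) + z\,\theta_y(x,y), \qquad
w(x,y,z) = w_0(x,y).

% Hedged reconstruction of the linearized eigenvalue buckling problem (Eqs. (4)-(5)):
\bigl([K] + \lambda\,[K_G]\bigr)\{\delta\} = \{0\}, \qquad
T_c = \lambda\,\Delta T_{\mathrm{ref}}.
```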
Critical thermal buckling values were found by a computational method. To test the correctness of the finite element model, simply supported (±45₃)T laminated rectangular plates with a/h = 100 and a/h = 80 ratios and the material properties listed below are investigated. The finite element thermal buckling results are compared in Table 1 with the first-order shear deformation theory results of references [2,24] and the higher-order shear deformation theory results of reference [25]. As can be seen in Table 1, the finite element results are nearly the same as the reference results. The geometric model of the laminated plate with parametric dimensions is shown in Figure 2, and the laminated hybrid composite plate material properties are given in Table 2. The critical thermal buckling values are evaluated for cross-ply and angle-ply layers bonded symmetrically with different boundary conditions. The square hole edge is taken as free. The stacking sequences of the hybrid composite plates are listed in Table 3; the letters B, K and G represent Boron/Epoxy, Kevlar/Epoxy and E-glass/Epoxy composites, respectively. Each layer has a thickness of 0.15 mm, and a square plate with a square hole is selected (a/b = 1 and c/d = 1). The c/b ratio represents the ratio of the square hole size to the length of one side of the composite plate, and the h/b ratio represents the ratio of the total thickness to the length of one side of the composite plate.

EFFECT OF CUT-OUT SIZE AND SHAPE

In this section the effect of eccentric square cut-out size is considered. The plate dimension is 120 x 120 mm; the plate normal is aligned in the z direction and the plate lies in the xy plane. Four different boundary conditions are considered. Figure 4 gives the buckling temperatures for cross-ply and angle-ply laminated hybrid composite plates with the cut-out width to laminate width ratio (d/b) varying from 0.0 to 0.5; the cut-out width-to-length ratio (c/d) is taken as 1. The buckling temperature increases by up to 55% as the cut-out size grows from 0.0 to 0.5, while it does not change considerably for cut-out sizes from 0.0 to 0.25. The buckling temperature of the perfect plate initially decreases once the cut-out is opened. Type 4 ((45G/-45B/45K/-45G)S) gives the highest buckling temperatures. Because of design requirements and philosophy, different cut-out shapes may be used, so the effect of a circular cut-out is also taken into account; the cut-out is assumed to be located at the center of the rectangular plates, with the boundary conditions and stacking sequences kept constant.

Thermal buckling temperatures for cross-ply and angle-ply laminated hybrid composite plates with the cut-out diameter to laminate width ratio (d/b) varying from 0.0 to 0.5 are displayed in Figure 5. It is seen that a larger cut-out area leads to a higher buckling temperature. Type 1 gives the better buckling temperature for the circular cut-out, while Type 2 and Type 3 plates give the worst results compared with the perfect plate. Square and circular cut-outs are compared in Figure 6 for material Type 4; as can be seen in the figure, the square cut-out gives a greater thermal buckling load than the circular cut-out for higher d/b ratios.

EFFECT OF PLATE ASPECT RATIO

This section deals with the buckling behavior of perforated cross-ply and angle-ply laminated plates for different plate aspect ratios. In this study the plate aspect ratios are selected to have integer values, i.e.
a/b = 1, 2, 3. The widths of these plates are equal to 120 mm, and all of the cut-outs are positioned at the center of the plates. The buckling temperatures for different cut-out sizes are shown in Figure 6. As mentioned before, the buckling temperature of the rectangular plate decreases with the cut-out dimension. The buckling temperature of plates with aspect ratios of 2 and 3 is not affected by the cut-out size for d/b from 0 to 0.5. The results show that the buckling temperature for an aspect ratio of 1 increases with increasing cut-out size, whereas aspect ratios of 2 and 3 are not much affected by the cut-out size.

Figure 6. Variation of buckling temperature with circular cut-out dimensions for different plate aspect ratios.

EFFECT OF BOUNDARY CONDITION

The boundary condition has a significant effect on the buckling temperature. In this study the cross-ply laminated hybrid composite plates are evaluated for four different boundary conditions: all four edges simply supported (SSSS), two edges simply supported and two edges free (SFSF), all four edges clamped (CCCC), and two edges clamped and two edges free (CFCF). The plate size is 120 mm. Figure 7 shows the buckling temperatures for different cut-out sizes and boundary conditions. Because of the rigidity of the clamped boundary condition, the buckling temperature is higher than for the simply supported boundary conditions.

CONCLUSION

This study considers the buckling response of laminated rectangular perforated hybrid composite plates under temperature loading with different boundary conditions. The laminated composite plates have varying d/b ratio, aspect ratio, cut-out shape and ply orientation. From the present study, the following conclusions can be made:
- The buckling temperature of rectangular plates containing a square cut-out increases with increasing cut-out dimension.
- The fiber orientation angle affects the critical buckling temperature. The plate with a square cut-out and the (45G/-45B/45K/-45G)S layup has the highest buckling temperature, and the plate with the (15G/-15B/15K/-15G)S layup has the lowest buckling temperature.
- The buckling temperature of the unperforated plate is greater than that of plates with small cut-outs; beyond a cut-out ratio (d/b) of 0.1, the buckling temperature increases.
- The buckling temperature of a plate with a square cut-out is higher than that of a plate with a circular cut-out; Type 4 gives the highest buckling temperature at a cut-out ratio of 0.5.
- For integer plate aspect ratios, the buckling temperature increases with higher aspect ratio values.
- The buckling temperatures of perforated hybrid composite plates are highly influenced by the boundary conditions; the buckling temperature for the fully clamped plate is higher than that for the simply supported plates.

Figure 2. Dimensions of the rectangular laminated hybrid composite plate with a square cut-out. The plates are meshed with quadratic composite shell elements, as illustrated in Figure 3.

Figure 3. Typical mesh for a rectangular laminated plate with a square cut-out.

Figure 4. Variation of buckling temperatures with square cut-out dimensions.

Figure 5. Variation of buckling temperatures with circular cut-out dimensions.

Figure 7. Variation of buckling temperatures with square cut-out dimensions for different boundary conditions.
3,151.4
2013-12-01T00:00:00.000
[ "Engineering", "Materials Science" ]
Sharp conditions for scattering and blow-up for a system of NLS arising in optical materials with $\chi^3$ nonlinear response We study the asymptotic dynamics for solutions to a system of nonlinear Schr\"odinger equations with cubic interactions, arising in nonlinear optics. We provide sharp threshold criteria leading to global well-posedness and scattering of solutions, as well as to the formation of singularities in finite time for (anisotropic) symmetric initial data. The free asymptotic results are proved by means of Morawetz and interaction Morawetz estimates. The blow-up results are shown by combining variational analysis and an ODE argument, which overcomes the unavailability of the convexity argument based on virial-type identities. Introduction In this paper, we consider the Cauchy problem for the system (1.1) of nonlinear Schrödinger equations with cubic interaction, with initial datum (u, v)|t=0 = (u0, v0). Here u, v : R × R3 → C, u0, v0 : R3 → C, and the parameters γ, µ are strictly positive real numbers. The system (1.1) is the dimensionless form of a system of nonlinear Schrödinger equations as derived in [29] (see also [30]), where the interaction between an optical beam at some fundamental frequency and its third harmonic is investigated. More precisely, from a physical point of view, (1.1) models the interplay of an optical monochromatic beam with its third harmonic in a Kerr-type medium (we refer to [28] for the latter terminology, as well as for a sketch of the derivation of (1.1)). Models such as (1.1) arise in nonlinear optics in the context of the so-called cascading nonlinear processes. These processes can indeed generate effective higher-order nonlinearities, and they have stimulated the study of spatial solitary waves in optical materials with χ2 or χ3 susceptibilities (or nonlinear responses, equivalently). Let us describe, following [10], the difference between χ2 (quadratic) and χ3 (cubic) media. The contrast basically reflects the order of the expansion (in terms of the electric field) of the polarization vector, when decomposing the electrical induction field appearing in the Maxwell equations as the sum of the electric field E and the polarization vector P. Indeed, for "small" intensities of the electric field, the polarization response is linear, while for "large" intensities of E, the vector P has a non-negligible nonlinear component, denoted by Pnl. Thus, when considering the Taylor expansion of Pnl, one gets the presence of (at least) quadratic and cubic terms whose coefficients χj, which depend on the frequency of the electric field E, are called the j-th optical susceptibilities; for j = 2, 3 they are usually denoted by χ2 and χ3. Therefore quadratic media arise from approximations of the type Pnl ∼ χ2 E², and cubic media are defined similarly. The so-called non-centrosymmetric crystals are typical examples of χ2 materials. Moreover, it can be shown, see [15], that isotropic materials have χ2n = 0 susceptibility, namely the even orders of the nonlinear response vanish. In that case, the leading order in the expansion of Pnl is cubic, and such isotropic materials are called Kerr materials. See the monographs [5,15,31] for more discussion. In addition, we refer to [1,6,7,10,19,24,29,30,36], and references therein, for more insight into the physical motivations and physical results (both theoretical and numerical) concerning (1.1) and other NLS systems with cubic and quadratic interactions.
Models as in (1.1) are therefore physically relevant, and they deserve a rigorous mathematical investigation. In particular, we are interested in qualitative properties of solutions to (1.1). Our main goal is to understand the asymptotic dynamics of solutions to (1.1), by establishing conditions ensuring global existence and describing their long-time behavior, or leading to the formation of singularities in finite time. Let us mention right away that, once the Strichartz machinery has been established, and this is nowadays classical, local well-posedness of (1.1) at the energy regularity level (i.e. H1(R3), mathematically speaking) is relatively straightforward to obtain (see below for a precise definition of the functional space where a fixed point argument is employed). The dynamics of solutions of NLS-type equations is intimately related to the existence of ground states (see below for a more precise definition). The analysis of solitons is a very important physical problem, and the main difference between χ2 media and χ3 media is that, in the latter case, the cubic nonlinearity is L2-supercritical, while in the former the quadratic nonlinearities are L2-subcritical. These two regimes dramatically affect the possibility for the problem to be globally well-posed, and the stability/instability properties of the solitons are different. See [10] for further discussion and a rigorous analysis of solitons in quadratic media. Regarding system (1.1), the existence of ground states and their instability properties were established in a recent paper by Oliveira and Pastor, see [28]. Our aim is to push forward their achievements to obtain a qualitative description of solutions to (1.1), by giving sharp thresholds, defined by means of quantities linked to the ground state, which are sufficient to guarantee a linear asymptotic dynamics for large times (i.e. scattering) or finite-time blow-up of the solutions.

Let us start our rigorous mathematical discussion of (1.1). The existence of solutions is quite simple to obtain. As said above, it is well known that (1.1) is locally well-posed in H1(R3) × H1(R3) (see e.g. [8]). More precisely, for initial data (u0, v0) ∈ H1(R3) × H1(R3) there exists a unique maximal solution, with space-time integrability corresponding to any Strichartz L2-admissible pair (q, r), i.e., $\frac{2}{q} + \frac{3}{r} = \frac{3}{2}$ with 2 ≤ r ≤ 6. See Section 2. In addition, the maximal times of existence obey the blow-up alternative: either T+ = ∞, or T+ < ∞ and lim_{t↗T+} ‖(u(t), v(t))‖_{H1(R3)×H1(R3)} = ∞, and similarly for T−. When T± = ∞, we call the solution global. Solutions to (1.1) satisfy the conservation laws of mass and energy. It is worth introducing from now on the Pohozaev functional and, for later purposes, we rewrite the functional P (see (1.4)) by means of its density. The previous conservation laws can be formally proved by the usual integration by parts; a rigorous justification can then be given by a classical regularization argument, see [8]. In order to introduce another invariance of the equations, let us give the following definition.

Definition 1.1. We say that the initial-value problem (1.1) satisfies the mass-resonance condition provided that γ = 3.

For γ = 3, (1.1) enjoys the Galilean invariance: namely, if (u, v) is a solution to (1.1), then its Galilean transform is also a solution to (1.1), with initial data (e^{ix·ξ}u0, e^{3ix·ξ}v0). As, in this paper, we are interested in the long-time behavior of solutions to (1.1), let us recall the notion of scattering.

Definition 1.2. We say that a global solution (u(t), v(t)) to (1.1) scatters if it asymptotically approaches a free linear evolution in H1 × H1, in the sense of (1.8) below, where S1(t) and S2(t), introduced in (1.9), are the linear Schrödinger propagators.
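The displayed condition in Definition 1.2 was not recovered; consistently with the references to (1.8) and (1.9), it presumably takes the standard form of H¹ scattering, offered here as a reconstruction:

```latex
\lim_{t\to\pm\infty}
\bigl\| (u(t),v(t)) - (S_1(t)u^{\pm},\, S_2(t)v^{\pm}) \bigr\|_{H^1\times H^1} = 0
\quad \text{for some } (u^{\pm},v^{\pm}) \in H^1(\mathbb{R}^3)\times H^1(\mathbb{R}^3).
```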
Note that the set of initial data such that solutions to (1.1) satisfy (1.8) is non-empty, as solutions corresponding to small H1(R3) × H1(R3) data do scatter (see Section 2). As already mentioned above, it is well known that the dynamics of nonlinear Schrödinger-type equations is strongly related to the notion of ground states. Hence, we recall some basic facts about ground state standing waves related to (1.1). By standing waves we mean solutions whose profile (f, g) is a real-valued solution to the system of elliptic equations (1.10), where ω ∈ R is a frequency. It was proved by Oliveira and Pastor, see [28], that solutions to (1.10) exist provided that condition (1.11) holds. Moreover, a non-trivial solution (φ, ψ) to (1.10) is called a ground state related to (1.10) if it minimizes the action functional over all non-trivial solutions to (1.10). Under the assumption (1.11), the set of ground states related to (1.10), denoted by G(ω, µ, γ), is not empty; here A_{ω,µ,γ} denotes the set of all non-trivial solutions to (1.10). In particular, G(0, 3γ, γ) ≠ ∅. It was shown (see [28, Theorem 3.10]) that if the initial data satisfy suitable threshold conditions expressed in terms of a ground state (φ, ψ) ∈ G(0, 3γ, γ), then the corresponding solution to (1.1) exists globally in time. The proof of this result is based on a continuity argument and on the sharp Gagliardo-Nirenberg inequality (1.15), which was established in [28, Lemma 3.5]. Note that in [28] this inequality was proved for real-valued H1-functions; however, we can state it for complex-valued H1-functions as well, since P(f, g) ≤ P(|f|, |g|) and ‖∇|f|‖_{L²} ≤ ‖∇f‖_{L²}.

We are now in a position to state our first main result. The following theorem provides sufficient conditions for the scattering of solutions: for data belonging to the set given by conditions (1.13) and (1.14), solutions to (1.1) satisfy (1.8) for some scattering state (u±, v±).

Theorem 1.1. Assume that the initial data satisfy (1.13) and (1.14). Then the corresponding solution to (1.1) scatters in the sense of Definition 1.2.

Our proof of the scattering results is based on the recent works by Dodson and Murphy [13] (for non-radial solutions) and [12] (for radial solutions), using suitable scattering criteria and Morawetz-type estimates. In the non-radial case, we make use of an interaction Morawetz estimate to derive a space-time estimate. In the radial case, we make use of localized Morawetz estimates and radial Sobolev embeddings to show a suitable space-time bound on the solution. Let us highlight the main novelties of this paper regarding the linear asymptotic dynamics. For the classical focusing cubic equation in H1(R3), scattering (and blow-up) below the mass-energy threshold was proved by Holmer and Roudenko in [17] for radial solutions, by exploiting the concentration/compactness and rigidity scheme in the spirit of Kenig and Merle, see [18]. The latter scattering result was then extended to non-radial solutions by Duyckaerts, Holmer, and Roudenko [14]. To remove the radiality assumption, a crucial role is played by the invariance of the cubic NLS under Galilean boosts, which makes it possible to assume zero momentum for the soliton-like solution. As observed in Remark 1.1, equation (1.1) lacks the Galilean invariance unless γ = 3. Hence we cannot rely on a Kenig-Merle road map to achieve our scattering results, and we instead build our analysis on the recent method developed by Dodson and Murphy, see [12,13]. In the latter two cited works, Dodson and Murphy give alternative proofs of the scattering results contained in [14,17], which avoid the use of the concentration/compactness and rigidity method.
They give shorter, though quite technical, proofs based on Morawetz-type estimates. In our work, by borrowing from [12,13], we prove interaction Morawetz and Morawetz estimates for (1.1), and we prove Theorem 1.1 for non-radial solutions which do not fit the mass-resonance condition, as well as for radially symmetric solutions. In the latter case, instead, we only need (localized) Morawetz estimates, which are less involved than the interaction Morawetz ones, as we can take advantage of the spatial decay of radial Sobolev functions.

Our second main result concerns the formation of singularities in finite time for solutions to (1.1). We state it for two classes of initial data. Indeed, besides the fact that these initial data must satisfy the a priori bounds given by (1.13) and (1.16) (the latter, see below, replacing the condition (1.14) that yields global well-posedness), they can belong either to the space of radial functions or to the anisotropic space of cylindrical functions having finite variance in the last variable. The theorem reads as follows.

Theorem 1.2. We assume moreover that (1.13) and (1.16) hold. If the initial data satisfy these conditions, then the corresponding solution to (1.1) blows up in finite time.

Let us now comment on previously known results about blow-up for (1.1), and highlight the main novelties of this paper regarding the blow-up achievements with respect to the previous literature. In the mass-resonance case, i.e., γ = 3, and provided µ = 3γ = 9, the existence of finite-time blow-up solutions to (1.1) with finite-variance initial data was proved in [28, Theorems 4.6 and 4.8]. More precisely, they proved that if either E9(u0, v0) < 0, or E9(u0, v0) ≥ 0 together with additional threshold conditions expressed in terms of a ground state (φ, ψ) ∈ G(0, 9, 3), then the corresponding solution to (1.1) blows up in finite time. The proof of the blow-up result in [28] is based on the virial identity (1.17) (see Remark 3.3); using (1.17), the finite-time blow-up result follows from a convexity argument. For the power-type NLS equation, this kind of convexity strategy goes back to the early work of Glassey, see [16], for finite-variance solutions with negative initial energy. See the work by Ogawa and Tsutsumi [27] for the removal of the finite-variance hypothesis, at the cost of a radiality assumption. See the already mentioned paper [17] for an extension of the results of Glassey, and of Ogawa and Tsutsumi, to the cubic NLS up to the mass-energy threshold. If we do not assume the mass-resonance condition, or if µ ≠ 3γ, the identity (1.17) ceases to be valid, and the convexity argument is no longer applicable in our general setting. The proof of Theorem 1.2 above relies instead on an ODE argument, in the same spirit as our previous work [11], using localized virial estimates and the negativity of the Pohozaev functional (see Lemma 5.1). We point out that our result not only extends the one in [28] to radial and cylindrical solutions, but also extends it to the whole range of µ, γ > 0. It is worth mentioning that blow-up in full generality, i.e. for infinite-variance solutions with no symmetry assumptions, is still an open problem even for the classical cubic NLS. We conclude this introduction by reporting some notation used along the paper, and by disclosing how the paper is organized. 1.1. Notations. We use the notation X ≲ Y to denote X ≤ CY for some constant C > 0.
When X ≲ Y and Y ≲ X (possibly with two different universal constants), we write X ∼ Y; equivalently, we use the 'big O' notation O. Mixed space-time norms are defined in the usual way, with the usual modifications when either q or r is infinity; when q = r, we use a single exponent. The L^p(R3) spaces, with 1 ≤ p ≤ ∞, are the usual Lebesgue spaces, and the W^{k,p}(R3) spaces, together with their homogeneous versions, are the classical Sobolev spaces. To lighten the notation along the paper, we will avoid writing R3 (unless necessary), as we are dealing with a three-dimensional problem. 1.2. Structure of the paper. This paper is organized as follows. In Section 2, we state preliminary results that will be needed throughout the paper, and we prove some coercivity conditions which play a vital role in obtaining the scattering results. In Section 3, we introduce localized quantities, and we derive localized virial estimates, together with Morawetz and interaction Morawetz estimates, which are the fundamental tools used to establish the main results; these a priori estimates are shown in both the radial and non-radial settings. In Section 4, we give scattering criteria for radial and non-radial solutions. We eventually prove, in Section 5, the scattering results and the blow-up results, by employing the tools developed in the previous sections. We conclude with Appendices A and B, devoted to the proofs of some results used along the paper.

Preliminary tools

In this section, we introduce some basic tools towards the proof of our main achievements. Specifically, we give a small data scattering result, as well as useful properties related to the ground states. We postpone the proofs of some of the following results to Appendix A. 2.1. Small data theory. We have the following small data scattering result, which will be useful in the sequel: under a suitable smallness condition on the initial data, the solution scatters forward in time. Proof. See Appendix A. 2.2. Variational analysis. We first recall some basic properties of ground states in G(0, 3γ, γ) and then show a coercivity condition (see (2.8)), which plays a vital role in obtaining the scattering results. It was shown in [28, Lemma 3.5] that any ground state (φ, ψ) ∈ G(0, 3γ, γ) optimizes the Gagliardo-Nirenberg inequality (1.15). Using the Pohozaev identities (see [28, Lemma 3.4]), we obtain the corresponding identities for the ground-state quantities. To employ some Morawetz estimates in the proof of the scattering theorem, we will also use a refined Gagliardo-Nirenberg inequality. We conclude this preliminary section by giving the following two coercivity results: assuming (1.13) and (1.14), the corresponding solution to (1.1) exists globally in time and satisfies the coercivity estimate for all t ∈ R. Proof. See Appendix A.

Virial and Morawetz estimates

This section is devoted to the proofs of virial-type, Morawetz-type, and interaction Morawetz-type estimates, which are crucial for the proofs of the main Theorems 1.1 and 1.2. 3.1. Virial estimates. We start with the following identities. In what follows we use the Einstein convention, so repeated indices are summed. Lemma 3.1. Let µ, β, γ > 0, and let (u, v) be an H1-solution to (1.1). Then the corresponding identities hold, where N is as in (1.6). Proof. See Appendix B. A direct consequence of Lemma 3.1 is the following localized virial identity related to (1.1). Lemma 3.2. Let µ, γ > 0, and let ϕ : R3 → R be a sufficiently smooth and decaying function. Then the localized virial identity holds for all t ∈ (−T−, T+). The following corollary is easy to get. Proof. See Appendix B.
We now aim to construct the precise localization functions that we will use to obtain the desired main results of the paper. Let ζ : [0, ∞) → [0, 2] be a smooth function satisfying suitable cut-off conditions. We define the function ϑ accordingly and, for R > 0, we define the radial function ϕR. We readily check the corresponding pointwise bounds for all x ∈ R3 and all r ≥ 0. We are ready to state the first virial estimate for radially symmetric solutions, which holds for some constant C > 0 depending only on µ and γ.

We rewrite the localized virial identity using G − K + 3P = 0. As ‖∆²ϕR‖_{L∞} ≲ R^{−2}, the conservation of mass controls the corresponding remainder. The latter, together with ϕ″R(r) ≤ 2 for all r ≥ 0, ‖∆ϕR‖_{L∞} ≲ 1, ϕR(x) = |x|² on |x| ≤ R, and Hölder's inequality, yields the desired bound, where we have used the definition of the density N (see (1.6)). To estimate the last term, we recall the following radial Sobolev embedding (see e.g. [9]), valid for a radial function f ∈ H1(R3): (3.11). Thanks to (3.11) and the conservation of mass, we estimate the exterior contribution, and the claimed bound follows. The proof is complete.

Proof. By (3.7), we have the corresponding identity for all t ∈ (−T−, T+). As ψ″R(ρ) ≤ 2 and ‖∆yψR‖_{L∞} ≲ 1, Hölder's inequality, combined with the conservation of mass, yields a bound in terms of sup_{z∈R} ‖u(t, z)‖². By the radial Sobolev embedding (3.11) with respect to the y-variable, we obtain the corresponding L²x bound. The latter and (3.15) give (3.14). The proof is complete.

Let R > 1 be a large parameter. We define the radial functions, where χR(x) := χ(x/R) and ω3 is the volume of the unit ball in R3. We also define the auxiliary functions and collect below some properties of the above functions; here repeated indices are summed. Moreover, by using integration by parts, we readily derive the associated identities. We are now able to prove the following interaction Morawetz estimates, which play a fundamental role in the proof of the scattering theorem in the non-radial framework.

(3.35) Hence ψR − φR is radial and non-negative, and by the Cauchy-Schwarz inequality we infer the corresponding lower bound. On the other hand, as χR is radial and non-negative, we obtain a representation involving the quantity B(u, v). Notice that B(u, v) is invariant under the gauge transformation (u(t, x), v(t, x)) → (uξ(t, x), vξ(t, x)) := (e^{ix·ξ}u(t, x), e^{iγx·ξ}v(t, x)) for any ξ ∈ R3. Indeed, the gauge parameter ξ(t, z, R) can be chosen explicitly provided that the denominator is non-zero; otherwise we can define ξ(t, z, R) ≡ 0. With this choice of ξ, the corresponding χR-weighted term vanishes. Combining this with (3.36), and then using the above identity, (3.26), (3.27), (3.29), and (3.34), we get the main interaction Morawetz bound.

Now, we consider (3.37). Since (3.42) holds, substituting (3.42) into (3.37) and using Lemma 2.4 with χR instead of ΓR, we see that there exists ν > 0 such that the coercive lower bound holds. By the conservation of mass and the fact that ‖∆χR‖_{L∞} ≲ R^{−2}, the absolute value of the second term on the right-hand side can be bounded accordingly, and this yields the desired implication. By (3.19), the conservation of mass, (2.5), and Sobolev embedding, we control the remaining terms. Using (3.19) again, we obtain further bounds. Finally, as |γ − 3| < η and |∇ΘR(x)| ≲ R, we infer from the conservation of mass, (2.5), and Sobolev embedding the last estimate, which shows (3.25) by choosing σ = ε, J = ε^{−3}, R0 = ε^{−1}, T0 = e^{ε^{−3}}, and η = e^{−ε^{−3}}. The proof is complete.

3.3. Morawetz estimates: radial setting. We now turn our attention to the proof of the radial version of the Morawetz estimate, which is essential in the proof of the scattering theorem in the radially symmetric setting. In this context, we take advantage of the radial Sobolev embedding to get some spatial decay.
Scattering criteria

In this section, we give scattering criteria for solutions to (1.1) in the spirit of Dodson and Murphy [12,13] (see also [33]). Let us start with the scattering criterion for non-radial solutions, which assumes an a priori bound by some constant E > 0. Then there exist ε = ε(E) > 0 small enough and T0 = T0(ε, E) > 0 sufficiently large such that if, for any a ∈ R, there exists t0 ∈ (a, a + T0) such that the smallness condition (4.2) holds, then the solution scatters forward in time.

Proof. By Lemma 2.1, it suffices to show that (4.3) holds for some T > 0. To prove (4.3), we first split the Duhamel integral. By Sobolev embedding, Strichartz estimates, and the monotone convergence theorem, there exists T1 > 0 sufficiently large such that if T > T1, then the corresponding tail is small. We take a = T1 and T = t0, where a and t0 are as in (4.2), and we decompose the remaining integral into two pieces, the second of which we call H2. To estimate H2, we observe that, by choosing ε small enough, we get (4.5). By Sobolev embedding and Strichartz estimates, (4.2) and (4.5) together imply (4.6). On the other hand, we claim (4.7). In fact, by Strichartz estimates, the dispersive estimate (A.2), Young's inequality, and interpolation, (4.7) follows. Collecting (4.4), (4.6), and (4.7), we obtain (4.3), and the proof is complete.

Let us now give an analogue of the previous criterion in the radial setting: under the corresponding assumptions, the solution scatters forward in time.

Proof. Let ε > 0 be a small constant. By Lemma 2.1, it suffices to show the existence of T = T(ε) > 0 such that (4.10) holds. To show this, we follow the argument of [12, Lemma 2.2]. By the Strichartz estimates and the monotone convergence theorem, there exists T = T(ε) > 0 sufficiently large such that (4.11) holds. As in the proof of Proposition 4.1, we decompose the integral. By (4.9), and enlarging T if necessary, we obtain a smallness bound involving ̺R(x) = ̺(x/R), with ̺ : R3 → [0, 1] a smooth cut-off function. Using the fact (see Lemma 3.1) that ∂t(|u|² + 3γ|v|²) = −2∇ · Im(\bar{u}∇u) − 6∇ · Im(\bar{v}∇v), together with (4.8) and ‖∇̺R‖_{L∞(R3)} ≲ R^{−1}, an integration by parts and the Hölder inequality yield the key localized bound. Taking R sufficiently large so that R^{−1}ε^{−1/4} ≪ ε², we infer from (4.12) that (4.14) holds, with right-hand side ≲ ε. Thanks to the radial Sobolev embedding (3.11), we deduce from (4.8) and (4.14) the pointwise decay of u; a similar estimate holds for v. In particular, we get (4.15). Moreover, from the local theory, Sobolev embedding, and Strichartz estimates, we control the nonlinear terms. On the other hand, the same argument developed in the proof of (4.7) applies here. Collecting (4.11), (4.16), and (4.17), we prove (4.10), and the proof is complete.

Proofs of the main Theorems

By exploiting the tools obtained in the previous parts of the paper, we are now able to prove the scattering results for non-radial and radial solutions to (1.1) given in Theorem 1.1. See [26,33,34] for analogous results for NLS systems of quadratic type. Proof of Theorem 1.1 for radial solutions. We fix ε > 0 and R as in Proposition 4.2. From (3.48) and the mean value theorem, we infer that there exist sequences of times tn → ∞ and radii Rn → ∞ along which the localized mass is small. Choosing n sufficiently large so that Rn ≥ R, the Hölder inequality yields a bound for ∫_{|x|≤R} (|u(t, x)|² + 3γ|v(t, x)|²) dx in terms of R. 5.2. Proof of the blow-up results. It remains to prove the blow-up results stated in Theorem 1.2. Let us start with the following observation. We are now able to provide a proof of Theorem 1.2.
To the best of our knowledge, the strategy of using an ODE argument, when classical virial estimates based on the second derivative in time of the (localized) variance break down, goes back to the work [4], where the radial fractional NLS is investigated. See instead [11,22] for some blow-up results for quadratic NLS systems.

Proof of Theorem 1.2. We only consider the case of radial data, as the one for Σ3-data is treated in a similar manner using (3.14). Let (u0, v0) ∈ H1 × H1 be radially symmetric and satisfy either Eµ(u0, v0) < 0 or, if Eµ(u0, v0) ≥ 0, assume moreover that (1.13) and (1.16) hold. Let (u, v) be the corresponding solution to (1.1) defined on the maximal time interval (−T−, T+). We only show that T+ < ∞, since the proof of T− < ∞ is similar. Assume by contradiction that T+ = ∞. By Lemma 5.1, for ε > 0 sufficiently small there exists c = c(ε) > 0 such that the corresponding differential inequality holds. For t1 > t0, we integrate over [t1, t] to obtain the resulting bound for all t ≥ t1. In particular, we have MϕR(t) ≤ −Az(t) → −∞ as t ↗ t*, hence K(u(t), v(t)) → +∞ as t ↗ t*. Thus the solution cannot exist for all times t ≥ 0. The proof is complete.

Appendix A. Proofs of Lemmas 2.1, 2.2, 2.3, and 2.4

Let I ⊂ R be an interval containing zero. We recall that a pair of functions (u, v) ∈ C(I, H1(R3)) × C(I, H1(R3)) is called a solution to the problem (1.1) if (u, v) satisfies the Duhamel formula (u(t), v(t)) = (S1(t)u0, S2(t)v0) + i ∫₀ᵗ (S1(t − s)N1(u, v)(s), S2(t − s)N2(u, v)(s)) ds, where N1, N2 denote the cubic nonlinearities of (1.1). The linear operators S1 and S2 introduced in (1.9) satisfy the following dispersive estimates: for j = 1, 2 and 2 ≤ r ≤ ∞, ‖S_j(t)f‖_{L^r} ≲ |t|^{−3(1/2 − 1/r)} ‖f‖_{L^{r′}} for all t ≠ 0, which in turn yield the following Strichartz estimates: for any interval I ⊂ R and any Strichartz L2-admissible pairs (q, r) and (m, n), i.e., pairs of real numbers satisfying $\frac{2}{q} + \frac{3}{r} = \frac{2}{m} + \frac{3}{n} = \frac{3}{2}$, the standard homogeneous and inhomogeneous bounds hold for j = 1, 2, where (m, m′) and (n, n′) are Hölder conjugate pairs. We refer the reader to the books [8,23,32] for a general treatment of Strichartz estimates for NLS equations. We are now ready to prove Lemma 2.1. In the following, we provide the proofs of Lemmas 2.2, 2.3, and 2.4.
6,687.8
2020-11-27T00:00:00.000
[ "Mathematics" ]
Excellent option for mass testing during the SARS-CoV-2 pandemic: painless self-collection and direct RT-qPCR The early identification of asymptomatic yet infectious cases is vital to curb the coronavirus disease 2019 (COVID-19) pandemic and to control the disease in the post-pandemic era. In this paper, we propose a fast, inexpensive and high-throughput approach using painless nasal-swab self-collection followed by direct RT-qPCR for the sensitive PCR detection of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). This approach was validated in a large prospective cohort study of 1038 subjects, analysed simultaneously using (1) nasopharyngeal swabs obtained with the assistance of healthcare personnel and analysed by classic two-step RT-qPCR on RNA isolates and (2) nasal swabs obtained by self-collection and analysed with direct RT-qPCR. Of these subjects, 28.6% tested positive for SARS-CoV-2 using nasopharyngeal-swab sampling. Our direct RT-qPCR approach for self-collected nasal swabs performed well, with results similar to those of the two-step RT-qPCR on RNA isolates, achieving a positive predictive value of 0.99 and a negative predictive value of 0.98 (cycle threshold [Ct] < 37). Our research also reports on grey-zone viraemia, including samples with near-cut-off Ct values (Ct ≥ 37). In all investigated subjects (n = 20) with grey-zone viraemia, the ultra-small viral load disappeared within hours or days with no symptoms. Overall, this study underscores the importance of painless nasal-swab self-collection and direct RT-qPCR for mass testing during the SARS-CoV-2 pandemic and in the post-pandemic era. Introduction Despite highly promising vaccines for coronavirus disease 2019 (COVID-19), the key to bringing the pandemic under control worldwide and normalising all aspects of daily life in the near future is to combine vaccination with existing preventive measures and effective mass testing to detect individuals in the acute phase [25]. Therefore, cheap, easy, rapid, sensitive and high-throughput testing strategies are critical. Despite the introduction of promising rapid antigen tests, RT-qPCR protocols, which can detect severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) nucleic acid in respiratory tract specimens, remain the gold standard for COVID-19 diagnostics, mainly due to their excellent sensitivity and specificity [12,22]. Additionally, there is an urgent need to find reliable alternatives to sample collection by healthcare personnel in order to expand testing capacity and to provide easier access to testing and painless sampling [26,29]. Many challenges remain regarding the interpretation of the obtained RT-qPCR data, mainly relating to grey-zone viraemia (which includes samples with near-cut-off cycle threshold [Ct] values in RT-qPCR) and to the identification of variants of concern and their influence on diagnostic settings. The present study argues for a transition to painless self-collected nasal swabs and direct RT-qPCR to accelerate and streamline COVID-19 diagnostics during the pandemic and in the post-pandemic era. Additionally, RT-qPCR results in the grey zone are discussed, as these subjects may carry an ultra-low viral load without developing a specific immune response.
Materials and methods

In this prospective study, performed in October and November 2020 at the testing centres of University Hospital Olomouc and Sumperk Hospital, Czechia, 1038 enrolled subjects underwent nasopharyngeal-swab sampling carried out by healthcare personnel, followed by self-collected nasal-swab sampling for SARS-CoV-2 RT-qPCR detection on the same day. All collected swabs were stored at 4 °C and analysed within 24 h of sampling. The study design is shown in Fig. 1. The subjects signed informed consent, approved by the Ethical Committee of University Hospital Olomouc, and completed a questionnaire comparing their comfort during both types of sampling.

Two-step RT-qPCR was performed on the nasopharyngeal swabs collected by the healthcare personnel in 2 ml of universal transport media (UTM, COPAN Diagnostics Inc.). Viral RNA isolation was performed on 200 μl of swabs in UTM using an automated nucleic acid magnetic bead extraction platform, Zybio EXM 3000 (Zybio, Shenzhen, China), and a nucleic acid extraction kit (Zybio). The final elution volume was 50 μl. RT-qPCR was then performed using a Novel Coronavirus (2019-nCoV) Real-Time Multiplex RT-PCR Kit (LifeRiver, Shanghai, China) to target the ORF1ab, E and N genes, according to the manufacturer's recommendations (20 μl Master Mix and 5 μl isolated RNA; 40 cycles) [24]. The detection limit was five copies per reaction.

Fig. 1 (a) Study design: comparison of nasal-swab self-collection followed by direct RT-qPCR (left panel) vs nasopharyngeal-swab healthcare personnel-assisted sampling with two-step RT-qPCR (RNA isolation followed by PCR; right panel). (b) Nylon-flocked swab tips tested for self-collected nasal swabs: (1) FLOQSwabs MFS-98000KQ (iClean), (2) MFS-97000KQ (iClean) and (3) 520CS01 (COPAN Diagnostics Inc.).

For the direct RT-qPCR, nasal-swab self-collection was individually performed after instruction and under the supervision of trained personnel using nylon-flocked swab tips (FLOQSwabs MFS-98000KQ, iClean; MFS-97000KQ, iClean; 520CS01, COPAN Diagnostics Inc.) (Fig. 1). Different types of swab tip were used in this study because of the shortages caused by the COVID-19 pandemic, but all were similar in performance, as assessed by the expression of the control RP gene. Briefly, mid-turbinate swabbing was performed using a nylon-flocked swab tip inserted ~2.5 cm deep into both nasal cavities for 10 s and gently rotated. The swab was then immersed in 0.2 ml of COVID media in a 1.5-ml Eppendorf-type tube (part of the DIOS-RT-qPCR Kit, IABio, Czechia). Before analysis, the swab was heat-inactivated at 75 °C for 10 min while being shaken, followed by spin centrifugation. Subsequently, the DIOS-RT-qPCR Kit (IABio) was used to target the N1/N2/RP genes (16 μl of Master Mix and 14 μl of inactivated swab eluate in COVID media; 40 cycles) [15]. The detection limit was seven copies per reaction. A comparison of the performance of the DIOS-RT-qPCR Kit, starting with the nasopharyngeal swabs leached in UTM, and of the classic two-step RT-qPCR on RNA isolated from the swabs had already been conducted, with the majority of samples delivering the same results in terms of positivity/negativity and Ct values in both settings [15]. To minimise the potential for false-negative results, positive (SARS-CoV-2 RNA Control 1, Twist Bioscience, USA) and negative (nuclease-free water) controls were added to each run, and the RP gene served as an internal control for the amplification and for the amount of material collected with the swab in each sample.
Strict laboratory procedures were established to avoid false positives, including separate laboratory rooms for the RT-qPCR setup and PCR amplification, with dedicated shoes and coats and no transfer of disposables between the two rooms. The presented Ct data were not normalised for the amount of starting material. The relationship between the Ct values for both sampling methods was calculated by Pearson's correlation using the Analysis ToolPak add-in in Excel. The sensitivity, specificity, positive and negative predictive values and corresponding confidence intervals were calculated using a 2 × 2 table with the help of an online tool (https://www.medcalc.org/calc/diagnostic_test.php).

Results and discussion

The COVID-19 vaccines are promising. However, worldwide vaccination will take months, and the only possible way to control the spread of COVID-19 and normalise daily life is to combine vaccination, preventive measures and effective mass testing to detect infected individuals in the acute phase. The gold standard for SARS-CoV-2 testing is still the classic two-step RT-qPCR with RNA isolation and nasopharyngeal-swab collection by healthcare personnel, as introduced at the beginning of the pandemic. In 2021, we are facing new testing requirements: the test should be painless and easily accessible, limit the exposure of patients and staff to infection, be capable of recognising an infection with more contagious strains and be followed by fast high-throughput assays to obtain results within two hours while maintaining the desired sensitivity. To fulfil these new requirements, we compared painless nasal-swab self-collection followed by direct one-step RT-qPCR with nasopharyngeal-swab collection followed by classic two-step RT-qPCR on RNA isolates in a cohort of 1038 subjects. Of these subjects, 297 (28.6%) were found to be positive and 741 (71.4%) negative for SARS-CoV-2 RNA using the classic two-step RT-qPCR with nasopharyngeal swabs. Upon comparing direct RT-qPCR with two-step RT-qPCR, an agreement of 94.8% (both positive and negative) was demonstrated between the protocols. Moreover, 54 samples (5.2%) were found to be positive using only one protocol (48 samples by two-step RT-qPCR and 6 samples by direct RT-qPCR). Of the 54 positive results from only one protocol, 38 samples (70.4%) exhibited very low viral loads within the defined grey zone (Ct 37-40), corresponding to less than five SARS-CoV-2 copies per reaction, which was below the detection limits of the kits used. These results also emphasised the uneven distribution of the virus through the upper respiratory tract (nasal, nasopharyngeal, left, right) in the case of an ultra-small viral load, in which the virus disappeared within hours or days with no symptoms. In our large real-world cohort, a specificity of 99%, a sensitivity of 95%, a positive predictive value of 0.99 and a negative predictive value of 0.98 were achieved between the direct and two-step protocols in the samples with clear SARS-CoV-2 positivity (Ct < 37) (Table 1). Self-collected swabs in COVID-19 diagnostics and screenings offer significant benefits. They are easy to use and highly acceptable to the public; they limit the exposure of subjects and healthcare personnel to infection and reduce the requirement for personal protective equipment [29], as shown in the diagnostics of other respiratory pathogens [1,14].
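The headline agreement statistics follow from a standard 2 × 2 confusion table comparing the index test against the reference. As a minimal sketch of that calculation, the Python snippet below reproduces values close to those reported; note that the cell counts used here are hypothetical stand-ins (the paper's exact table cells are not reproduced in this text), chosen only so the output lands near the reported figures.

```python
# Sensitivity, specificity, PPV and NPV from a 2x2 table comparing the
# index test (direct RT-qPCR on self-collected nasal swabs) against the
# reference (two-step RT-qPCR on nasopharyngeal swabs).

def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),  # reference positives detected
        "specificity": tn / (tn + fp),  # reference negatives confirmed
        "ppv": tp / (tp + fp),          # P(reference+ | index test+)
        "npv": tn / (tn + fn),          # P(reference- | index test-)
    }

# Hypothetical cell counts, NOT the paper's actual table; chosen only to
# illustrate the arithmetic at the cohort's scale (n = 1038).
for name, value in diagnostic_metrics(tp=249, fp=3, fn=12, tn=735).items():
    print(f"{name}: {value:.3f}")
```

With these illustrative counts, the function returns a sensitivity of ~0.95, specificity of ~0.99, PPV of ~0.99 and NPV of ~0.98, matching the order of the reported results.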
Regarding COVID-19, both nasopharyngeal and nasal swabs are recommended for SARS-CoV-2 RT-qPCR detection [11], and an update on 30 December 2020 added nasal mid-turbinate swabs as another acceptable method for home or on-site self-collection [5]. There is evidence that nasopharyngeal and nasal swabs have a similar performance in SARS-CoV-2 diagnostics, but nasal sampling is painless, less invasive and more comfortable [19,20,26,27,29]. Based on our questionnaire results, 90% of the subjects noted that the nasal swab was more comfortable, while 10% did not feel any difference between the sampling methods.

Table 1 The sensitivity and specificity of direct RT-qPCR on self-collected nasal swabs in samples with clear SARS-CoV-2 positivity (Ct < 37) detected by two-step RT-qPCR on nasopharyngeal swabs. # RT-qPCR positivity is defined as having Ct values lower than or equal to 37. * Direct RT-qPCR positivity for these nasal-swab samples was confirmed by two-step RT-qPCR from RNA isolates.

Fig. 2 Detection of a control human RNase P (RP) gene (a) and virus-specific N1/N2 genes* (b) in self-collected nasal swabs by direct RT-qPCR; the specimen was heat-inactivated before the PCR analysis. To avoid false-negative results in the RT-qPCR, the human RP gene had to be investigated to control for proper specimen collection and amplification reaction inhibition. Positive (red line) and negative (black line) controls from direct RT-qPCR (DIOS-RT-qPCR Kit) were included in each run. *The RT-qPCR setup, primers and probe sequences have been reported previously [15].

Another advantage of self-collection is its independence from testing centres and the reduced COVID-19 exposure risk to healthcare personnel. It may enable sampling to be performed 24/7 on a large scale anywhere, e.g. cars, households, companies and schools (Fig. 1), and thus help identify infected subjects before sports and cultural events, festivals, parties, weddings, business meetings, etc. To avoid incorrect sampling and exclude RT-qPCR inhibition, each sample is controlled by the human RP control gene during direct RT-qPCR analysis, similar to the two-step RT-qPCR (Fig. 2). As shown in our real-world cohort, the majority (> 99%) of enrolled subjects obtained a sample specimen appropriate for SARS-CoV-2 analysis. Another important step in the mass-testing setup is choosing the right analysis method. Rapid antigen testing has shown great promise for symptomatic patients; however, the sensitivity in asymptomatic and presymptomatic subjects reaches only ~73% [12]. Asymptomatic individuals, i.e. those who test RT-qPCR positive but experience no COVID-19 symptoms, occur at a rate of 17-20% [3,4,18], with a higher prevalence in younger subjects [2]. Presymptomatic individuals are those who initially present as asymptomatic and develop symptoms days or weeks later [3,23]. Unrecognised 'asymptomatics' and 'presymptomatics' might both contribute to a sizeable portion of the transmission events in a community because they are more likely to be a part of the community than 'symptomatics', who are isolated [3]. Unfortunately, we did not have the complete clinical data for our cohort of contacts and family members of patients diagnosed with COVID-19. Based on the available follow-up data in approximately a quarter of the positive individuals, we estimated that most of our positive cases were presymptomatic (~80%) and that approximately 20% were asymptomatic subjects (mostly younger individuals aged 18-30 years).
Therefore, mass testing in the post-pandemic era should still be based on RT-qPCR or its combination with antigen testing. For mass RT-qPCR testing, we and others have emphasised the use of direct RT-qPCR on nasopharyngeal swabs because of the minimal handling steps, speed, high throughput and simple design while maintaining the required sensitivity [13,15,16]. This study is the first to validate the use of self-collected nasal swabs for direct RT-qPCR in SARS-CoV-2 testing, demonstrating the required diagnostic accuracy for the detection of infected subjects. To estimate the inhibitory effect of mucosal secretions and epithelia in nasal swabs on direct RT-qPCR performance and sensitivity, we titrated viable SARS-CoV-2 virus with a known number of copies together with swabs from SARS-CoV-2-negative subjects (after thorough wiping of the nasal cavity and insertion of the swabs in COVID medium) and performed RT-qPCR (Fig. 3a). This analysis revealed a ~tenfold inhibition of the direct RT-qPCR reaction compared to the mixing of the titrated virus with COVID medium alone, as calculated from the Ct difference reaching ~3 cycles for the same viral load (Fig. 3b). Nevertheless, due to the low amount of collection medium and the large volume of real subjects' swab eluate added to the RT-qPCR reaction mixture, our approach achieved the sensitivity requested by the FDA [11]. As shown by the scatter plots for the paired SARS-CoV-2-positive samples in Fig. 4a, nasal-swab sampling with direct RT-qPCR correlated with the nasopharyngeal samples across the whole range of Ct values. The lower correlation coefficient may be associated with the analysis of unnormalised Ct values [8] and the diversity of the distribution of the virus on different mucosal surfaces [17]. For better visualisation, Ct values for paired nasopharyngeal and nasal samples in the SARS-CoV-2-positive subjects are shown in Fig. 4b. Importantly, this direct RT-qPCR setup may also be applied to the detection of variants of concern (e.g. SARS-CoV-2 B.1.1.7, B.1.351 and P.1). While RT-qPCR was introduced for COVID-19 diagnosis in December 2019, the interpretation of the results has not changed since then, and many laboratories report 'positive or negative' results based only on Ct values below 40. In general, however, there is growing evidence that diagnostic tests for 'black or white' decisions often do not reflect the reality of clinical settings; some values may fall within the grey zone due to kit sensitivity, uncertainty about the disease status, test reliability or observer, instrumental and biological components of variance [7,28]. In SARS-CoV-2 diagnostics, we and others have declared the diagnostic grey zone to be within the range of Ct 37-40 and have recommended the resampling and subsequent re-testing of samples from clinically affected sites to minimise misclassification errors [28,30,31]. Samples with ultra-low viral loads are often repeatedly analysed in laboratories before the final reporting of the results, which slows down the analytical process. In line with these observations, we followed 20 subjects with test results within the defined grey zone (median Ct 38.1, min-max Ct 37.0-39.9) using direct RT-qPCR on self-collected nasal swabs, all of whom became SARS-CoV-2 negative within two days in subsequent direct RT-qPCR tests (Ct > 40). Additionally, according to information given by the infected subjects, their close contacts did not become infected within the subsequent 14 days.
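The ~tenfold figure follows directly from the ~3-cycle Ct shift: each qPCR cycle ideally doubles the amplicon, so a shift of ΔCt corresponds to a 2^ΔCt fold difference in detectable template. A minimal sketch of this conversion, assuming perfect amplification efficiency (real assays run somewhat lower, which shrinks the estimated fold change):

```python
# Convert an observed Ct shift into a fold change in detectable template.
# efficiency = 1.0 means perfect doubling per cycle (an idealisation).

def fold_change(delta_ct: float, efficiency: float = 1.0) -> float:
    return (1.0 + efficiency) ** delta_ct

print(f"{fold_change(3.0):.1f}-fold")  # 8.0-fold at perfect doubling
print(f"{fold_change(3.3):.1f}-fold")  # ~9.8-fold, i.e. roughly one log10
```

A ΔCt of ~3 to ~3.3 cycles thus maps onto the reported "~tenfold" inhibition of the direct reaction by the swab matrix.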
This may suggest that the ultra-low viral loads in the reported cases are effectively removed by innate immune mechanisms, thus preventing virus amplification and acute infection, with the viral antigen load below the threshold for recognition by specific immunity [6,10]. Therefore, future research should address the association of ultra-low SARS-CoV-2 loads with infectivity, specific immunity and, particularly, the induction of neutralising antibodies. Based on the above arguments, semiquantitative results relative to viral load (high, middle and low viral load, borderline within the grey zone) instead of only qualitative RT-qPCR test results (positive × negative) would better assist clinicians in risk-stratifying patients and their contacts and in choosing more appropriate quarantine conditions [9] and, eventually, more appropriate therapies [21,32].

Fig. 4 (caption) The distance between the Ct values from the two-step RT-qPCR (blue dot) and paired direct RT-qPCR (orange dot) represents the rate of Ct discordance between the techniques for an individual study subject.

This study has several limitations, especially in relation to the specific conditions of COVID-19 RT-qPCR testing. Worldwide, hundreds of thousands of measurements, primarily following WHO guidelines [30,31], are performed daily in approved diagnostic laboratories with different FDA- and CE-IVD-approved kits and/or home-based methods based on different gene sets, detection limits and instruments, and using different disposables due to the shortages caused by the pandemic. Regarding the data, results are required within 48 h after sampling and are reported only qualitatively (negative/positive). The Ct values recorded internally in diagnostic laboratories are not normalised for sampling variability, and the dispersion of the data reaches up to four log units (10,000-fold). The Ct values also differ across instruments, targets and primer/probe designs [8], which makes it difficult to perform a statistical analysis on the data obtained and to compare different approaches. When comparing diagnostic kits and approaches, the only measures of quality in SARS-CoV-2 RT-qPCR diagnostics are a correct result in external quality control runs and the detection limit of 20 copies of SARS-CoV-2 per reaction given by the FDA [11]. In this regard, both kits used in this study met FDA requirements for the detection limit and gave the same results in terms of positivity/negativity in the external quality control runs (for a comparison of Ct values in the WHO (2020) Testing Program for the Detection of SARS-CoV-2 by PCR, see Table 2). Our study demonstrated that painless self-collection followed by direct RT-qPCR represents an excellent option for mass testing during the SARS-CoV-2 pandemic as well as in the post-pandemic era. Further enhancement of the testing capacity and lowering of the price per tested subject may be achieved by the RT-qPCR pooling method followed by the re-testing of positive individual samples [33].

Conclusion

This is the first large-scale validation study on the use of painless nasal-swab self-collection in conjunction with direct RT-qPCR, proving the diagnostic utility of this approach for mass SARS-CoV-2 testing. This fast, inexpensive and easy SARS-CoV-2 testing method could significantly increase the capacity of the test programmes needed to control the spread of COVID-19 during the pandemic and in the post-pandemic era.
The ways and means of ITER: reciprocity and compromise in fusion science diplomacy

ABSTRACT ITER (short for International Thermonuclear Experimental Reactor, and the Latin word for 'the way', as in 'the way to new energy'), a controlled thermonuclear fusion experiment currently being built in Cadarache, France, is one of the world's largest technoscientific collaborations. ITER's complex organisation is rooted in decisions taken during the early negotiation phase in the 1990s. This article focuses on this initial period of the ITER negotiations, showing the importance of reciprocity and compromise in the organizational decisions of the project. These decisions were enacted by actors and organisations who strived to keep ITER together through continuous 'backstage' diplomacy work. This work included finding acceptable compromises for the involved Parties on both a diplomatic and scientific level. Looking closely at such work reveals the entangled character of science and diplomacy in large international technoscientific collaborations, as well as the need for compromise to make a project like ITER materialise.

Introduction

The construction of the controlled nuclear fusion experiment ITER (short for International Thermonuclear Experimental Reactor, as well as the Latin word for 'the way', as in 'the way to new energy'), currently underway in Cadarache, Southern France, is one of the largest technoscientific collaborations in the world today. The project is ambitious in scope as well as in aims: to build knowledge and ability in the fusion field in each of the nations involved while simultaneously constructing a functioning 'first-of-its-kind' reactor. Building a machine such as ITER is, to say the least, a complicated process, where diplomacy, complex management, and negotiation are at the heart of the project. This is true not only for the top-level politicians who sign agreements regarding scientific collaboration on the so-called 'front-stage', or the State level, of diplomatic action. It is also true for the science policy advisors, scientists, engineers, lawyers, economists, and managers working on the project, all the way down to the work site itself, where German welders may work under Indian supervision following French nuclear-safety protocols. One might say that technoscientific diplomacy is performed there on a day-to-day basis. The aim of this article is to unpack the way in which actors and organizations have strived to keep the project together through the work of 'backstage' science diplomacy, enacted through reciprocity and compromise in the initial phase of the project. 1 I will discuss the consequences of early compromises as manifested in the scientific organization of ITER and show how scientific and diplomatic decisions are entangled in this process. In doing this, I argue that while grand gestures and perhaps grand conflict may go on at the front-stage of science diplomacy, on the backstage compromise is necessary in order to make a project like ITER materialize.

Reciprocity and compromise in science diplomacy

While the results of ITER are eagerly awaited by many, the project has also been heavily criticized. 2 One reason for criticism has been its organizational approach, as well as its many delays and lack of efficiency. As late as 2015, an evaluation almost led to the end of the project, before a change in leadership and a revision of the project schedule.
3 Even ITER personnel admit that the way the project is conceived is, in many ways, unsatisfactory from a scientific and project-management point of view. 4 A case in point: ITER uses an in-kind system, where participants contribute by constructing parts of the reactor in their respective countries and then sending them for assembly to Cadarache. Around 85 to 90% of ITER project funds are given through in-kind contribution. This is a particularly high amount for projects of this scale. While an in-kind system is partially used in other big science projects, such as the ongoing construction of the European Spallation Source, the ITER machine is almost entirely built in this fashion, resulting in a network system with a weak central organization. In addition, since the aim is to increase knowledge for all participants, the same component is often built simultaneously by several different parties. The vacuum vessel, for example, is constructed in both Europe and Korea, and the same kind of magnetic coil is manufactured in both Russia and China. Akko Maas, Knowledge Management Officer at ITER, has commented regarding the construction of the vacuum vessel that it 'is the first safety barrier. If you would ask any scientist, technologist, or safety person what not to do, they will tell you, you have to have the vacuum vessel fabricated by one single entity'. 5 Yet, at ITER, it is not. One clue as to why the project was developed in this distributed fashion can be found in the citation from Maas above: it represents a compromise, and it is based on an ideal of reciprocity. The concept of reciprocity has been a core characteristic of diplomacy over time. 6 The ideal of reciprocity in this sense implies equality in exchange, a balance between parties of a negotiation. Everybody has to gain something and gain as equally as possible, although reciprocity does not imply that all parties involved are necessarily of equal standing. 7 ITER is an example of so-called specific reciprocity, namely 'situations in which specified partners exchange items of equivalent value in strictly delimited sequences'. 8 The attempts to make sure that reciprocity is ensured during the negotiation and construction of ITER have, in turn, resulted in quite a few compromises. An example of such a compromise developed during the discussion of the siting, when the EU decided to propose the French site, Cadarache, as the EU candidate, thus bypassing the Spanish site, Vandellos. In order to ensure a certain measure of reciprocity for Spain, the European Agency for the Joint Undertaking, responsible for coordinating the EU's in-kind contribution to ITER, was placed in Barcelona instead of at the ITER site itself. Bernard Bigot, current Director General of ITER, reminisces, apropos of the choice to place the Agency for the Joint Undertaking in Barcelona, that 'it was a trade-off. [. . .] But it was the price to pay in order to get out [of the situation]. And you can see how maybe [the] wrong decision could be taken, wrong decision from the point of view of technical matter, but it is the only way you could make the project move on'. 9 This idea, that compromise is the only way for the project to survive, is echoed by several members of the ITER leadership. As pointed out by G.S. Lee, current Deputy Director General of ITER, they had to 'do it this way, deliver this way, or not do it . . . Either one is not very good, but the worst is not doing it'.
10 Compromise is another central concept in diplomacy, where it has often been seen as a tool of a 'realist' mode of diplomacy. 11 To reach a compromise can be construed as a success, but also as a failure if we consider compromise in the sense of accepting standards that are lower than desirable. Meanwhile, despite its centrality, the concept has not been much explored in recent science diplomacy work. Those observers who consider science and diplomacy as radically different practices often refer precisely to the tension between the supposed realist necessities of diplomacy as opposed to the idealist aspirations of science, and from that perspective, compromise may be a difficult topic to tackle. 12 For example, the central science diplomacy text published by the Royal Society and the American Association for the Advancement of Science (AAAS) in 2010 starts with the proclamation that science and diplomacy are not 'obvious bedfellows', since science is 'in the business of establishing truth, while the opposite may be true of diplomacy'. 13 Similarly, Turekian et al. point out that while science 'is neither inherently political nor ideological, but represents a type of universal language', diplomacy is 'characterized by dialogues, negotiation and compromise'. 14 However, if seen from somewhat less essentialist vantage points, diplomacy has both idealist and realist aspirations, insofar as they can be separated, and so does technoscientific work. In particular, in large international technoscientific projects such as ITER, decisions need to be based on both a scientific and a diplomatic rationale, and often these two are entangled. 15 Thus, in order to understand the way the ITER project is organized, it is vital to explore its origins in reciprocity and compromise as well as the ways diplomatic and scientific rationales are entangled in the process of project negotiation. As pointed out by political scientist Alin Fumurescu, 'compromise looks messy, the dreary stuff of day-to-day politics'. 16 This 'dreary' work of trying to reconcile the wills, means and materialities engaged in a large technoscientific project does not happen on the front-stage of science diplomacy. Instead, it happens on the backstage, among the many actors on different levels working on the project and its design, from science and technology policy advisors to scientists working at research sites. For the purpose of this article, backstage diplomacy encompasses the practices and processes that both lead up to and deal with the consequences of front-stage diplomatic negotiations. This includes the work of the ITER Council, the Management Advisory Committee and the Technical Advisory Committee, as well as the scientific work at the Home Teams and the Joint Central Teams. In this article, I will trace this work in the early history of the ITER project through the two separate so-named Design Activities, the Conceptual and the Engineering, taking place from 1987 to 2001. The history of the ITER project has been told mainly by actors involved in the project over time, from different countries, through articles and informational material on the project. From the Russian perspective, the period that I will be covering has been described in the book ITER: A Decisive Step (ИТЭР: Решающий Шаг), published in 2004 by the Russian Ministries of Atomic Energy and of Education in collaboration with Moscow State University. 17 Two French monographs have also been published on ITER by actors involved in the project.
The first is ITER: le chemin des étoiles, by Robert Arnoux and Jean Jacquinot, which outlines the history of ITER up until 2006. 18 More recently, Michel Claessens, former head of communications of ITER, published a monograph on the history of ITER, while also engaging in some of the current debates. 19 In addition, there is an increasing amount of material discussing the pros and cons of the ITER project, including monographs, articles, podcast episodes, and the documentary 'Let There Be Light' (2017). 20 A recent example is the book Soleil Trompeur by Isabelle Bourboulon. 21 In general, while these texts sometimes touch on the origins of ITER, they rarely delve deeper into the discussions taking place during the 1990s, but instead focus on the later developments regarding the siting procedure and construction start. This article will examine that earlier period more closely, to tease out the processes that laid the groundwork for the later developments. Overall, ITER has so far not been the subject of much historical or social science research. One exception is Patrick McCray, who has examined ITER as an example of a global research project and transnationalism in research, focusing on the role of the European Union. 22 McCray's description of the political game behind the siting procedure in the early 2000s shows how fusion is used on the level of front-stage diplomacy, and highlights the tensions between national programs and international collaboration. These tensions are clear in the processes described in this article. While largely absent from the historical and social science literature, ITER has often been used as an example in recent science diplomacy literature. An article was dedicated to it in the first ever issue of the journal Science & Diplomacy in 2012, and it features in the first chapter of the only edited volume to date on science diplomacy. 23 It is further the first example in the recent and central volume on science diplomacy by Bruno Ruffini, and in the earlier-mentioned monograph on the history of ITER, Michel Claessens classifies ITER as a diplomatic technology (technologie diplomatique). 24 In these texts, ITER is held up as an example of science diplomacy in the sense of diplomacy aiding science collaboration, as well as a successful large international collaboration. Of these authors, Claessens is the only one to scrutinize in more detail what being a 'diplomatic technology' might mean, focusing on the creation of scientific and diplomatic communities as well as technology and expertise. 25 The way ITER is used in this literature aligns with descriptions by Kaltofen and Acuto of how science diplomacy is often conflated with the idea of 'epistemic communities', leading to a rather superficial description of heterogeneous and complex phenomena such as science diplomacy. They argue that one way of deepening the analysis would be to use a more practice-based approach. 26 In the remainder of this article, I will focus on a time period in ITER history that I see as crucial for the set-up and organisation of the project, but which is generally glossed over in the earlier literature. Further, while McCray has shown the kind of rhetoric and processes that have been used on the front-stage of ITER negotiations, my analysis focuses on a different level of negotiations, and the decisions needed for the project to function in practice.
The study also adds to the science diplomacy literature by focusing on practices of compromise and reciprocity, as opposed to ideas of science diplomacy as predominantly an exchange of knowledge and expertise.

The road to ITER

The summit in Geneva in 1985, where Ronald Reagan and Mikhail Gorbachev met for the first time, is often described as the starting point for ITER. Through lobbying by the scientific community from the Soviet Union and the US, and, in particular, through the close relation between Mikhail Gorbachev and scientist Evgenii Velikhov, fusion cooperation was put on the agenda of the meeting, and during the planning phase Japan and the European Community (EC) also became involved. Thus, one of the results of the summit was an agreement to cooperate in the field of fusion. 27 Historically, since the Geneva meetings in 1956 and 1958, when the main fusion (and nuclear) powers, including the Soviet Union, the US and the UK, formally declassified their fusion research, fusion has embodied the possibility for reciprocity and cooperation in a high-profile area, without short-term risks. 28 The possibility to collaborate in an important and highly politicized scientific field, while knowing that applicable results would only be forthcoming in the long term, made fusion fitting for diplomatic relations. Historian Barbara Curli has pointed out that the gesture of declassification can already in itself be seen as using fusion cooperation as a tool for foreign policy. 29 In this vein, fusion also became one of the areas for US and USSR cooperation after the Geneva meeting in 1958. 30 During the Cold War, fusion research would repeatedly intersect with international, national and regional politics, and, as pointed out by McCray, often as a continuation of politics by other means. 31 The Geneva meeting of 1985 was no exception. Moreover, during the 1970s, fusion research had entered a more hardware-focused phase. 32 Up until the end of the 1960s, several types of reactors were envisaged by research groups in different countries, making collaboration on a machine challenging. However, in the late 1960s, the Soviets had an important breakthrough in their tokamak reactor design, causing scientists in other countries to turn to the same design. 33 The Soviet breakthrough led to a heightened interest in fusion technology, and this coincided with the oil crisis and new investments in alternative energy during the 1970s. As a result, three large tokamaks were constructed during the 1970s and came into operation in the late 1970s and early 1980s: the Tokamak Fusion Test Reactor (TFTR, created by the US in Princeton), the Japan Torus-60 (JT-60, by Japan in Naka) and the Joint European Torus (JET, by the EC in Culham). JET was the first international cooperation around the construction of fusion hardware, a result of a European fusion network that had slowly been established over the 1960s, largely due to the diligent work of the director of the Euratom fusion program, Donato Palumbo. JET also became the first so-called 'Joint Undertaking' of the European Communities, and such a setup is also being used for the common European engagement in ITER. 34 While the JET project in many ways showcased some of the problems that the international collaboration around ITER would later face, it paved the way for the possibility of the European Community entering ITER as a single, collective actor.
The heightened interest in fusion in the 1970s had prompted the International Atomic Energy Agency (IAEA) to form an International Fusion Research Council and to gather information regarding the objectives of national fusion programs. 35 In 1978, the Soviet Union proposed an international tokamak collaboration under the auspices of the IAEA, with Japan, the US, the EC and the USSR participating. 36 This led to a number of workshops being initiated by the IAEA, but the discussions did not lead to any concrete designs, and international cooperation around the peaceful uses of fusion came to suffer from the heightened tension between the USSR and the US in the early 1980s. Thus, in 1985, thanks to earlier collaboration, a research infrastructure and an international network and organization around fusion already existed, as well as the beginnings of a tokamak project including both the US and the USSR. In the words of McCray, fusion 'made sense' as an 'arena for Cold War Superpower collaboration'. 37 However, while the initial agreement in Geneva had been a grand political gesture, many actors still hesitated regarding such a large endeavor, and it would take until 1987 for the parties to meet and officially discuss the project. 38 Up until then, while scientific collaboration had taken place between national research groups, national programs had, with the exception of JET, overall retained their autonomy in building larger devices. Thus, all involved actors had their own plans for larger tokamaks, so-called 'next step' devices, and their national (and, in the case of the EC, international) programs competed with ITER for resources. 39 In Europe, the NET device was seen as the 'next step' for a fusion reactor, and its conception was developed in parallel to the ITER discussions. 40 Many in the European scientific community were also suspicious of ITER as a political project between the two superpowers and did not trust that it would become reality. 41 A similar discussion on resource distribution took place in Japan. 42 After having undertaken smaller experiments during the 1950s and 1960s, the Japanese government made the fusion program a prioritized national program in 1975. This resulted in JT-60, and the Japanese researchers worked on their own 'next step' machine, called the Fusion Experimental Reactor (FER). 43 In the US, actors similarly hesitated due to the expected rivalry between the national programs and the larger one, but also due to a reluctance to participate in technology transfer with the Soviet Union. 44 Despite these misgivings, global international cooperation was still seen by many actors as the only way to be able to build a larger demonstration reactor, since no one actor had the resources to do so independently. 45 Fusion is technoscience in the sense that new technologies are needed both to develop a fusion energy system and to produce knowledge on fusion. The research ensembles needed to undertake fusion research are resource-heavy, and 'visible, and accountable to other researchers and to the public, and so become more tightly coupled to diverse communities'. 46 Thus, as noted by historian John Krige in the case of CERN, international collaborations are often 'born more of pragmatic needs than of an idealistic commitment to "universality". These "needs" can be for scientific, technical, economic and even political support'.
47 For example, cost estimates and an inability to nationally produce components for the next generation of fusion machines moved the Soviet Union to suggest the INTOR collaboration; likewise, EC cooperation on JET was motivated by the fact that without Euratom support, the national programs in the EC were not likely to secure state funding. 48 Similarly, the Japanese government saw an opportunity with ITER to share the costs of building a demonstration reactor with other parties, and ITER was in the end re-conceptualized as a continuation of the Japanese program instead of competing with it. 49 This tension between the wills and reactor plans of different national research groups on one hand, and a perceived need for global cooperation to construct the next large fusion device on the other, would to a large extent shape the ITER project organisation.

Ways and means of a fusion reactor: building a structure of reciprocity

Two years after the public declaration of the Geneva summit, in March 1987, delegations from the US, the USSR, the EC (through Euratom) and Japan met in Vienna to initiate formal discussions at the invitation of IAEA Director General Hans Blix. 50 These formal front-stage diplomatic discussions set the framework for the ensuing backstage discussions at the level of the ITER Council, and at ITER work during the Conceptual Design Activities [CDA] that were launched in 1988 and continued until December 1990. These discussions were central in forming a structure to ensure reciprocity between the Parties in a way that made sense on both a scientific and a diplomatic level. 51 This would come to include reciprocity in the form of organizational power, financial responsibility, task division, scientific decision making and representation in terms of staff and work location. The decisions made during this period in turn heavily influenced the work during the Engineering Design Activities [EDA] that followed. The organizational structure of the CDA included the ITER Council, the ITER Management Committee (IMC) and the ITER Science and Technology Advisory Committee (ISTAC). 52 Two members from each Party, including scientists and science policy administrators, were nominated to the ITER Council, responsible for the overall direction of the CDA (and later the EDA), and its execution. 53 All decisions in the Council were to be made unanimously. 54 Krige has noted that the scientific ideal of many physicists is one of shared decision-making and power derived from experience and expertise. However, in reality there are often informal hierarchies in scientific collaboration, whether they are acknowledged or not, and in particular during the construction of a technical artefact, decisions may need to be imposed from above to a considerable extent. 55 Such tensions between ideals of consensus and reciprocity, in the sense of equal sharing of responsibility, and the actual practice of building a machine would become more pronounced during the EDA, as we will see below. While representatives of the Parties would work in a team at their home institutions (EURATOM for the EU, the Japanese Atomic Energy Agency for Japan, the Department of Energy for the US and ROSATOM for the USSR), each of the home teams also sent ten representatives to do joint work in Garching, near Munich. This joint work did not have specific financing; the representatives were stationed there but paid by their home teams.
56 The joint work was done at a European site, but the three main chairing positions were given to the other Parties. The role of ITER Council chairman was given to John Clarke from the US Department of Energy; the ISTAC chairmanship to Boris Kadomtsev from the Kurchatov Institute; and the Management Committee was chaired by Ken Tomabechi from the Japan Atomic Energy Research Institute (JAERI). 57 Thus, at this first stage, reciprocity was ensured through a rigorous division of organizational labor and responsibility. Similarly, in terms of the scientific object at hand, the reactor, these first discussions were an initial attempt to reconcile the different ideas about what a 'next step' machine might entail, and thus ensure reciprocity in terms of scientific gain. Each Party had its own experimental reactors, differing in size, performance and shape, as well as in what the reactor was and was not supposed to do. 58 As pointed out by Denis Willson in his book about JET, 'an immense gulf' lies between collaboration to construct a device designed for a particular research purpose 'and any international cooperation to produce the prototype of any viable reactor'. 59 There are several ways to bridge such a technical and scientific gulf, and the Parties needed to find compromises that all could agree on, and which would benefit not only the ITER project as such, but each institution's own fusion research, in view of the tensions between the larger project and the national programs. However, the 'gulf' between building a research device and an industrial prototype is not only technical. 60 In order to address the transition from the CDA to the EDA, in July 1989 the ITER Council decided to charter a working group to explore possible 'ways and means' for the EDA, to find the 'best reconciliation possible between technical, administrative and political needs and possibilities' on the way forward. 61 The discussions in the group largely concerned reciprocity in terms of intellectual property, procurement, financial organization and siting. Procurement was a central concern, as it determined the task division among Parties, and the aim was to divide Research & Development (R&D) tasks and other contributions between the home institutions in a fair manner. Several models were initially discussed, some more 'centralized' and others 'decentralized'. The more centralized ones meant that a strong central team was responsible for the contract design, with an open call for tender and a selection from offered industrial contracts. The decentralized model, on the other hand, would mean that each of the home teams was responsible for an equal part of the R&D, perhaps even going so far as each Party contributing one quarter of the modules of each component. 62 While the EC had already used a centralized model at JET, and considered it to be more efficient than a decentralized one, it seems as if other Parties found the competitive tendering policy difficult to comprehend and wanted a variant which allowed for less cash flow across frontiers. 63 In the report of the Working Group, a kind of hybrid model was proposed as a compromise, where a general set of tasks was defined at the outset of the EDA, with the help of which the project Director would develop and propose an allocation of comprehensive packages to the Council. After allocation, the individual Parties would themselves organize the fulfilment of their work packages, either by their own personnel or by procurement from other sources.
A Joint Central Team (JCT) under the leadership of the ITER Director would be responsible for design integration work. The working group also suggested a system that did not require the transfer of funds across the Parties' boundaries, and would thereby be independent of exchange rates, labor rates, overhead rates and other similar complications. 64 This 'currency', so to speak, was called the ITER Unit of Account (IUA), or ITER credits. 65 Thus, both the task allocation system and the IUA became tools for the equal division of labor and benefits between Parties with vastly different labor contexts. At the same time, this arrangement allowed all Parties to engage in knowledge production, which was one of the main aims of the project. Thus, ideally, such a model would ensure reciprocity both in terms of scientific benefit and in terms of the amount of scientific work performed. In the EDA agreement, such reciprocity would be formulated as a 'principle of equality of the Parties with regard to their status in, their contributions to, and their benefits from the cooperation'. 66 A more challenging issue proved to be agreeing on a Joint Central Team site. The idea was to have a central team based at a single site during the EDA, to which personnel from the different Parties would be relocated. 67 In February 1991, new front-stage Quadripartite EDA Negotiations started that included the main actors from each institution involved, as well as formal diplomatic representatives from the Parties. The aim of the negotiation was to sign the EDA agreement. At the first meeting, three Parties proposed to host the JCT: Naka (Japan), Garching (Germany) and San Diego (US). 68 At the second negotiation meeting, it became clear that none of the Parties were willing to withdraw their offers, and the quantifiable comparison of the sites did not lead to a clear view of the best option. 69 Thus, a new task force was created with the mission to investigate the consequences of dividing the JCT over two or even all three proposed sites. The draft report of the task force concluded that a single-site solution would be the best option, but having two or three co-sites was also considered viable. The risks of the latter solution, in comparison with the single-site solution, were the loss of the strong central leadership and efficient communication needed for a project of the technical complexity and international character of ITER, as well as increased cost estimates and concerns regarding personnel recruitment. 70 The advantages of a multi-site solution were the close connection to each host Party's home program, as well as heightened visibility and support in the three countries. If successful, it could also provide a new model for international mega-projects, which was an ITER Council objective. 71 The connections to each home program, as well as the heightened visibility, were important in light of the earlier-mentioned competition between the national programs and the international project. ITER could not be seen as a rival to the national programs, especially in terms of shared funding. 72 The list of disadvantages of a multi-site solution was, however, double the length of the list of advantages, although the delegates' view of the risk of a multi-site solution 'varied from small to significant'.
73 Despite this, at the third meeting of the Quadripartite Negotiations, the negotiating parties accepted the solution of three co-centers 'of equivalent importance', as well as naming Moscow (the Soviet Union had not proposed to host a site) the formal seat of the ITER Council. 74 At stake in these discussions was the reciprocity between the Parties, as well as the control over the overall program and its connection to the scientific work of each home team. Considering the strong practical case for a single-site solution, it may be concluded that a diplomatic rationale rather than a scientific one underpinned this decision. Meanwhile, as pointed out above, the national research teams also had an interest in ensuring that the scientific work had a close connection to each national research program, to legitimize their participation in the project. Further, the EDA would also mean a much higher economic stake than the CDA, and all Parties wanted to be assured of benefit from their investments. The decisions to use a de-centralized procurement organization for ITER, as well as the split JCT, are fundamental ones, taken against a backdrop of efforts to ensure reciprocity in a large technoscientific project that exists in the tension between international cooperation and national research, between technoscience and politics, and between backstage and front-stage politics. While these decisions can be seen as necessary to ensure the project's development, they would also lead to complications during the EDA.

A 'particularly challenging' project: the consequences of reciprocity

When the EDA started at the first ITER Council meeting in Vienna in September 1992, the set-up was complex, to say the least. In addition to the ITER Council, the Home Teams (HT), and the Joint Central Team (JCT), now divided over three different Joint Work Sites, two permanent advisory committees, the Technical Advisory Committee (TAC) and the Management Advisory Committee (MAC), were set up with the task to review the work of the JCT and the HT, and report to the ITER Council. 75 Each Joint Work Site had a Deputy Director, elected from outside the host country. 76 Except for the Director and the Deputy Directors, each Home Team also had a Home Team Leader (HTL). Activities continued to take place under the auspices of the IAEA, which provided not only an official multilateral body and a close connection to the rest of the nuclear community not directly involved in ITER, but also practical assistance with publications and economic administration. 77 Adding to the above was a plethora of contact persons, expert groups, special working groups, special review groups and specialized research groups, as well as contacts with industrial actors and the fusion community at large. This complex set-up soon led to challenges and the need for compromises on several levels, as well as struggles between the Home Teams and the Joint Central Team. As a part of this complexity, the set-up of the EDA had a built-in tension between the idea of decision-making through consensus and the delegation of authority needed to manage such a complex hardware project. The Ways and Means working group had emphasized the strong authority of leadership and clear management structure needed due to the division of the JCT, and the EDA agreement clearly stated that the Parties should 'refrain from giving any instructions to their members of the Joint Central Team that may introduce conflict with the Director's management authority'.
78 Meanwhile, the rules guiding procedures for the ITER Council, MAC and TAC, as well as the Special Working Groups, proposed a decision-making process that would strive for consensus, and each Party had to speak with one voice. In the ITER Council, as well as in most Special Working Groups, all decisions were made unanimously, while the TAC and MAC could make majority decisions in case consensus proved impossible. 79 According to the de-centralized model of procurement and R&D tasks, the principle of equality of the Parties would ideally also apply to the task division. Tasks were assigned through a process by which the Director, 'through close interaction with the Home Team Leaders', decided on appropriate task packages, as well as which Home Team to assign them to and how many ITER units each task was worth. The larger task assignments were approved by the Council. 80 Task assignment procedures needed to be constantly discussed and negotiated in terms of new tasks, task package size, and lack of integration due to split or overlapping tasks among the Parties. 81 In addition, national contexts affected the possibilities for Parties to fulfil their tasks. As an example, the Russian Federation remained in the ITER collaboration after the Soviet Union had dissolved, but the Party was clearly having trouble fulfilling both tasks and staffing, owing to the financial and political situation of the early Russian Federation, as well as of Russian academia at the time. 82 A guideline regarding 'Inadequate Performance by a Party on Design and R&D Tasks' had been formulated, but in practice, the Parties could only work around this problem as best they could. Thus, while the rules of reciprocity were very clearly set, they could in this case be compromised if one of the actors could not fulfil their part of the deal. One reason why such a compromise was possible was the long-standing trust-building between the Parties. It is important to note that the Parties involved in ITER were all there due to their prominence in fusion research, and many of the actors had collaborated before. The fusion community that had developed over time had also allowed for a certain trust-building and knowledge about the capacities of the other Parties. In such a community, a certain degree of compromise can be allowed even though the participants do not fulfil exact specific reciprocity. 83 The first years of the EDA were thus marked by the effort of putting into place a functioning management structure for the project, in view of the complex organization outlined above. These difficulties translated, among other things, into conflicts between the ITER Council and the Director, which would lead to the Director, Paul-Henri Rebut, stepping down in 1994, to be replaced by Robert Aymar. 84 Rebut later commented that he considered that the quest for compromise often overrode the real needs of the project. 85 The management issues, however, were not only due to tensions between the Director and the IC, but also to those between the Joint Central Team and the Home Teams. In his inaugural speech to the ITER Council, the new Director described what he considered to be the conditions for achieving consensus in a technical project: that every Party, and every actor in every Party, accept that they will 'follow a decision taken in the interest of the project, rather than in accordance with its own proposal.
It is the HT responsibility to find solutions for conflicting interests, but always to meet the needs of the project, the success of which is vital for their own national program'. 86 Aymar here pointed to the everyday compromises that needed to be made between ITER and the Home Team programs, a tension that had been present from the very start. The tension was eased by giving the Home Teams more influence on task assignment and overall research design, thus increasing the de-centralized decision-making. 87 While Aymar's declaration that the success of ITER was vital for the national programs offers one way of seeing these developments, it was also true that the relevance of ITER to the national programs was necessary to the project's existence. In principle, the most concerning risk for the ITER collaboration, and one main rationale for all the ideals regarding equality and consensus, was the fact that if a Party did not feel it benefited, or achieved reciprocity, from the collaboration, it could simply leave. As noted, each Party had its own special interests to defend regarding what ITER should do, depending on national energy policy and research specialties. As an example, for Japan, fusion was perceived as an important technology to help fill an urgent energy need in the country, and that side was thus more invested in a machine that would quickly lead to industrial production. Meanwhile, in the US, many researchers were in favor of smaller-scale experiments on already existing machines until some of the many unsolved scientific issues were handled, and the Department of Energy did not consider a new type of energy production as urgent. 88 Overall, the US was the Party whose national fusion efforts were the least devoted to tokamaks. 89 These competing interests led to conflicts regarding the scientific specifications of ITER. The technoscientific discussions between the Technical Advisory Committee, the Joint Central Team and the Home Teams of the four Parties show different views on issues such as materials, blanket construction, physics, interpretations of safety parameters, heat calculations, and resources allocated to parallel solutions. 90 As an example, the issue of whether to construct a shielding blanket that would also breed tritium led to discussions on materials, how many resources to spend on breeding capabilities, and the organizational responsibility for its design. Over time, a strategy developed that had the Joint Central Team designing a blanket shield in modules for the first phase of ITER performance, leaving the Home Teams on their own to design breeding blanket modules to be installed at a later phase. 91 This set-up was strongly criticized at an early stage by the Russian TAC members, who argued that the goals of the EDA protocol (which included tritium breeding) could be jeopardized if the blanket program was set up this way. 92 The different views of the characteristics of the machine also affected the main issue of discussion: the size and cost. During the first years, the machine grew to large proportions in order to accommodate all the wishes of the Parties. 93 Over the years, criticism of the project grew in the US, and in 1998 that Party decided to leave the ITER collaboration. While the technical discussions and management issues were certainly part of the reason, the deciding factors for the withdrawal were national finances and politics.
94 In addition to the US leaving the collaboration, Japan, which had been considered the most likely country to host the machine, was assailed by financial crisis and asked for a three-year delay of the planned construction phase. 95 Russia was at this point still not financially able to contribute fully, and could not contribute funds to construction. 96 Moreover, at this point in time, oil prices had gone down, and many of the rationales for a fusion reactor, both as a Cold War project and as a way out of the energy crises, were weaker. Despite this, work was continued by the remaining Parties to produce a final report in 2001, but with the cost of the design cut by fifty percent, thus reducing the technical objectives. 97 This meant reducing the size of the tokamak, as well as compromising on what had been one of the main scientific objectives of the machine, namely to reach ignition in a burning plasma, thus making the plasma sustain itself indefinitely. 98 During the EDA, many of the structures of reciprocity that had been set up for ITER turned out to be problematic for the everyday work of the project. The entanglement between different technical, economic and social interests within and outside of the project, as well as the real fear of Parties withdrawing from the collaboration, had to be handled at every step of the way, resulting in compromises on all levels.

Conclusion

The history of international collaboration on fusion research shows how cooperation has always been entangled with, and carried out through, diplomatic and political means. However, as both the scientific and diplomatic character of the collaboration has changed over time, so have the ways that actors have been forced to consider reciprocity and compromise. In particular, the more the collaborations turned multilateral and hardware-oriented, the more economic, political and scientific compromises needed to be made. In the case of the early history of ITER, this meant arranging for reciprocity in order to ensure both political and scientific participation. Based on such reciprocity, the Parties strived to make decisions that were as close to optimal as possible from both a diplomatic and a scientific point of view, and that all Parties could accept. Meanwhile, in such a project, diplomatic and scientific decisions are entangled. These decisions were made through compromise during the everyday grind of backstage scientific and diplomatic work on different levels of ITER. They needed to accommodate tensions between the aims of the project itself and those of the national research teams, as well as between the will to create and share new scientific knowledge on the one hand and to build a working industrial machine on the other. They were also dependent on the different social, political and economic contexts of the participating Parties. While compromises were made on the levels of organization, scientific practice, and the characteristics of the machine, certain economic issues were more difficult to compromise on. Economic issues instead led to a complete reimagination of the machine, as well as, in the case of Russia, a Party not being able to contribute financially. In the case of the US, economic concerns were cited as the main reason for the Party to leave the collaboration. On the other hand, it is important to note that the US was also the Party that had the least investment in the particular technical solution proposed in the project. Thus, it counted the least on reciprocity in terms of scientific return.
The consequences of the entanglement between diplomatic and scientific decisions continue to show in the ITER project today. After the EDA ended in 2001, the siting and further ITER negotiations would take six more years, and during this time the US rejoined the project, while three other Parties, Korea, China and India, joined. 99 ITER has since become one of the largest scientific collaborations in the world, and it may thus be seen as a successful compromise in terms of the achievements of the project so far. Meanwhile, many organizational structures of the early period of the project have remained, including the de-centralized model and the current in-kind system, which resembles the task assignment procedure. Leadership issues, as well as the management complexity of the geographical split between the Home Teams, the ITER institutions and the ITER site itself, have continued to haunt the project and affect its work. The decentralized organization, in particular, was one of the main points of discussion during the assessment in 2015. Thus, this organization can also be seen as risking compromising the project and its goals. Nevertheless, it is clear that the current organizational and scientific set-up of ITER cannot be fully understood unless it is put in a historical context of both diplomatic and scientific compromises. 80. Clery, "New Review Slams Fusion"; and Butler, "ITER's New Chief." 81. See for example "MAC report and advice, IC-6" (IEDS 6), 139; "ITER R&D Programme Developments and Task Sharing proposals" (IEDS 6), 187; "MAC report and advice, IC-5" (IEDS 6), 59. 82. See for example "ITER EDA status report to IC-6" (IEDS 6), 110. The Russian Federation did not have a Joint Work Site, which meant that less of the joint funds were spent there, something that was also addressed. "Meeting 8 record of decisions, July 1995" (IEDS 8), 15-16. 83. Krige mentions trust-building as one condition for the successful collaboration at CERN. Krige, "Some Socio-historical Aspects," 241. My interviewees also emphasize the ongoing trust-building between the Parties. 84. "IC-6 Record of Decisions, July 1994" (IEDS 6), 97. Traces of this tension can be seen in, for example, the letter from Velikhov (IC Chairman) to P-H Rebut (Director), 5 January 1994, and the answer from P-H Rebut from 11 January 1994 (IEDS 6), 75-76. 85. As quoted by Jaquinot, "Fifty Years in Fusion," 116. 86. "Address to the ITER Council by R. Aymar, 27 July 1994" (IEDS 6), 114. 87. "Address to the ITER Council by R. Aymar, 27 July 1994" (IEDS 6); "MAC report and advice, IC-6" (IEDS 6), 147. See also the set-up of the research projects in "Detailed Design Report, Cost Review and Safety Analysis" (IEDS 11), 145. The Technical Advisory Committee had earlier pointed out a need for improvement in regard to empowerment of the co-centers of the Joint Central Team and their interaction with the Home Teams and the fusion community. "TAC report to IC-5" (IEDS 6), 50. 88. "The Japanese Nuclear Fusion Program," 3; Sessler et al., "Build the International Thermonuclear." McCray also notes this division, quoting a discussion with ITER official Hiroshi Masumoto regarding the fact that fusion research in the USA is often "seen as physics whereas in Japan and Europe it is largely seen as engineering done for eventual energy applications." McCray, "Globalization with Hardware," note 74. 89.
In a report from 1985 the CIA reported that Japan, Western Europe and the USSR devoted in the area of 80% of their fusion programmes to tokamak research in the early 1980s, whereas the same figure in the US was 30%. "The Japanese Nuclear Fusion Program," 3. 90. See for example "MAC report and advice, IC-5" (IEDS 6), 61; Letter from Prof. E. Adamov to Dr. P. Rutherford, 9 December 1994 (IEDS 6), 229; "TAC report to IC-4" (IEDS 4), 85. The TAC also comments that the physics used in the ITER outline design report is not one that has received sufficient acceptance in the fusion community, and that this is a problem. "TAC report to IC-5" (IEDS 6), 51-52. 91. "TAC report to IC-7" (IEDS 6), 220; "ITER Interim Design Report, Cost Review and Safety Analysis" (IEDS 8), 41; "ITER Detailed Design Report, Cost Review and Safety Analysis" (IEDS 11), 140, 144, 146. 92. Letter from Prof. E. Adamov to Dr. P. Rutherford, 9 December 1994 (IEDS 6), 229. 93. The TAC had expressed concerns over the cost early on, and had cautioned against expanding the major radius beyond 7.75 m. "TAC report to IC-4" (IEDS 4), 85; "TAC report to IC-5" (IEDS 6), 49-51. Still, in the Detailed Design Report from 1996, the major radius had been expanded to 8.1 metres. "ITER Detailed Design Report, Cost Review and Safety Analysis" (IEDS 11), 131. Meanwhile, Aymar is quoted in Claessens as saying that the reason the machine became so big to begin with was the demands of the USA and the USSR. Claessens, ITER, 45.
High-efficiency CRISPR/Cas9 multiplex gene editing using the glycine tRNA-processing system-based strategy in maize Background CRISPR/Cas9 genome editing strategy has been applied to a variety of species, and the tRNA-processing system has been used to compact multiple gRNAs into one synthetic gene for manipulating multiple genes in rice. Results We optimized and introduced the multiplex gene editing strategy based on the tRNA-processing system into maize. Maize glycine-tRNA was selected to design multiple tRNA-gRNA units for the simultaneous production of numerous gRNAs under the control of one maize U6 promoter. We designed three gRNAs for simplex editing and three multiple tRNA-gRNA units for multiplex editing. The results indicate that this system not only increased the number of targeted sites but also enhanced mutagenesis efficiency in maize. Additionally, we propose an advanced sequence selection of gRNA spacers for relatively more efficient and accurate chromosomal fragment deletion, which is important for the complete abolishment of gene function, especially for long non-coding RNAs (lncRNAs). Our results also indicated that a design with up to four tRNA-gRNA units in one expression cassette can still work in maize. Conclusions The examples reported here demonstrate the utility of the tRNA-processing system-based strategy as an efficient multiplex genome editing tool to enhance maize genetic research and breeding. Electronic supplementary material The online version of this article (doi:10.1186/s12896-016-0289-2) contains supplementary material, which is available to authorized users. Background Mutants are critical in genetic research for the study of gene function, and gene editing technologies can efficiently create mutations in targeted genes. The clustered regularly interspaced short palindromic repeat (CRISPR)/CRISPR-associated protein (Cas) system has evolved from studies of the defense systems of bacteria into a newly established gene editing tool [1]. The CRISPR/Cas9 system is derived from Streptococcus pyogenes and uses a protospacer adjacent motif (PAM) recognition sequence [2,3]. The Cas9 gene and a 20-bp guide RNA (gRNA) that is complementary to the DNA site being targeted for mutation need to be transformed into the target organism to create a gene disruption. The CRISPR/Cas9 system has been demonstrated for efficient gene disruption in multiple organisms, including bacteria [1], yeast [4], zebrafish [5], fruit flies [6], human cells [7] and plants [8]. In plants, the CRISPR/Cas9 system has been effectively applied in many species, including Arabidopsis thaliana, Citrus sinensis, Nicotiana tabacum, Oryza sativa, Solanum lycopersicum, Sorghum bicolor, Triticum aestivum [9], Zea mays [10][11][12][13], and Glycine max [14]. For highly efficient gene modification, the CRISPR/Cas9 vector construction strategy should always be optimized for use in the specific organism. Gene editing tools with the capability to manipulate multiple targets are of great value, and the CRISPR/Cas9 system is a promising tool for this purpose. Multiplex gene editing can be achieved by expressing Cas9 along with multiple gRNAs, each targeting a different site. Conventional delivery methods involve creating gene constructs containing multiple gRNA-expressing cassettes for multiplex gene editing in one plasmid or using multiple plasmids [15][16][17][18][19][20]. Due to the limitations of the delivery method and plasmid capacity, compacting multiple gRNAs into one synthetic gene is an attractive strategy.
Xie et al. [21] demonstrated that multiple gRNAs could be efficiently produced from a single synthetic gene using a tRNA-gRNA architecture that allows for the precise excision of transcripts in vivo by the endogenous RNases, RNase P and RNase Z, in rice. This strategy could be broadly used to generate multiplex gene editing in both monocot and dicot plants after specific optimization, because the tRNA-processing system exists in virtually all organisms. Although the CRISPR/Cas9 system shows high efficiency for genome modification, it does not always create strong mutations with complete abolishment of gene function, especially for long non-coding RNAs (lncRNAs). Chromosomal fragment deletion between target sites can be achieved with a multiplex gene editing strategy and substantially improves the degree of mutation [21]. Because gene modification by the CRISPR/Cas9 system depends on the gRNA finding and binding its specific site, and because target sites that meet the requirements for targeting cannot always be cut at the same time [5], the selection of gRNA spacers is essential for highly efficient chromosomal fragment deletion. Maize (Zea mays) is one of the most important cereal crops in the world. Here, we report our specific vector construction, sequence design and editing results from using the multiplex gene editing strategy based on the tRNA-processing system in maize. The design of the vectors was optimized for use in maize. We found that the tRNA-processing system-based method improves the efficiency of CRISPR/Cas9 editing in maize, and we propose an advanced gRNA selection strategy for chromosomal fragment deletion. Plant materials The HiII maize parental lines PA and PB were initially obtained from the Maize Genetics Cooperation Stock Center and maintained in the lab. The transgenic lines were generated in the lab. Maize plants were cultivated in the experimental field, greenhouse or growth chambers at the campus of Shanghai University. Construction of plant transformation vectors The maize U6 promoter (U6p) and U6 terminator (U6t) were amplified using gene-specific primers (see Additional file 1: Table S1 for primer sequences) and cloned into the PstI site of the pCAMBIA3301 vector carrying the maize codon-optimized Cas9 gene (from Jinsheng Lai's lab) [12]. gRNAs designed for simplex editing and tRNA-gRNA units (TGUs) designed for multiplex editing were synthesized by Generey (Generey.com) and cloned into the PsiI and XbaI sites between the U6 promoter and U6 terminator. The constructed plasmids, pCAMBIA3301 with UBQp:Cas9 and U6p:gRNA or U6p:TGUs, were used for Agrobacterium-mediated maize transformation. Agrobacterium-mediated transformation of immature maize embryos Agrobacterium-mediated maize transformation was carried out according to Frame et al. [22]. Between 11 and 21 independent transgenic lines were generated for each transformation and genotyped with BAR-specific primers (see Additional file 1: Table S1 for primer sequences). Genomic DNA extraction and PCR/sequencing assay For each BAR-positive transgenic line, three individual tissue samples were used to extract genomic DNA. Maize genomic DNA was extracted with the hexadecyltrimethylammonium bromide method [23]. Target regions were amplified with specific primer pairs flanking the designed target sites (see Additional file 1: Table S1 for primer sequences) using KOD DNA polymerase (Toyobo) to detect mutagenesis at the desired sites.
The PCR products were separated on a 1 % agarose gel and stained using ethidium bromide. The stained gels were imaged using the Gel Doc XRS system (Bio-Rad). Selected PCR products were cloned into the pGEM-T Easy Vector (Promega) for DNA sequencing. For the PCR product of each tissue sample, twenty clones were sequenced to detect stable editing. Zein extraction and quantification Mature kernels of either WT or MADS/Cas9 line 21 were collected from well-filled ears. Zeins were extracted from 50 mg of dried endosperm flour according to previously described methods [24]. Extracted proteins were measured using a bicinchoninic acid protein assay kit (Pierce) according to the instructions. Measurements of all samples were replicated three times. SDS-PAGE was performed on 12 % polyacrylamide gels and visualized by staining with Coomassie brilliant blue (Dingguo). Results Strategy to engineer simplex editing and multiplex editing based on the tRNA-processing system in maize A maize codon-optimized Cas9 driven by the maize ubiquitin (UBQ) promoter was inserted into pCAMBIA3301 (see Methods) to construct two binary CRISPR/Cas9 vectors for either simplex editing or multiplex editing (Fig. 1). These two vectors both contain the BAR gene as a plant-selectable marker. For the simplex editing vector, we selected and cloned a small nuclear U6 RNA promoter from maize (U6p, Chr 8: 165525624-165548023) and the corresponding U6 terminator (U6t) to facilitate the expression of the gRNA cassette in the CRISPR/Cas9 construct. The gRNA with the target sequence (gRNA spacer) is transcribed from the U6 promoter with a definite transcription initiation site at a G nucleotide [15]. Therefore, target sequences are commonly selected for the U6 promoter by searching for 5′-GN (19) NGG motifs (NGG: protospacer adjacent motif, PAM). It was reported that a gRNA spacer with extra nucleotides at the 5′ end, derived from the vector ligation site, could also guide genome editing in plants [25]. It is unclear whether this kind of gRNA spacer affects the editing efficiency. Therefore, we used a PsiI restriction site within the U6p to ensure that the selected gRNA sequence directly followed the U6p without additional nucleotides at the 5′ end (Fig. 2a). For the multiplex editing vector, we compacted a cluster of gRNAs with different spacers into one polycistronic gene to simultaneously produce multiple gRNAs from one primary transcript. Xie et al. [21] have noted that tRNA precursors, pre-tRNAs, are cleaved at specific sites in eukaryotes by RNase P and RNase Z to remove extraneous 5′ and 3′ sequences. This tRNA-processing system serves as an intrinsic mechanism to produce different small RNAs, for example small nucleolar RNA (snoRNA), from a single polycistronic gene. They successfully obtained multiplex editing using multiple gRNAs produced from a single synthetic gene employing the tRNA-gRNA architecture in rice. We designed multiple tRNA-gRNA units (TGUs) for the simultaneous production of numerous gRNAs, utilizing the endogenous tRNA-processing system-based strategy for multiplex editing in maize (Fig. 1b). We selected the maize glycine-tRNA (Chr 5: 15452981-15473056) for the construction of the TGUs, and a spacer sequence starting with a G nucleotide was inserted between the U6p and the first glycine-tRNA to ensure transcription initiation with a G nucleotide (Fig. 2b). The multiple TGUs (MTs) consist of tandem repeats of tRNA-gRNA and are transcribed under the control of the U6p.
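As an aside, the 5′-GN (19) NGG spacer search described above is straightforward to automate. The short Python sketch below is illustrative only: the helper names and the demo sequence are hypothetical and are not taken from the paper. It scans a sequence and its reverse complement for 20-nt protospacers that begin with G and are immediately followed by an NGG PAM, which is the selection rule used here for U6-driven gRNAs.

```python
# Minimal sketch of the 5'-GN(19)NGG spacer search (illustrative only;
# the demo sequence below is made up, not a maize locus).
import re

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def revcomp(seq: str) -> str:
    """Return the reverse complement of an upper-case DNA string."""
    return seq.translate(COMPLEMENT)[::-1]

def find_gn19ngg(seq: str):
    """Yield (strand, position, spacer, pam) for each 20-nt protospacer
    that starts with G and is followed by an NGG PAM. A lookahead is
    used so that overlapping candidates are not missed."""
    pattern = re.compile(r"(?=(G[ACGT]{19})([ACGT]GG))")
    for strand, s in (("+", seq.upper()), ("-", revcomp(seq.upper()))):
        for m in pattern.finditer(s):
            yield strand, m.start(), m.group(1), m.group(2)

# Toy usage:
demo = "TTGACCGTACGTTGACCATTGGCAGGATCGGATCGGATCGGCGGATCCGG"
for strand, pos, spacer, pam in find_gn19ngg(demo):
    print(strand, pos, spacer, pam)
```

A real spacer-selection pipeline would additionally check GC content and the off-target landscape of each candidate, neither of which is modelled here.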
The resulting gRNAs would then direct Cas9 to multiple target sites for genome editing. Effective and efficient multiplex editing in stable transgenic maize via the tRNA-processing system-based strategy To explore the efficiency of genome modification by our multiplex editing strategy, we synthesized three gRNAs for simplex editing and three MTs for multiplex editing. The gRNAs for simplex editing target three transcription factors: a maize MADS gene (GRMZM2G059102), a maize MYBR gene (GRMZM2G091201), and a maize AP2 gene (GRMZM2G050851). The MADS gene and the MYBR gene were both reported to be related to the maize endosperm-specific core transcription factor Opaque2 (Fig. 3a) [26,27]. The GN (19) NGG gRNA spacer sequence selected for targeting MADS was at 194 bp of its open reading frame (ORF). The gRNA spacer sequences for MYBR and AP2 were at 237 and 33 bp of their ORFs, respectively (Fig. 3b). The gRNA-targeted genes for multiplex editing were a maize RPL gene (GRMZM2G024838), a maize PPR gene (GRMZM2G087226), and two reverse overlapping maize long non-coding RNAs (lncRNAs, Chr6:156117287-156117915 & Chr6:156118571-156117710) (Fig. 3a). RPL and PPR may be important for plant development in maize. The two lncRNAs were reported to be regulated by Opaque2 [26] and overlap in opposite orientations in the maize genome. RPL and PPR were each targeted at two sites with a single 2-TGU construct. The two reverse overlapping lncRNAs were targeted with a single 4-TGU construct that included two sites for lncRNA1 and two sites for lncRNA2. The two GN (19) NGG sequences selected for targeting RPL were at 94 and 417 bp in its ORF, and the gRNA spacer sequences for PPR targeted 327 and 553 bp in its ORF. The two GN (19) NGG sequences selected for targeting lncRNA1 were at 62 and 440 bp, and the gRNA spacer sequences targeting lncRNA2 were at 52 and 693 bp (Fig. 3c). (Displaced figure caption: The specific sequences of the core elements of the simplex editing and multiplex editing vectors designed for use in maize, which can be directly used for broad gene knock-out in further maize genetics and breeding studies.) We used conventional Agrobacterium-mediated transformation to produce the stable transgenic lines for the six constructs and evaluated the efficacy of our simplex and multiplex editing system. Twenty-one independent transgenic lines were generated for each of MADS and MYBR. Fourteen independent transgenic lines were generated for each of AP2 and PPR, 18 independent transgenic lines were generated for RPL, and 11 independent transgenic lines were generated for the lncRNAs (Fig. 3a). Mutagenesis frequency was examined in the T0 generation. Inheritance of the edited sites from T0 transgenic lines is desirable for maize genetics and breeding research, and editing that is stable throughout the whole transgenic plant is heritable. For each BAR-positive transgenic line, three individual tissue samples were used to extract genomic DNA. Target regions were amplified with specific primer pairs flanking the designed target sites. The PCR products were cloned into the pGEM-T Easy Vector. For each of the three tissue samples of each transgenic line, twenty clones of the PCR product were sequenced to detect stable editing. In the T0 generation of the MADS/CAS9 plants, 57.1 % (12 lines) carried stable editing, including In/Dels and SNPs. The MADS/CAS9-21 transgenic plant had a biallelic mutation.
We found stable mutations in 66.7 % (14 lines) of the T0 MYBR/CAS9 plants, and 71.4 % (10 lines) of the AP2/CAS9 T0 lines were mutants (Fig. 3a and b). Higher mutagenesis efficiency was achieved in the T0 generation of the multiplex editing lines than in the simplex editing lines. In the RPL/CAS9 plants, 88.9 % (16 lines) of T0 lines had stable mutations. However, the chromosomal-fragment deletion between two targets that can be achieved by the tRNA-processing system, as reported by Xie et al. [21], was not detected in the RPL/CAS9 plants. This indicates that the MTs do not always operate on both targets simultaneously. The PPR/CAS9 plants had stable mutations in 85.7 % (12 lines) of T0 lines. The chromosomal-fragment deletion between target 1 and target 2 was detected in the PPR/CAS9-8, -9 and -13 T0 lines (Fig. 3c). Interestingly, these three lines also carry biallelic mutations. For the lncRNAs, targeted with a single 4-TGU construct that included two sites for lncRNA1 and two sites for lncRNA2, 100 % (11 lines) of T0 lines had mutations. Surprisingly, a large chromosomal-fragment deletion (about 2 kb) extending beyond the sequence between the targets was detected in several transgenic lines (Fig. 3c), while other transgenic lines carried only SNPs at the target sites. Our data also demonstrated that the tRNA-processing system for multiplex editing not only increased the number of targeted sites but also significantly enhanced mutagenesis efficiency in maize (p-value = 0.021). Generation of phenotypic mutants The biallelic transgenic line MADS/CAS9-21 for the MADS gene and the transgenic lines PPR/CAS9-9 and -13 for the PPR gene were selected for further phenotypic analysis. Zeins are the most abundant storage proteins in maize kernels and are encoded by different classes of genes. MADS (GRMZM2G059102) was reported to interact with Opaque2 and activate zein gene promoters. In MADS RNAi kernels, the expression of the 22-kD α-zein genes, the 19-kD α-zein genes and the 50-kD γ-zein gene decreased, and relative differences in these zein protein contents can be observed in MADS RNAi kernels [27]. Quantitative analysis showed that zeins were significantly decreased (by 12.5 %) in MADS/CAS9-21 kernels (Fig. 4a). We also observed differences in the relative contents of the 22-kD α-zein, the 19-kD α-zein and the 50-kD γ-zein proteins between the wild type and MADS/CAS9-21 kernels through SDS-PAGE, while the contents of the 27-kD γ-zein and 14-kD β-zein proteins were not affected (Fig. 4b). In plants, most respiratory chain-related proteins are expressed from the mitochondrial genome and undergo post-transcriptional processes regulated by nuclear genome-expressed factors, including pentatricopeptide repeat (PPR) proteins. PPRs were reported to affect endosperm, embryo and seedling development [28,29]. We observed a significant developmental delay of the PPR/CAS9-9 and -13 plants compared to wild type at 35 days after pollination (35 DAP, Fig. 4c). Discussion Maize is one of the most important cereal crops in the world. Highly efficient and accurate gene modification would benefit maize genetic research and breeding. We provided a framework to design, synthesize and use multiple tRNA-gRNA units for multiplex gene editing with CRISPR/Cas9 in maize (Figs. 1 and 2). These multiple tRNA-gRNA units were expressed under the control of the selected maize Pol III promoter (maize U6p). In this study, we successfully produced simultaneous mutagenesis of multiple genomic loci and deletion of short chromosomal fragments (Fig. 3).
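On the significance claim above: the paper reports p = 0.021 for the simplex-versus-multiplex comparison without naming the test used. Purely as an illustration of one reasonable choice, the sketch below pools the reported T0 line counts and applies Fisher's exact test; the resulting p-value is not expected to reproduce the paper's figure.

```python
# Illustrative re-analysis, not the authors' method: pool the reported
# T0 line counts and compare simplex vs multiplex mutagenesis rates
# with Fisher's exact test.
from scipy.stats import fisher_exact

# Reported mutant / total T0 lines per construct.
simplex_mut = 12 + 14 + 10          # MADS, MYBR, AP2
simplex_total = 21 + 21 + 14
multiplex_mut = 16 + 12 + 11        # RPL, PPR, lncRNAs
multiplex_total = 18 + 14 + 11

table = [
    [simplex_mut, simplex_total - simplex_mut],        # mutant, non-mutant
    [multiplex_mut, multiplex_total - multiplex_mut],
]
odds_ratio, p_value = fisher_exact(table)
print(f"simplex {simplex_mut}/{simplex_total} vs "
      f"multiplex {multiplex_mut}/{multiplex_total}: p = {p_value:.4f}")
```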
Our results showed that the optimized tRNA-processing system-based strategy is a robust and efficient tool for multiplex targeted genome modification in maize. Our results also demonstrated that targeting one gene with two gRNAs using multiple tRNA-gRNA units greatly increased the efficiency of gene knock-out in maize. Compared to the parallel simplex editing system, the tRNA-processing strategy enables significantly higher editing efficiency (p-value = 0.021, Fig. 3). Given the extremely large number of tRNA genes and the fact that RNase P and RNase Z precisely recognize RNA substrates with tRNA-like structures [21,30], there are many other choices of tRNA sequences to be embedded in the multiple tRNA-gRNA units in maize, implying that higher gene knock-out efficiency might be achieved with advanced designs. The mutation ratio of the different constructs targeting different genes varied, ranging from 57.1 to 71.4 % for simplex editing and from 85.7 to 100 % for multiplex editing. Several factors may regulate the mutation ratio, including the efficiency with which a gRNA finds and binds its specific site and the T-DNA insertion site in the genome [31]. Moreover, the mutation efficiency of the CRISPR/Cas9 system is variable in different plant species [32,33]. The tRNA-processing system-based strategy enables the generation of many double-strand breaks (DSBs) in genomic DNA. It may provide an efficient tool to help dissect the molecular process of chromosomal deletion. Due to the differences in the delivery, expression and activity of gRNAs and Cas9, it is not surprising to see some discrepancies in fragment-deletion frequency between stable transgenic plants containing different multiple tRNA-gRNA units (Fig. 3). Compared with RPL, the gRNA spacers selected for PPR were physically closer (approximately 200 bp apart in PPR versus 300 bp apart in RPL) and had a higher sequence similarity (approximately 45 % similarity in PPR versus 25 % in RPL). The accurate chromosomal fragment deletion between two targets occurred only in PPR/CAS9 lines and not in RPL/CAS9 lines (Fig. 3). Based on our results, we propose an improved sequence selection of gRNA spacers for highly efficient chromosomal fragment deletion: the distance between the two gRNA spacers should not be too long, and the sequences of the two gRNA spacers should have high identity. gRNA targets of this type might have similar chromosome structure, binding ability, delivery and activity, causing DSBs to be generated in the genome at the same time. The two reverse overlapping lncRNAs were targeted with a single 4-TGU construct that included two sites for lncRNA1 and two sites for lncRNA2. In plants transformed with this construct, a long-distance deletion extending beyond the sequence between the targets was observed. The very high density and number of targets may have been the cause of this unintended mutation. Because SNPs alone at the target sites were not sufficient to knock out the lncRNAs, the long chromosomal fragment deletion accomplished the complete abolishment of lncRNA function. Furthermore, this also indicates that a design of up to 4 TGUs in the tRNA-processing system-based strategy can still deliver gene modification in maize; designs with more TGUs might also work in maize. Conclusions This is the first report of successful multiplex gene editing using the tRNA-processing system in maize. This optimized tRNA-processing system-based strategy for maize can be broadly used for stable, complete gene knock-out in the future.
We propose that the tRNA-processing system-based strategy improves the efficiency of CRISPR/Cas9 editing in maize. Additionally, advanced sequence selection of gRNA spacers that generate DSBs in the genome at the same time increases the efficiency and accuracy of chromosomal fragment deletions for the complete abolishment of gene function, especially for lncRNAs, which is important for the enhancement of maize genetic research and breeding.
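A closing note on the spacer-similarity criterion proposed in the Discussion: the paper does not state how the spacer similarities were computed, so the sketch below assumes a simple position-wise (Hamming-style) identity between two equal-length 20-nt spacers. The function name and the example sequences are hypothetical, not the RPL or PPR spacers.

```python
# Hedged illustration of a position-wise spacer identity; the actual
# similarity measure used in the paper is not specified.
def spacer_identity(a: str, b: str) -> float:
    """Fraction of identical positions between two equal-length spacers."""
    if len(a) != len(b):
        raise ValueError("spacers must have equal length")
    matches = sum(x == y for x, y in zip(a.upper(), b.upper()))
    return matches / len(a)

# Made-up 20-nt spacers for demonstration only.
spacer_1 = "GACCGTACGTTGACCATTGG"
spacer_2 = "GACCGAACGATGTCCATAGG"
print(f"identity = {spacer_identity(spacer_1, spacer_2):.0%}")
```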
Outdoor STEAM Education: Opportunities and Challenges: There is a consensus that students should be involved in interdisciplinary activities that promote a solid education in STEAM subjects from an early age. The outdoor settings of schools present an advantageous context for STEAM education, allowing for a myriad of learning experiences. To understand how teachers perceive the pedagogical use of the school's outdoor space, a study was carried out in a cluster of schools in a Portuguese city, including one middle school and 10 kindergarten and elementary schools. A mixed methods approach was used, combining a questionnaire for teachers (N = 49) with interviews (N = 8). The results indicate that teachers' perceptions of the characteristics of their school's outdoor spaces either facilitate or hinder the implementation of outdoor pedagogical activities. Most teachers concur that the outdoors provides contact with nature and encourages interdisciplinary and collaborative activities. However, the teachers surveyed admit to using the school's outdoor spaces only occasionally, and this use decreases as the educational level at which they teach increases. The most common use of outdoor spaces is for physical and motor activities, promoting the well-being of children and youth. Although interdisciplinary activities in outdoor spaces are recognised, their implementation is limited and hampered by factors such as the length of curricula and the lack of training for teachers in these approaches. In this sense, there is an urgent need to train teachers in the interdisciplinary use of outdoor spaces to promote a solid education in STEAM subjects. Introduction Addressing pressing contemporary challenges, particularly those related to nature, such as climate change and sustainable food and energy production, as well as healthcare for a growing global population, requires a holistic educational approach capable of connecting different concepts and knowledge [1]. STEAM (science, technology, engineering, arts, mathematics) education has therefore been recommended for this purpose from the earliest ages [1][2][3][4]. Outdoor education has also been gaining prominence, particularly with the growing concern over levels of obesity and sedentary lifestyles among young people [5]. Furthermore, combining physical activity, digital exploration, and outdoor play can motivate and enhance learning [6]. The lessons drawn from the pandemic crisis underscore the unnecessary reliance on traditional educational settings. That period not only highlighted the significance of digital resources but also prompted a re-evaluation of outdoor spaces as valuable environments for learning [1]. Consequently, there is a growing recognition of the need to expand the learning environment beyond the classroom walls. Outdoor spaces and natural areas such as gardens and schoolyards are resources that can complement classrooms, as they provide a meaningful context for outdoor education, enabling numerous informal and formal learning experiences [7]. Nevertheless, in comparison to other educational settings, schoolyards are used less frequently for the learning process [8]. An integrated approach, combining outdoor learning and STEAM education, represents quite a challenge for school communities, especially because "teachers tend to teach content skills in an isolated manner" [2] (p. 205).
To better support pre- and in-service teachers in developing outdoor STEAM learning experiences, this research aims to determine inhibiting factors and opportunities for this integrated teaching approach through teachers' perspectives. Investigating this type of phenomenon demands a comprehensive approach that transcends disciplinary boundaries. Recognising this, a dedicated team consisting of researchers from different schools within the Santarém Polytechnic University was formed. This diverse expertise ensures a well-rounded exploration that integrates insights from education, physical activity, and healthy lifestyles. Theoretical Framework The foundations of STEAM education emerged in the 1990s, in the USA, through the efforts of the National Science Foundation (NSF) with the term SMET (science-mathematics-engineering-technology) and gained greater visibility in the subsequent decade with the expansion of the acronym to STEM, integrating two or more of the disciplines [9,10]. It should be noted that 'Arts' brings together the field of humanities, such as visual and plastic arts, design, literature, psychology, sociology, philosophy, and history, among others [11]. The integration of 'Arts' allows students to make their involvement more effective in a holistic, sensitive, creative, and thinking approach [12]. STEAM education enables the development of varied cognitive and technical skills, as well as intrapersonal and social competencies, fostering self-regulation, efficient communication, healthy relationships, and the ability to make decisions and solve problems [13]. In this way, STEAM education considers students as an active part of the educational process, allowing for the development of autonomy in different types of learning situations. Kang [14] notes that the effects of STEAM experiences are positive on cognitive and affective development. According to Lindeman et al. [15] and McClure et al. [16], the period between infancy and the third grade is crucial for the development of STEM-related thinking dispositions, such as curiosity, investigation, evaluation, and analysis. STEM education offers students contextualised and authentic real-life settings to observe, investigate, and collaborate with others to solve meaningful problems. As such, early STEAM education will also help give students a learning mindset and confidence in the face of challenges.
Despite long-standing concerns about childhood obesity, efforts to address the problem have been insufficient [5]. In recent years, numerous experts have advocated for outdoor activities not only for health and well-being but also to enhance students' engagement in learning [17][18][19]. For instance, a study conducted in Portugal found that the COVID-19 pandemic led to a decline in physical activity and motor skills, highlighting the importance of adopting active learning in outdoor spaces [20]. Indeed, motor activities in outdoor contexts afford interdisciplinarity and transferability among different disciplinary fields because they enhance contextualised learning. Non-linear methods, such as guided discovery and problem solving, can reinforce a student-centred learning process, allowing interconnections between motricity, cognition, emotion, and social competences [21]. One particularly relevant example is the activity of orienteering, which requires the capacity of wayfinding to certain locations, in an unknown place, using a map (or a tool that has the same functions as a map) and, eventually, complementary instruments, such as a magnetic compass. Because orienteering is an outdoor activity that involves travel and enables nature observation, it fosters connections among various disciplines such as physical education, geography, natural sciences, mathematics, or visual arts, at different educational levels [22][23][24]. This activity can also be used in teacher education [25], fostering positive student interest when the disciplines are effectively integrated and teachers are adequately prepared [26,27]. Additionally, the integration of outdoor education with digital tools offers benefits to both students and teachers. Using screen time to promote green time can encourage engagement with nature, while physical activity can motivate outdoor science exploration, ultimately enhancing learning through digital exploration and outdoor play [6]. These innovative and active approaches present numerous opportunities and challenges for educators. For instance, difficulties in integrating STEAM content and strategies into school curricula may arise from teachers' lack of confidence, limited resources, and time constraints [28]. Furthermore, as mentioned earlier, another important challenge is that this interdisciplinary approach involves integrating content from at least two disciplines. According to studies such as those by Brown and Bogiages [29], this has proven to be a major difficulty for teachers in terms of planning and implementing learning activities. In addition to these difficulties, there are challenges associated with outdoor learning. While this approach offers a conducive environment for integrating subjects like mathematics and technology [30,31], studies such as Dyment's [30] have shown that teachers primarily use outdoor activities to teach science or physical education. Continuous professional development is essential to overcome these obstacles and effectively integrate STEAM activities from preschool onwards [28]. Tulling et al.
[32] also emphasised that teacher education should pay more attention to using outdoor learning in daily teaching practice. Therefore, it is important to understand teachers' perceptions of the challenges and opportunities of these approaches. Numerous studies have addressed this aspect regarding outdoor learning with pre-service teachers [33] and in-service teachers [34][35][36][37][38][39][40][41][42][43]. However, only a limited number of studies have delved into understanding teachers' perceptions of outdoor learning for STEAM education [2,4,44], with the majority focusing on early childhood education contexts. Thus, to advance knowledge in this field, the main aim of this research is to describe kindergarten, elementary, and middle school teachers' perspectives regarding STEAM outdoor education. To accomplish this, two research questions were formulated: 1. How do teachers perceive the pedagogical use of outdoor spaces? 2. What specific activities do teachers implement in outdoor spaces, particularly within the framework of the STEAM outdoor education approach? Materials and Methods In the present study, a mixed methods approach with a sequential exploratory strategy was applied to the empirical research process [45]. This methodological option is not only particularly suited to the development of new survey instruments, but also assumes that the interaction between methods will provide better analytical opportunities and more robust answers to the initial questions [46]. However, it is recommended that themes or issues are shared between the different techniques, thus ensuring the unity of the research design and increasing the level of their integration [45]. For this study, the sample consisted of a school cluster from the city of Santarém (Portugal), including one middle school and 10 kindergarten and elementary schools. Data were collected in two main phases: (1) semi-structured interviews; and (2) the development and distribution of a self-administered online questionnaire. The first phase of the research consisted of eight semi-structured interviews with kindergarten (3-5 years old), elementary (6-9 years old), and middle school teachers (10-12 years old) from the school cluster. The interviews were conducted in accordance with a pre-established protocol (see Appendix A.1), which encompassed two distinct dimensions: (1) perceptions of the use of outdoor space and (2) teaching practices within an outdoor space. Dimension (1) encompasses the attitudes, beliefs, and experiences of individuals regarding the utilisation of outdoor space. It aims to understand the viewpoints of educators about outdoor space and its role in their daily activities. Dimension (2) seeks to capture how outdoor spaces are used for educational purposes and their effectiveness in enhancing pupils' learning experiences. It focuses on the strategies and activities used by educators when teaching outdoors.
A purposive selection method was employed for the interviewees, with the intention of ensuring that a representative sample of the population was included. All the teachers who participated in the interviews had volunteered to do so. The interviews were conducted in person, with two kindergarten teachers, two elementary teachers, and four middle school teachers (two of whom were working in elementary schools at the time of the interview: T5 and T6). One of the selected middle school teachers was also the director of the school cluster (T8). The average duration of the interviews was 40 min, and they were audio-recorded with the consent of the participants. Table 1 summarises the biographical data of each participant. Most of the interviewed teachers are based in schools located in the urban area of the city, with only two teaching in schools situated on the outskirts of the urban area (peri-urban). Following the transcription of the interviews, a thematic content analysis was conducted. The Braun and Clarke [47] six-phase framework was employed for the thematic analysis. The six phases were as follows: (I) Familiarisation with the data, which involved a deliberate immersion in the data to become familiar with its content in depth and breadth. This immersion involved repeated readings of the data in an active search for meanings and patterns. (II) The generation of initial codes. The codes identified an aspect of the data (either latent content or semantic content) that initially seemed to be of interest to the researcher. The coding was conducted manually, and interesting aspects were identified that could form the basis of repeated patterns (themes). The coding process enabled the data to be organised into groups that brought together meanings. (III) The search for themes. At this stage the various codes were categorised into potential themes. The researcher began by analysing the codes and considering how different codes could be combined to form an overarching theme. (IV) A review of the themes. The primary objective of this phase was the refinement of the themes. The refinement process entailed considering the data contained in the themes in a manner that would demonstrate a commonality between them, while maintaining clear distinctions between each theme individually (internal homogeneity and external heterogeneity). (V) The definition of the themes. At this stage, the essence of what each theme deals with was identified, namely, what aspect of the data each theme captures. And (VI) the write-up, a concise, coherent, and logical description of the data story, with sufficient data extracts to demonstrate the prevalence of the themes. The findings of this analysis are presented in this paper and were utilised not only in the development of the research instrument employed in the subsequent phase, but also to help elucidate the quantitative results.
In the second phase of the research, a questionnaire was developed using the themes that emerged from the interviews and relevant theoretical frameworks [38,48]. In addition to characterising the participants, the questionnaire was organised into the following dimensions: (1) perceptions of the pedagogical use of outdoor spaces, including characterisation, opportunities, and challenges; and (2) outdoor activities, including the types of activities carried out by teachers in outdoor spaces. Regarding the sub-dimension of characterisation, the objective was to evaluate outdoor spaces based on several key characteristics, including their overall appearance, size, accessibility, security, and the availability of resources. The opportunities sub-dimension refers to teachers' perceptions of the potential for pedagogical use and enhanced learning offered by the school's outdoor spaces. The challenges sub-dimension identifies potential difficulties in the pedagogical use of outdoor spaces and explores the challenges faced by teachers, such as institutional constraints, lack of resources, or safety concerns that may affect the use of these spaces. Finally, the types of activities sub-dimension focuses on the activities that teachers undertake in the school's outdoor spaces. These activities include STEAM experiences, research projects, playful activities, and other pedagogical practices that can be implemented outdoors (for the components in each sub-dimension, see Appendix A.2). The questionnaire consisted of 10 questions to characterise the participants and 70 closed questions (single or multiple choice) with the options "other(s)" and "which one" to allow the collection of new opinions and to avoid conditioning the answers. In 50 questions a 5-point Likert scale was used, ranging from 1 (strongly disagree) to 5 (strongly agree), to indicate the degree of agreement with certain statements. The use of a five-point Likert scale is supported by the notions that (i) as the number of points on the scale increases, the complexity of the respondent's choice and the discrimination between each response option increases [49]; (ii) scales with few response categories may not allow for sufficient discrimination between respondents' opinions [50]; and (iii) five-point scales are sufficient, as no gain in reliability has been observed for scales with more than five points [51]. In 12 questions, the scale ranged from 1 (never) to 5 (always) to determine the frequency of teachers' practice. In addition, three open-ended questions were included, which were not mandatory given the exploratory nature of the research.
The questionnaire was designed to be self-administered online using SurveyMonkey©. Once developed, its content and form were validated by considering the following aspects [52]: (1) the opinion of a panel of experts with recognised work in the field of outdoor and/or STEAM education (educational practice and academic research); and (2) a pre-test with volunteer educators/teachers, carried out by sending the link to the questionnaire (https://pt.surveymonkey.com/r/CNSVSVH, accessed on 19 June 2024) by electronic mail, accompanied by an informative text asking for feedback and suggestions on the questionnaire. The questionnaire was reformulated according to the feedback received, and the final questionnaire was distributed online to all kindergarten, elementary, and middle school teachers in the selected group of schools. Out of a total of 166 teachers in the school cluster, 49 responded to the questionnaire (40 females, nine males, with an average age of 54 years), distributed as shown in Table 2. Despite the low response rate, which persisted even after a second round of email communication to boost participation, the data collected were integrated with a more in-depth analysis derived from the interviews. In terms of academic background, most participants have a bachelor's degree (77.6%), followed by those with a master's degree (18.4%). Only a small percentage hold a doctorate (2%). About 30.6% of the participants have a postgraduate degree or specialisation. Among those with a postgraduate qualification, special education (60%) is the most reported specialisation. In terms of teaching experience, the sample ranged from 15 to 44 years, with an average of 30.18 years (SD = 8.06). In terms of the level of education they teach, the most common level is elementary (42.9%), followed by middle school (38.8%) and kindergarten (28.6%). To analyse the statistical significance of the results, we used the binomial test (SPSS, version 20.0.2.0, IBM Corp., Chicago, IL, USA), splitting the answers into two categories: one comprising 'totally disagree/disagree' and 'neither agree nor disagree' (cat. ≤ 3), and the other including 'totally agree/agree' (cat. > 3). The deliberate choice to include the answer 'neither agree nor disagree' in the first category is justified by its clear distinction from the set of answers that show total or partial agreement. In fact, these last two categories of answers provide a more robust and reliable picture of the trend observed. We opted, however, to keep the 'neither agree nor disagree' category in the survey to avoid an excessive polarisation that would only include more extreme responses, without covering an intermediate level of intensity in the respondents' opinion. However, the priority was to ensure the statistical significance of the strongest categories of agreement, which justifies the decision to set the cut-point at the separation between the first three categories and the other two. According to Marôco [53], this so-called "dichotomisation function" is particularly useful when the variable under study has more than two classes (five classes in this case), and it makes sense to define the limit at which the observations fall into one of the two classes. The open-ended questions were subjected to a categorical content analysis based on the categories that emerged from the data.
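To make the dichotomised binomial test above concrete, the following sketch reproduces the procedure with scipy rather than SPSS (a substitution made purely for illustration), using the reported figure of 41 of 49 teachers in the 'agree/strongly agree' (cat. > 3) category for the size of the outdoor spaces. Whether the original analysis was one- or two-sided is not stated, so the one-sided variant is assumed here.

```python
# Illustrative binomial test: is the proportion of 'agree/strongly
# agree' answers (cat. > 3) significantly higher than 50%? scipy
# stands in for the SPSS routine used in the study.
from scipy.stats import binomtest

n_respondents = 49   # teachers who answered the item
n_agree = 41         # answers in the 'agree/strongly agree' category

result = binomtest(n_agree, n_respondents, p=0.5, alternative="greater")
print(f"{n_agree}/{n_respondents} agree; p = {result.pvalue:.4f}")
```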
Results This section presents the results of the interviews and the teacher questionnaire. It is organised according to the sub-dimensions of the questionnaire, namely, the characterisation of the school's outdoor spaces, the pedagogical opportunities of outside school spaces, the challenges of outside school spaces, and outdoor activities. Characterisation of the School's Outdoor Spaces (a) Analysis of the interviews The school director (T8) provided a comprehensive overview of the school cluster, which is made up of 10 schools: the main school, which is a middle school, seven elementary schools, and two kindergartens. The middle school has 4.5 hectares of woods, landscaping, activity yards, sports areas, and semi-sports areas. These include ball fields without regular dimensions or markings, or any other limitations, where the children play freely. The school director highlighted that this school is ideally suited to the pursuit of scientific and technological activities, given its ample 4.5 hectares of land. This allows for the implementation of a wide range of educational programmes and initiatives. The external environment is open to the possibility of undertaking projects and obtaining funding, with the objective of ensuring that educational provision is aligned with current realities. This may involve teaching classes and pupils of all kinds, with a particular focus on the STEAM curriculum. Regarding the other schools of the cluster, not all elementary schools have large outdoor spaces and trees. Some schools lack adequate outdoor space, with limited landscaping and organised areas. This can result in a lack of dedicated space for specific activities, which can ultimately lead to a lack of structured learning opportunities. (b) Analysis of the closed-ended responses to the questionnaire The evaluation of the outdoor spaces in the school cluster reveals a predominantly positive view on the part of the teachers. A total of 65.3% (n = 32) agree that these spaces are well maintained and pleasant. Furthermore, a clear majority of 83.7% (n = 41) consider that the size of these spaces is appropriate for the number of users, while 71.4% (n = 35) state that they are accessible to all children/young people, including those with special needs. It is also important to note that 63.3% (n = 31) consider the safety conditions in outdoor spaces to be adequate. However, only about half of the teachers, 55.1% (n = 27), agree that there are areas in the outdoor spaces that provide protection from the sun, rain, or bad weather, compared to 44.9% (n = 22) who disagree or have no opinion. Finally, a substantial number of teachers, 32.7% (n = 16), do not have a definite opinion on the presence of material resources accessible to and manipulable by children/young people, while 20.4% (n = 10) mention the absence of these resources. On the other hand, almost half, 46.9% (n = 23), recognise that outdoor spaces are equipped with such resources.
After carrying out an inferential analysis of statistical significance, it can be concluded that for the characteristics 'overall appearance', 'dimension', 'accessibility', and 'safety' (the latter refers only to adequate safety conditions), the percentage of teachers who chose the categories 'agree' and 'strongly agree' (cat. > 3) is significantly higher than 50% (p < 0.05, N = 49) (Appendix B.1). This means that these features are significantly recognised by most teachers. Regarding outdoor areas that provide protection from the sun, rain, or bad weather, there is a clear division between the responses given, which may be influenced by the school where each respondent teaches. The percentage of individuals who acknowledge that outdoor spaces have material resources that are accessible and manipulable is not significantly different from those who do not express an opinion or who disagree (p = 0.560 > 0.05, N = 49). Pedagogical Opportunities of Outside School Space (a) Analysis of the interviews All teachers refer to the possibility of using outdoor spaces as a learning context in all content areas. The most common idea among kindergarten, elementary, and middle school teachers is that the outdoors is a more challenging space, where well-being is more often observed and which has a greater potential for a connection with nature, as demonstrated by the following statement: "It creates more relaxation because they are in a more informal space and can enjoy everything that the outside environment offers them, the breeze, the sounds, the colours and the opportunity to move more freely" (T4). Elementary and middle school teachers refer to the opportunity for children to change their posture, release energy, and move their bodies in a context of greater freedom. But it is also a context in which children are better able to control their posture and regulate their behaviour. Although almost all of them mention the possibility of working in all the content areas in which they teach, some teachers highlight the areas of expression (physical-motor, dramatic, plastic) as the ones that are most worked on in an outdoor space. This is because the outdoors allows the development of skills that are sometimes not worked on indoors, such as motor skills, autonomy, and socialisation. When referring to practices that can be adopted to create meaningful learning activities in the outdoor environment, all teachers mention practical and sensorial activities, and exploratory, interdisciplinary, and playful activities. They highlight examples that are related to their teaching area or experience, as the following statements show: "Children have a great need for concreteness, so going outside to measure lengths and widths to calculate areas makes it easier because it's more realistic" (T5); or "Outside we do things that are more practical, more playful, through games and group work, like a Peddy-paper with numerical operations" (T3).
(b) Analysis of the closed-ended responses to the questionnaire Regarding the pedagogical opportunities afforded by the school's outdoor spaces, it is notable that a significant majority of respondents express a positive view. A total of 61.2% (n = 30) of teachers indicated that these spaces are conducive to the development of a variety of activities that stimulate different skills, including scientific, performative, sporting, and health activities, among others. Furthermore, 71.4% (n = 35) of respondents agreed that they allow interdisciplinary activities and work to be carried out. Finally, 61.2% (n = 30) of respondents agreed that they promote the development of inclusive environments. Furthermore, 67.3% (n = 33) agree that they facilitate contact with nature, offering garden spaces, forests, vegetable gardens, and areas for observing local fauna and flora. Additionally, 64.6% (n = 31) agree that they stimulate students sensorially, promoting their development at both a cognitive and an emotional level and encouraging exploration and curiosity. Finally, a significant majority, 81.3% (n = 39), consider that these spaces allow collaborative activities to be carried out. In terms of the potential of outdoor spaces, carrying out 'interdisciplinary activities and work', 'contact with nature', and the promotion of 'collaborative activities' were the responses that showed statistically significant proportions of agreement from the teachers (p < 0.05, N = 49 and N = 48, respectively) (Appendix B.2). Challenges of Outside School Spaces (a) Analysis of the interviews When asked about the reasons for not using outdoor space, some respondents indicated that this space is not always equipped with the resources necessary to facilitate multiple learning activities. Weather conditions are one of the factors mentioned for not carrying out outdoor activities. Teachers also note that the organisation and management of the curriculum, involving multiple teachers contributing different components and its fragmentation into different subjects, makes it more difficult to manage the time for outdoor activities (which teachers think take more time). External constraints such as an extensive curriculum and inefficient leadership were also referred to by the teachers. Teachers also highlight the use of strategies such as the need to "write everything in the notebook", which can limit the potential for going outside.
The responses of two of the teachers interviewed express some of the limitations felt in the use of outdoor space: "Sometimes we lack ideas on how to delimit or define practices that are interesting for the students and that include the content we want to address. We do not have much time to think about these things. The curriculum is extensive, the outdoor activities are a bit out of our control, and we often avoid them. The students are very used to a typical way of teaching and learning, and often when you do something different, they do not understand very well what you want them to do, so going outside is also a difficulty." (T5) "Being outdoors requires more preparation; in the classroom they arrive and already know their space and the rules, but when they go to that space the rules change, some children overflow, and there has to be more preparation for everything to go well. If there is a routine, they begin to understand the dynamics and what is expected, but if it is occasional, then it requires more preparation. I have not received any training in this area, and I am not sure how to implement these practices without first observing how they are carried out or having colleagues who are interested in these dynamics." (T6) The school director's perception of this issue is that society's and parents' view of learning is still very 'traditional', and there is some resistance from parents if they do not identify with what has been taught. Parents are a problem when it comes to innovation; they are very resistant to working differently, whether in the classroom or elsewhere in the school. (b) Analysis of the closed-ended responses to the questionnaire When it comes to the factors that limit or inhibit pedagogical practice in outdoor spaces, teachers' opinions vary considerably. For example, 71.1% (n = 32) disagree that normative or regulatory restrictions are an obstacle. On the other hand, there was a clear division of opinion among the respondents regarding curriculum extension. While 40.0% (n = 18) disagreed with the idea that the time spent on activities is difficult to reconcile with the length of the curriculum, 42.2% (n = 19) said that the relationship between time and curriculum length is one of the limiting factors. Regarding the possibility of students' lack of interest, the majority (95.6%, n = 43) disagreed with this hypothesis. Similarly, 86.4% (n = 38) of respondents rejected the impracticality of spaces as an obstacle to outdoor activities. Most respondents (91.1%, n = 41) expressed disagreement or a lack of opinion regarding the assertion that the inexperience of teachers impedes pedagogical practices in outdoor spaces. Specifically, 71.1% (n = 32) of respondents either strongly disagreed or disagreed, 20% (n = 8) neither agreed nor disagreed, and only 8.9% (n = 3) agreed or strongly agreed. Regarding resources, 36.7% (n = 18) agreed that they were a limiting factor, while 63.3% (n = 27) had no opinion or disagreed. More than half of the respondents (53.3%, n = 24) believe that problems accessing the Internet or computer equipment are a limiting factor for outdoor activities. Most respondents (88.9%, n = 40) disagreed or had no opinion on the statement that there are difficulties in controlling risks in outdoor activities. Finally, 51.1% (n = 23) of respondents disagreed that a lack of parental involvement is a limiting factor, while 22.2% (n = 10) had no opinion and the remaining 26.7% (n = 12) conceded that a lack of parental involvement is a limiting factor.
The results of the inferential analysis are presented in Appendix B.3. It was observed that 'normative or regulatory restrictions of the school', 'lack of interest of the children/young people', 'impracticality of the school's outdoor spaces', 'inexperience of the teachers', 'difficulty in monitoring the level of risk involved in the activities', and 'low participation of parents' are the factors with statistically significant proportions of disagreement among the respondents regarding the limitation or inhibition of pedagogical practices in outdoor spaces (p < 0.05, N = 45 and N = 44, respectively). On the other hand, for factors such as 'difficulties in reconciling the time spent on tasks with the length of the curriculum', 'unavailability of suitable materials or resources', and 'problems with Internet access and/or availability of computer equipment', the responses are almost equally divided between those who agree that they limit or inhibit outdoor educational practices and those who disagree (p > 0.05).

Outdoor Activities

(a) Analysis of the interviews

The interviewed teachers presented indoor and outdoor spaces as complementary. They highlighted outdoor space as a pedagogical resource that allows them to work on different topics (e.g., mathematics, science, art, physical education, history, geography, Portuguese language). "Everything I do in the classroom, I can also do outside" (T2) presents outdoor spaces as learning contexts. "...Exploring shadows and light, 'drawing' with natural materials, observing and recording the sounds of the street, tracing the textures of materials, photographing..." (T1) are examples of outdoor learning activities. T3 stated that "the school includes all spaces, whether inside or outside the classroom". She argued that outdoor spaces are suitable for learning Portuguese language or mathematics (e.g., "for Portuguese language, they choose a place to read and then come into the room and we can discuss what they have read"). T4 also considered that in outdoor spaces it is possible to "tell a story; collect elements from nature to make an artistic expression; practise tennis; dance; play games; address aspects related to biodiversity and sustainability, such as building a bird feeder and recording observations on whether the birds have eaten; play traditional games; and promote social learning and encounters with history and knowledge of cultural heritage" (T4). She also added that with outdoor interdisciplinary activities, the students' learning process is more meaningful because the knowledge is articulated: "...when we got a dozen outside, we did an activity with sticks..., when they [the students] had ten sticks they grouped them with an elastic band. This was so significant for them that they all carried ten sticks throughout the year" (T4). T5 shared that it was easier for him to use outdoor space to work on the area of expression (e.g., mural painting), and T6 talked about expression, mathematics, and the Portuguese language: "...we go outside to draw something we see... mathematical content can be perfectly worked outside, from measurements, areas, geometric shapes... starting from an external element to construct a text". T7, a mathematics and science teacher, stated: "we go to the patio to calculate areas and perimeters, to study the aquatic environment, the terrestrial environment...
certain insects, see the little bees". Before the COVID-19 pandemic, T8 shared as school director that "every year all the school classes came to plant a tree... [currently] the children in the fifth grade come here and play with stones and sticks and pinecones and have pirate and soldier wars...". As a middle school mathematics and science teacher, she uses the outdoor space "...to calculate the height of a lamp from the shadow that forms a right triangle, to calculate the area of a certain flower bed or a door... in science I teach the plants... so let's go outside and pull the weeds to see the roots, let's go and look for flowers..." (T8).

When it comes to the use of technology outdoors, the most common answer is the possibility of taking a photo or a sound recording, which respondents define as a technological intervention. Teachers mention that in middle school the mobile phone is the easier resource, as the computer is more difficult to use outside.

(b) Analysis of the closed-ended responses to the questionnaire

Among the outdoor activities with statistically significant differences between frequencies (binomial test, p < 0.05), the lowest frequency (occasionally, rarely, or never) was observed for the following: the resolution of real and relevant problems (76.6%); scientific research/exploration (79.2%); the development of models or prototypes to solve problems (85.4%); and activities involving the local community (66.7%) (Appendix B.4). A slightly better scenario was observed for curricular articulation involving two or more STEAM areas, which occurs occasionally in 49% of cases and frequently in 24.5% of cases.

The trend is reversed for physical and motor activities, such as games or physical exercises, with 70.2% of respondents reporting frequent use. The same pattern is observed for activities that stimulate emotional well-being, with identical results.

According to the results of the binomial test, there is no significant tendency (p > 0.05) in collaborative work. This suggests that opinions are divided between those who frequently use the space outside the school for collaborative work (46.9%), those who use it occasionally (40.8%), and those who rarely or never use it (10.2%). The same trend is observed for activities promoting healthy and sustainable eating and for activities related to the creation/maintenance of school gardens.

(c) Analysis of the open-ended responses to the questionnaire

The data analysis reveals a variety of pedagogical practices in school settings, although not all of them correspond to STEAM activities as identified by the teachers. A notable proportion of participants report never having carried out outdoor activities (36% in kindergarten, 42% in elementary school, and 65% in middle school). Some participants mention doing interdisciplinary activities outdoors (46% in elementary school, 35% in middle school), although few explicitly mention STEAM activities. There is also a tendency towards interdisciplinarity in the activities mentioned, with a particular emphasis on worldly knowledge, environmental studies, and the arts. In kindergarten, there is a very strong tendency to develop inter- or multidisciplinary outdoor activities, although very few correspond to STEAM activities.
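The proportion tests reported above and in the appendices are two-sided binomial tests against an even split. The sketch below is a minimal illustration of this kind of test, not the authors' actual analysis script; the counts used (39 agreements out of 48 responses, from the collaborative-activities item) are taken from the results reported earlier.

```python
# Minimal sketch of the two-sided binomial test used for the questionnaire
# items (Appendices B.2-B.4): does the proportion of agreeing teachers
# differ significantly from 50%? Not the authors' actual analysis code.
from scipy.stats import binomtest

agree, n = 39, 48  # e.g., agreement that outdoor spaces enable collaborative activities

result = binomtest(agree, n, p=0.5, alternative="two-sided")
print(f"agreement = {agree}/{n} = {agree / n:.1%}, p = {result.pvalue:.4g}")
# A p-value below 0.05 marks a statistically significant majority, which is
# how 'statistically significant proportions of agreement' is read in the text.
```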
Teachers' perceptions of the pedagogical use of outdoor spaces

The results suggest that, despite considerable diversity in the outdoor spaces among the schools in the cluster, ranging from expansive green areas to more limited ones (T8), teachers hold a generally positive view of their schools' outdoor environments. Characteristics such as 'overall appearance' (e.g., the presence of natural elements), 'dimension', 'accessibility', and 'security' (the latter referring only to adequate security conditions) are significantly recognised by most respondents. These characteristics may be related to many of the potential outdoor pedagogical activities identified by respondents. For instance, the presence of 'natural elements' in the school's outdoor space aligns with the prevailing notion among kindergarten and school teachers, as evidenced in the interviews, that outdoor space is a key location for promoting students' well-being and fostering connections with nature. The studies by Bentsen et al. [35] and Ernst [38] also revealed teachers' preferences for outdoor spaces with natural elements. The notion of children's well-being is also evident in Erdem's study [37], in which a significant proportion of kindergarten teachers expressed the view that outdoor activities that provide students with opportunities to engage with nature promote children's cognitive, physical, social, and emotional development, strengthen their immune systems, and reduce their risk of illness. On the other hand, the presence of appropriate 'dimensions' and 'safety conditions' in outdoor spaces is in line with the concept of children's well-being, which was discussed by the elementary and middle school teachers in relation to the possibility of children changing their posture, releasing energy, and moving their bodies in a context of greater freedom. These results differ from those obtained in the study by Atmodiwirjo [34], in which the limited use of outdoor spaces was essentially due to restrictions on students' access to and engagement with the physical aspects of the school grounds for various reasons of health, safety, and aesthetics.

Other pedagogical opportunities identified in the interviews arise from the implementation of a vast array of practices in outdoor spaces that can facilitate meaningful learning. Interviewed kindergarten and school teachers referred to practical and sensorial activities, as well as exploratory, interdisciplinary, and playful activities. The results of the questionnaire corroborate the identified opportunities by highlighting 'contact with nature', along with 'interdisciplinary activities and work' and 'collaborative activities', as the opportunities with statistically significant proportions of agreement. Similarly, the Atmodiwirjo study [34] demonstrated that both teachers and directors emphasised the value of outdoor learning in natural settings as an authentic learning environment conducive to experiential learning. The same idea is evident in the beliefs of the primary school teachers interviewed in Winje and Løndal's [44] study, who emphasised that taking students out of the classroom and into real-life settings enhances their learning by bridging the gap between the school curriculum and first-hand experiences. Additionally, Tuuling et al.
[32] found that teachers placed great importance on children being active and engaging all their senses in the outdoor learning process. However, they also highlighted the integration of subject areas, as mandated by the national preschool curriculum, as a crucial factor for outdoor learning. The integration of different subject areas is also a key concept highlighted by the teachers in this study, who selected interdisciplinary activities as one of the pedagogical opportunities arising from the use of outdoor spaces.

If the characteristics of outdoor spaces appear to be related to the pedagogical potentialities identified, the absence of some characteristics may be related to the challenges identified. For example, regarding areas of outdoor spaces that afford protection from the sun, rain, or bad weather, there is a clear division in the participants' responses to the questionnaire. When questioned about the reasons for not using outdoor space, the interviewed kindergarten and school teachers identified weather conditions as a factor preventing activities from being carried out outside. In Erdem [37], Ernst [38], and Tuuling et al.'s [32] studies, teachers noted that parents were reluctant to have activities conducted outdoors during cold weather due to concerns that their children might fall ill. Among the other reasons for not organising regular outings, the teachers often cited the lack of a suitable environment.

Another feature of outdoor space on which respondents to the questionnaire did not reach a consensus was the presence of material resources accessible to and manipulable by children and young people. Scarce resources could affect outdoor activities, and schools may also be wary of potential legal liabilities, making it simpler and less risky to keep students indoors. The lack of necessary tools for outdoor activities and safety concerns were also mentioned in teachers' responses in Tuuling et al.'s [32] work.
Again, when discussing the reasons for not using outdoor space, some of the interviewed teachers stated that this space was not always rich enough to enable multiple kinds of learning. Nevertheless, the interviews also revealed other reasons for not using outdoor spaces that are not directly related to the characteristics of the space itself. Elementary and middle school teachers also indicated that the organisation and management of the curriculum, with multiple teachers and fragmentation across different subjects, makes it more challenging to manage the time required to carry out activities outside. The idea that there are many topics to be taught (extensive programmes) and some external constraints (bureaucratic tasks demanded of teachers, leadership) were also among the reasons invoked. When these results are cross-referenced with the respondents' answers to the question on the factors that limit or inhibit pedagogical practices in outdoor spaces, the length of the curriculum emerges as a factor that divided opinion. This discrepancy may be attributed to the teaching cycle in which the respondents are engaged: for those teaching at the elementary and middle school levels, the relationship between time and curriculum length may be a significant limiting factor, whereas for those teaching at the kindergarten level, the length of the curriculum may not be particularly relevant. These perceived barriers resonate with findings from other studies, such as the constraints imposed by school curricula, which leave little room for outdoor learning and not enough time to undertake outdoor learning activities [30,36]. These studies have similarly identified challenges stemming from shortages of time, resources, and support, notably including an increased workload for teachers and administrative barriers within schools. On the other hand, the respondents' answers show that factors such as 'normative or regulatory restrictions of the school', 'lack of interest of children/young people', 'impracticality of the school's outdoor spaces', 'low participation of parents', 'difficulties in monitoring the level of risk of activities', and 'inexperience of teachers' are significantly acknowledged by the majority of respondents as factors that do not prevent educational activities from taking place outdoors. These results differ from those obtained in the study by Tuuling et al. [32], in which the teachers stated that they avoided outdoor learning due to challenges in engaging children and maintaining their focus; they also cited the organisation of group work outdoors as more challenging.

Although the questionnaire responses indicated that the 'inexperience of teachers' does not limit or inhibit pedagogical practices in outdoor spaces, the interviews revealed that elementary teachers perceive this as a challenge to implementing practices in outdoor spaces, as evidenced by the following excerpts: "...Sometimes we lack ideas to delimit or define practices that are interesting for students and that involve the content we want to address..." (T5) or
"...I have not received any training in this area, and I am unsure how to implement these practices without first observing how they are carried out or having colleagues who are interested in these dynamics" (T6). Some of these aspects are also echoed in the literature, such as apprehension regarding the health and safety of young people [30,37,38], as well as teachers' confidence and proficiency in conducting outdoor teaching and learning activities [4,30,34,36,38]. Additionally, findings from van Dijk-Wesselius et al. [7] underscored a strong correlation between a lack of confidence in one's own outdoor teaching expertise and concerns about losing control and managing children's behaviour, suggesting that these challenges could be addressed through training opportunities.

Activities implemented by the teachers in outdoor spaces, particularly within the STEAM outdoor education approach

The findings indicate that a significant proportion of respondents (36% of kindergarten teachers, 42% of elementary school teachers, and 65% of middle school teachers) reported never having conducted outdoor activities. This is not consistent with the findings of Atmodiwirjo's [34] study, where only a small minority of primary school teachers reported never having used the school grounds for learning purposes. When it comes to kindergarten teachers, however, the results are more consistent with the published literature: in Tuuling et al.'s [32] study, more than half of the kindergarten teachers emphasised that they consistently rely on outdoor learning to engage students in a variety of activities and consider it essential.

However, the analysis of the interviews revealed that when teachers organise outdoor learning activities, their objective is to work on specific curricular areas (e.g., plastic and physical-motor expressions, mathematics, sciences) or, in certain instances, to combine different curricular areas in a playful way.

In the first case, the activities described by the teachers allow students to engage in sensory exploration of the outdoors (e.g., observing, listening, and collecting elements of nature), to practise physical activity, or to apply acquired knowledge in real contexts (e.g., mathematical knowledge in determining areas). The analysis of the closed responses to the questionnaire confirms this to some extent: physical and motor activities, as well as activities that stimulate students' emotional well-being, are the activities that respondents reported conducting frequently in outdoor spaces, with statistically significant levels of agreement. Similarly, the results of Atmodiwirjo's [34] study indicated that teachers had in fact used the school grounds for a variety of learning activities in different subjects; in particular, the data indicated that the use of the school grounds to support learning activities in the sciences was of primary importance. The same idea is present in a study conducted in Denmark, where the researchers observed that most teachers engaged in outdoor learning were science and physical education teachers [35]. The outdoor activities shared by participants in this study encompass content from various subjects, offering a broader range of examples compared with Dyment's study [30], which focuses predominantly on science and physical education activities. However, the examples presented by the teachers seem to deviate from a truly integrative STEAM approach [29].
In the second case, the activities described by the teachers typically combine the areas of expression (plastic and physical-motor expression) with other curricular areas, such as mathematics or science (e.g., a treasure hunt with mathematical challenges). The analysis of the open-ended responses to the questionnaire also indicates a tendency towards interdisciplinarity in the activities indicated by the teachers, particularly emphasising worldly knowledge, environmental studies, and the arts. However, a relatively small proportion of the activities described by the teachers align with the STEAM approach. This result is corroborated by the analysis of the closed responses to the questionnaire, which revealed that activities such as curricular articulation (involving two or more STEAM areas), solving real and relevant problems, scientific research/exploration, and developing models or prototypes are the activities that respondents reported doing occasionally, rarely, or never in outdoor spaces, with statistically significant levels of agreement. These types of outdoor activities are consistent with findings from other studies; for example, in Winje and Løndal's [44] study, elementary school teachers described activities that combine physical activity with content that students have previously worked on in the classroom, with the aim of integrating and applying classroom knowledge to real-life situations outdoors. The kindergarten teachers in Tuuling et al.'s [32] study also highlighted how outdoor learning seamlessly integrates subjects such as language, mathematics, and physical education and encourages active student participation. Furthermore, they emphasised the versatility of outdoor learning, which allows for a fluid transition between movement-rich play and educational activities.

Conclusions

The main purpose of this study was to describe teachers' perspectives in a Portuguese school cluster regarding outdoor STEAM education, identifying opportunities and obstacles to its implementation. To this end, a mixed methods approach was employed to analyse the perceptions of kindergarten and school teachers of the outdoor spaces in their schools (characterisation, potential, and challenges) and the pedagogical practices they carry out in these spaces. However, certain limitations must be considered, as they condition the interpretation of the results. As previously stated, the preliminary interviews were employed in two distinct ways. Primarily, they were used to provide qualitative insights to ensure the relevance and effectiveness of the subsequent questionnaire; however, it was not ensured that the participants interviewed were representative of the target population of the subsequent investigation. Secondly, the preliminary interviews were employed as follow-up interviews for the sake of convenience, which did not allow for the clarification of information gleaned from the questionnaire that required more detail for a full understanding. Regarding the questionnaire, the low number of participants (49 out of 166) raises concerns about overall representativeness and representativeness by level of education, which makes it impossible to generalise the results.
However, the sequential combination of the results of the qualitative analyses with the quantitative analyses enabled a deeper and more comprehensive understanding of the phenomenon under study. This analysis indicates that the characteristics teachers perceive in their school's outdoor spaces exert a certain influence on the pedagogical potential identified and the practices carried out. In this sense, the characteristics of the outdoor space sometimes appear as facilitators of outdoor pedagogical activities and sometimes as obstacles to their implementation. Although most participants recognise that the spaces in their schools are adequately equipped for outdoor activities, whether in terms of appearance, size, accessibility, or safety, some participants point to the lack of resources as a justification for not carrying out activities in these spaces. In simple terms, school spaces, which vary considerably from one school to another, generally allow STEAM activities to be carried out outdoors. Notwithstanding the constraints of limited outdoor space, the school director interviewed highlighted the capacity of some teachers to devise creative solutions to overcome these limitations.

The results also suggest that, although the potential of outdoor space for carrying out interdisciplinary activities is widely recognised, its enactment in practice, as well as the implementation of activities related to the STEAM approach, remains residual.

Regarding the pedagogical potential of using outdoor spaces, most teachers agree that outdoor spaces provide contact with nature and promote interdisciplinary and collaborative activities. However, regarding practices, many of the teachers interviewed admitted to using the school's outdoor spaces only occasionally for pedagogical activities, and this use decreased as the educational level at which they taught increased. The most common uses of outdoor spaces relate to physical and motor activities, activities that promote the emotional well-being of children and young people, and, sporadically, some interdisciplinary and collaborative working practices. The main challenges to the pedagogical use of outdoor spaces are the inflexibility and extension of curricula, the need for teacher training in these approaches, the lack of time for joint planning, and the scarcity of adequate materials and resources.

In this context, it can be argued that the implementation of STEAM activities in an outdoor setting requires more than merely the availability of sufficient space and material resources. The development of this type of practice is contingent upon the professional and personal involvement of teachers in fostering consensus and opportunities that reinforce the principles of curricular articulation and interdisciplinary collaboration. Consequently, teacher qualification represents a pivotal factor in the successful implementation of STEAM outdoor education. It is therefore crucial to provide teachers with the requisite knowledge, skills, and opportunities to energise teaching in outdoor spaces in an interdisciplinary manner, thereby capitalising on the full potential of outdoor education. Further research and debate are required regarding the knowledge, experience, and type of training that teachers need to successfully implement STEAM outdoor education.
4.1. Outdoor Spaces: Characterisation, Opportunities, and Challenges
4.1.1. Characterisation of the School's Outdoor Spaces
(a) Analysis of the school director interview
Table 1. Sample of teachers subjected to interviews.
Research and Development of Heat-Resistant Materials for Advanced USC Power Plants with Steam Temperatures of 700 °C and Above

Materials-development projects for advanced ultra-supercritical (A-USC) power plants with steam temperatures of 700 °C and above have been performed in order to achieve high efficiency and low CO2 emissions in Europe, the US, Japan, and recently in China and India as well. These projects involve the replacement of martensitic 9%-12% Cr steels with nickel (Ni)-base alloys for the highest temperature boiler and turbine components in order to provide sufficient creep strength at 700 °C and above. To minimize the requirement for expensive Ni-base alloys, martensitic 9%-12% Cr steels can be applied to the next highest temperature components of an A-USC power plant, up to a maximum of 650 °C. This paper comprehensively describes the research and development of Ni-base alloys and martensitic 9%-12% Cr steels for thick section boiler and turbine components of A-USC power plants, mainly focusing on the long-term creep-rupture strength of base metal and welded joints.

Introduction

Energy security combined with lower carbon dioxide (CO2) emissions is increasingly necessary to protect the global environment in the 21st century. Coal provides abundant, low-cost resources for electric power generation. However, traditional coal-fired power plants have been emitting environmentally damaging gases such as CO2, NOx, and SOx at high levels relative to other electric power generation options, such as nuclear power plants, combined-cycle gas turbines, and so on. The adoption of ultra-supercritical (USC) power plants with increased steam parameters significantly improves efficiency, which reduces fuel consumption and the emission of environmentally damaging gases. The present USC power plants, with steam temperatures at around 600 °C, utilize martensitic 9%-12% Cr steels for thick section components such as main steam pipes and headers in boilers and for turbine rotors, and high-strength austenitic steels for superheater tubes [1]. Martensitic 9%-12% Cr steels such as ASME Gr. 91 (9Cr-1Mo-0.2V-0.05Nb), Gr. 92 (9Cr-0.5Mo-1.8W-VNb), and Gr. 122 (11Cr-0.4Mo-2W-1CuVNb) offer the highest potential to meet the required flexibility for USC power plants, because of their smaller thermal expansion and larger thermal conductivity as compared with austenitic steels and nickel (Ni)-base alloys.
Materials-development projects for advanced ultra-supercritical (A-USC) power plants with steam temperatures of 700 °C and above have been performed in order to achieve high efficiency in Europe (the AD700 project initiated in 1998 [2], the COMTES700 project [3,4], the GKM HWT II project [5], the ENCIO project [6], etc.), in the US (the US DOE/OCDO A-USC project), and in Japan, China, and India. The US project aims at a steam temperature of 760 °C (1400 °F) and a pressure of 35 MPa, while the other projects in Europe, Japan, China, and India aim at a steam temperature of 700 °C. These projects all involve the replacement of martensitic 9%-12% Cr steels with Ni-base alloys for the highest temperature boiler and turbine components in order to ensure sufficient creep strength. It should be noted that Ni-base alloys are much more expensive than ferritic/martensitic steels. To minimize the requirement for expensive Ni-base alloys, martensitic 9%-12% Cr steels can be applied to the next highest temperature components of A-USC power plants. Therefore, it is very desirable for martensitic 9%-12% Cr steels to be developed with an increased application temperature range, from their current maximum of 610-620 °C up to 650 °C.

This paper comprehensively describes the research and development of Ni-base alloys and martensitic 9%-12% Cr steels for thick section boiler and turbine components of A-USC power plants. Particular attention is paid to technical issues regarding the use of Ni-base alloys in high-temperature thick section components of A-USC power plants.

Creep strength required for power-plant steels and alloys

High-temperature components such as power-plant boilers are designed using an allowable stress under creep conditions, which is usually determined on the basis of the 100 000 h creep-rupture strength at the operating temperature, and sometimes also the 200 000 h to 500 000 h creep-rupture strength [14]. For instance, the 100 000 h creep-rupture strength is defined as the stress at which creep rupture occurs at 100 000 h. In the elevated-temperature creep region, the allowable stress in ASME Section II (i.e., Section II of the Boiler and Pressure Vessel Code of the American Society of Mechanical Engineers (ASME)) is determined by several factors: 100% of the average stress to produce a creep rate of 0.01%/1000 h (= 10^-5 %/h), 67% of the average stress to cause rupture at the end of 100 000 h (below 815 °C), and 80% of the minimum stress to cause rupture at the end of 100 000 h [15]. An evaluation of the stress required to produce a minimum creep rate of 10^-5 %/h and the stress required to cause rupture at the end of 100 000 h for a number of ferritic and austenitic steels and Ni-base and Co-base alloys, using long-term creep and creep-rupture data in the National Institute for Materials Science (NIMS) Creep Data Sheets, showed that the ASME allowable stress was determined by the creep-rupture data and not by the creep-strain rate data [16]. Therefore, the deciding criterion for the creep resistance of power-plant steels and alloys is usually the 100 000 h creep-rupture strength at the operating temperature. The target value for the 100 000 h creep-rupture strength of base metal is usually 100 MPa at the operating temperature.
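As a concrete reading of the selection rule just described, the following minimal sketch takes the three ASME-style quantities as inputs and returns their governing minimum. The numerical values in the example are hypothetical placeholders, not data from this paper; in practice they come from assessed multi-heat creep and creep-rupture data.

```python
# Minimal sketch of the ASME Section II allowable-stress logic described above.
# The input values are hypothetical; they are normally obtained from assessed
# long-term creep and creep-rupture data for the specific steel or alloy.

def allowable_stress(s_avg_creep_rate: float,
                     s_avg_rupture: float,
                     s_min_rupture: float) -> float:
    """Allowable stress (MPa) in the creep regime, taken as the minimum of:
    - 100% of the average stress giving a creep rate of 0.01%/1000 h,
    - 67% of the average stress to rupture at 100 000 h (below 815 degC),
    - 80% of the minimum stress to rupture at 100 000 h."""
    return min(1.00 * s_avg_creep_rate,
               0.67 * s_avg_rupture,
               0.80 * s_min_rupture)

# Hypothetical example (MPa) at some operating temperature:
print(allowable_stress(120.0, 150.0, 130.0))  # 100.5 -> 67% rupture criterion governs
```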
Critical issues for the long-term safe operation of candidate Ni-base alloys and martensitic 9%-12% Cr steels for A-USC power plants include oxidation resistance in steam as well as the long-term creep-rupture strength of base metal and welded joints. Resistance to strength loss, such as Type IV cracking in welded joints, is a serious issue for welded thick section boilers as well as for welded turbine rotors. Furthermore, the thermal-cycling capabilities of thick section components in A-USC power plants would be severely restricted by creep-fatigue damage. The discontinuous or flexible operation mode of A-USC power plants, including daily start-up in the morning and shut-down at night, requires good thermal flexibility of thick section components, namely low thermal expansion, high thermal conductivity, and sufficient resistance to creep-fatigue damage.

Traditionally, the development of Ni-base alloys with higher creep strength than ferritic and austenitic steels has been driven mainly by gas turbine applications. Some gas turbine alloys are now candidates for the highest temperature components of boilers and turbines in A-USC power plants with maximum steam temperatures of 700 °C and above. The Ni-base alloys used for the main steam pipe in boilers and for turbine rotors are wrought materials, not cast ones. For wrought materials, not only sufficient creep strength but also excellent hot workability and weldability are required. The primary strengthening mechanism of conventional Ni-base alloys is precipitation hardening due to the γ' phase of Ni3(Al, Ti).

Figure 1 shows the temperature dependence of the 100 000 h creep-rupture strength of conventional martensitic 9%-12% Cr steels, austenitic steels, and Ni-base alloys [8,9]. Nominal compositions of some Ni-base alloys for A-USC power plants are given in Table 1 [9]. In Figure 1, some Ni-base alloys, such as Alloys 740, 282, 617, and 230, satisfy the 100 000 h creep-rupture strength of 100 MPa at 700 °C, while no martensitic 9%-12% Cr steel satisfies 100 MPa at 650 °C. The creep strength of Ni-base alloys is correlated with the amount of γ' precipitates Ni3(Al, Ti) in the alloys: the larger the amount of γ', the higher the creep strength.
Figure 2 shows the amount of γ' precipitates in Alloy 740 and Alloy 617, estimated by Toda at NIMS using Thermo-Calc. The amount of γ' is three times larger in Alloy 740 than in Alloy 617 at 700 °C, causing a much higher creep strength in Alloy 740 than in CCA 617, a variant of Alloy 617, as shown in Figure 1. Alloy 740 and Alloy 282 are strongly hardened by a large amount of fine γ' precipitates, as can be expected from their high aluminum and titanium contents. On the other hand, hot working becomes more difficult as the amount of γ' precipitates increases. Table 2 summarizes the key Ni-base alloys under evaluation in the US DOE/OCDO A-USC project, together with some comments on their applicability and limitations [9]. The intended maximum steam temperature in the US DOE/OCDO A-USC project is 760 °C.

Alloy 740 is a γ' (Ni3Al)-precipitation-hardened Ni-base alloy developed for use as SH and RH tubing in A-USC power plants. Due to its excellent creep-rupture strength and corrosion resistance, the consortium also evaluated its use for thick section components such as boiler piping and headers [9]. However, when Alloy 740 became a leading candidate for thick section components envisioned for service at up to 760 °C in the US DOE/OCDO A-USC project, it became evident that some adjustments to the original chemistry would be needed. Alloy 282 was originally developed for gas turbine applications. Due to its excellent high-temperature creep strength, microstructural stability, and fabricability, Alloy 282 has also been found suitable for A-USC power plant applications.

Candidate Ni-base alloys in Europe

Figure 3 summarizes the key Ni-base alloys in European A-USC projects. While Alloy 617 has been widely used in aircraft and land-based gas turbines, typically at temperatures above 800 °C, it is also one of the candidate Ni-base alloys for boiler and turbine components in A-USC power plants, because it offers high resistance to both creep and oxidation. Precipitates of γ' play an important role in hardening the grain interiors of Alloy 617 at around 700 °C, similar to the hardening process in Alloy 740 and Alloy 282. However, the precipitation hardening is much smaller in Alloy 617 than in Alloy 740 and Alloy 282, as can be seen from Figure 2 and Table 1. In the COMTES700 project, a test plant with large components made from Alloy 617B was implemented at the E.ON power plant at Scholven [3,4], demonstrating that the manufacture of such components is possible. The experience from the COMTES700 project was transferred to the follow-up projects, HWT II and ENCIO, which are intended to be the final step before the realization of the first 700 °C demonstration power plant.

Candidate Ni-base alloys in Japan

Table 3 gives the candidate Ni-base alloys for the main steam pipe and turbine rotor of the A-USC project in Japan [10]. In addition to conventional Ni-base alloys such as Alloy 263, Alloy 740, and Alloy 617, a variety of new Ni-base alloys were developed by materials and fabrication companies in Japan for application to A-USC power plants.

HR6W was originally developed for SH tube applications.
Due to its excellent creep and creep-fatigue properties and fabricability, HR6W has also been found suitable for application to thick section boiler components such as main steam pipes and headers in A-USC power plants [20,21]. At first, HR6W was classified as an austenitic steel, but it is in fact a Ni-base alloy, because its Ni concentration is higher than its iron (Fe) concentration, although lower than the Ni concentrations of conventional Ni-base alloys. The strengthening of HR6W comes from the combination of solid-solution hardening due to tungsten (W) and precipitation hardening due to fine M23C6 carbides, fine MX carbonitrides, and the fine Fe2W Laves phase. This is quite different from the primary strengthening mechanism of most conventional Ni-base alloys, which is precipitation hardening due to fine γ'. The heat treatment of HR6W involves only solution annealing, with no aging heat treatment after solution annealing and prior to operation in power plants, which is also quite different from the treatment of conventional Ni-base alloys.

The new Ni-base alloys developed in Japan for A-USC power plants, such as LTES700R, USC141, FENIX700, and TOS1X-2, are modified versions of conventional Ni-base alloys [1]. Table 4 gives the alloy design philosophy for the modification of the conventional Ni-base alloys. LTES700R (low thermal expansion superalloy for 700 °C) was designed to produce a new Ni-base alloy with a low thermal-expansion coefficient, similar to that of 12% Cr ferritic steel, and a high creep-rupture strength of 100 MPa or above at 700 °C and 100 000 h, similar to that of Refractaloy 26 [22,23]. The alloy design philosophy of USC141 is the same as that of LTES700R: low thermal expansion and high creep-rupture strength [24]. FENIX700 (Fe-Ni-X superalloy for 700 °C) is a modified version of Alloy 706 with a higher creep-rupture strength above 650 °C and fewer solidification defects in large ingots [25]. FENIX700 is cheaper than the other Ni-base alloys because of its higher Fe content and hence lower Ni content. TOS1X-2 is a modified version of Alloy 617, made by increasing the Al concentration and adding tantalum (Ta) and niobium (Nb) to enhance the precipitation hardening due to γ' [26]. The addition of Ta and Nb increases the amount of γ' precipitate and retards the precipitation of the undesirable σ phase.

Candidate Ni-base alloys in China and India

Figure 4 shows the candidate Ni-base alloys of the A-USC project in China, along with the ferritic and austenitic steels for pipes and tubes [12]. Alloy 617 and Alloy 740H are candidates for use in the highest temperature parts of pipes and tubes, respectively. Alloy 2984G is an upgraded version of a new Ni-Fe-base alloy, GH2984 (0.06C-19Cr-2Mo-1Nb-0.4Al-1Ti-33Fe-43Ni), which was developed by the Institute of Metal Research of the Chinese Academy of Sciences for application to tubes at temperatures above 650 °C.
Figure 5 shows the creep-rupture data for Alloy 740 base metal and welded joints as a function of the Larson-Miller parameter [27]. Various heats of Alloy 740 base metal, given in Table 5, with different chemistries and different grain sizes, were subjected to creep-rupture testing. The base-metal materials were given the standard aging heat treatment of 760-816 °C for 4-16 h after solution treatment, according to the ASME Code Case. Welded joints were prepared by gas tungsten arc welding (GTAW), gas metal arc welding (GMAW), and hot-wire narrow-groove GTAW (hot-wire TIG). Table 6 provides the relevant welding details and the post-weld heat treatment given to the welded joints [27]. For the Alloy 740 base metal, the 100 000 h creep-rupture strength is evaluated to be 214.1 MPa, 123.7 MPa, and 84.8 MPa at 700 °C, 750 °C, and 800 °C, respectively, by the Larson-Miller parameter method with a constant C value of 19.392, as shown in Figure 5(a). The lower scatter band of the creep-rupture data for the base metal is occupied by the heats with finer grain size, while slightly coarser grain sizes result in average or above-average strength. Tortorelli et al. reported little difference in creep-rupture results between Alloy 740 and Alloy 740H, although Alloy 740H showed significantly greater resistance to detrimental η-phase formation during creep-rupture testing [28].

In Figure 5(b), the creep-rupture data for welded joints with a variety of weld metals and heat-treatment conditions lie between the average strength line of the base metal, shown by the solid line, and the -30% strength line of the base metal, shown by the dotted line. The 30% reduction in stress is equivalent to a weld-strength factor (WSF) of 0.70. The 740GMAW and 740GTAW specimens exhibit a WSF slightly greater than 0.70, but the application of a solution-annealing heat treatment after welding and prior to aging improves this to close to 0.90. The use of the alternative filler metals Alloy 263 and Alloy 282 also improves the WSF, to 0.82 and 0.85, respectively.

For age-hardenable alloys such as Alloy 740, cold working is generally known to be detrimental to creep-rupture strength and creep-rupture ductility at elevated temperature [29,30]. Figure 6 shows the ratio of the creep-rupture life of pre-strained Ni-base alloys to that of specimens without pre-strain, as a function of pre-strain [31]. The alloys were subjected to a pre-strain of 5%-15% at room temperature. Creep-rupture testing was carried out at 750 °C, at 225 MPa for Alloy 740/740H and Alloy 263, at 180 MPa for Alloy 617, at 100 MPa for HR6W, and at 160 MPa for HR35. Alloy 740/740H exhibits little or no effect of pre-strain up to 5%, while the life ratio decreases to 0.5 or below for pre-strains of 7.5% or more. Scanning electron microscope (SEM) observations after creep-rupture testing showed that grain boundaries (GBs) in the Alloy 740/740H specimens without pre-strain were almost entirely covered with precipitates of chromium (Cr) and Nb carbides. On the other hand, a number of precipitate-free zones were observed along GBs in pre-strained specimens of Alloy 740/740H, suggesting a reduction of GB precipitation hardening. Alloy 263 exhibits no effect of pre-strain on the creep life at 750 °C for pre-strains of up to 15%.
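To make the Larson-Miller extrapolation used above concrete, the sketch below computes P = T(C + log10 t) with the quoted constant C = 19.392 and uses the three reported 100 000 h strengths as anchor points for a simple interpolation. The linear fit of log stress against P is a simplifying assumption for illustration only; the actual master curve in the assessment may be of higher order.

```python
# Rough illustration of a Larson-Miller extrapolation for Alloy 740 base metal,
# using C = 19.392 and the three 100 000 h strengths quoted in the text.
import numpy as np

C = 19.392
t_hours = 1.0e5  # design life in hours

def lmp(T_celsius, t=t_hours):
    """Larson-Miller parameter P = T[K] * (C + log10 t[h])."""
    return (T_celsius + 273.15) * (C + np.log10(t))

temps = np.array([700.0, 750.0, 800.0])     # degC
strengths = np.array([214.1, 123.7, 84.8])  # reported MPa at 100 000 h

# Assumed linear master curve: log10(stress) vs. P (illustration only).
slope, intercept = np.polyfit(lmp(temps), np.log10(strengths), 1)

# Interpolated 100 000 h strength at an intermediate temperature:
print(f"~{10 ** (slope * lmp(725.0) + intercept):.0f} MPa at 725 degC")
```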
The fatigue data for Alloy 740H at 700 °C are shown in Figure 7, compared with the data for Alloy 617 and Alloy 263 [32]. Alloy 740H exhibits greater fatigue strength than Alloy 617 and Alloy 263, especially at low strain ranges. The fatigue limit of Alloy 740H is evaluated to be approximately half of the ultimate tensile strength.

The US DOE/OCDO A-USC project consortium has recognized that the tensile and fatigue behavior of Alloy 282 is adequate for application to a 760 °C rotor. The current HP and IP A-USC turbine designs being considered call for a bolted rotor, similar to an industrial gas turbine. In such a design, the highest temperature component is a forged disk. Trial ingots of Alloy 282 were produced via triple melting VIM/ESR/VAR and were planned to be forged into a rotor disc for full property evaluation [19]. VIM, ESR, and VAR are acronyms for vacuum induction melting, electro-slag re-melting, and vacuum arc re-melting, respectively. Because the two-step aging heat treatment after solution annealing for Alloy 282 would pose difficulties, especially for constructing large components in power plants, considerable effort has been directed toward characterizing a one-step aging heat treatment.

Figure 8 shows the results of an assessment based on a multi-heat creep-rupture data set for Alloy 617 at temperatures between 600 °C and above 1000 °C, using the data provided by Krupp, JRC Petten, and Special Metals [33]. A simple model with stress, log(stress), and 1/T terms (known by the acronym SLST) was chosen for the assessment:

log t*u = β0 + β1 σ0 + β2 log σ0 + β3/T (1)

where t*u is the predicted rupture time in hours; σ0 is the stress in MPa; T is the temperature in Kelvin; and β0 to β3 are regression constants determined in the assessment [33]. The 100 000 h creep-rupture strength is assessed to be 179 MPa, 112 MPa, and 68 MPa at 650 °C, 700 °C, and 750 °C, respectively. However, the SLST model did not pass parts of the ECCC Post Assessment Tests, indicating that this model gave a poor fit to the creep-rupture data, partly due to the rather scattered creep-rupture data. The ECCC Working Group 3C reported that the only way to improve the reliability of the rupture assessment for Alloy 617 would be to improve the extent of the creep-rupture data set, especially the extent of long-term data. Therefore, it was proposed that any future assessments of the creep-rupture strength of Alloy 617 should be based on an increased number of heats tested over a wide range of stresses and temperatures.

During operation in COMTES700, some problems with thick-walled Alloy 617B components arose, such as the formation of cracks in a high-pressure bypass valve and in the HAZ of repair welds in a thick-walled steam pipe with a wall thickness of 50 mm [4,34]. Small cracks appeared along GBs in the HAZ of the repair welds.
Three-point bending relaxation tests were carried out at 700 °C in order to understand the relaxation-cracking behavior of Alloy 617B components exposed in COMTES700. A virgin Alloy 617B material subjected to solution annealing exhibited plastic deformation but no cracking during a three-point bending relaxation test, as shown in Figure 10(a) [4]. On the other hand, the service-exposed Alloy 617B from COMTES700, after operation for three years at 700 °C, formed cracks during a three-point bending relaxation test, as shown in Figure 10(b) [4]. Microstructure observations show a series of Cr carbides mainly along GBs and a high density of fine γ' precipitates inside the grains, causing significant hardening inside the grains by γ' and a loss of ductility. The γ' precipitates can be re-dissolved by a heat treatment at 980 °C for 3 h. This heat treatment can reduce the susceptibility to relaxation cracking in Alloy 617B, resulting in no cracks in the base metal or in the repair welds of service-exposed Alloy 617B. However, thick welded joints without this heat treatment broke during three-point bending relaxation tests at 700 °C.

Recent results on the creep strength and microstructure of the γ'-precipitation-hardened new Ni-base alloys developed in Japan, given in Table 3, are reported in Ref. [35] for USC141, Ref. [36] for LTES700R, Ref. [37] for FENIX700, and Ref. [38] for TOS1X-2.

Ni-base alloy with no γ': HR6W

Figure 11 shows the creep-rupture data for HR6W at 650-800 °C, indicating stable creep strength up to long times [20,21]. The 100 000 h creep-rupture strength is estimated to be 88 MPa, 64 MPa, and 46 MPa at 700 °C, 750 °C, and 800 °C, respectively, by the Larson-Miller parameter method. The creep-rupture strength is lower but the rupture elongation is larger in HR6W than in other Ni-base alloys strengthened by γ', such as Alloy 617. Transmission electron microscope (TEM) observations show that the fine precipitates of M23C6, MX, and the Fe2W Laves phase in HR6W serve as effective dislocation barriers.

The creep-fatigue properties of HR6W have been investigated at 700 °C and compared with those of Alloy 617 [39]. Creep-fatigue tests were carried out under strain-controlled conditions at 700 °C, using fast-fast (PP) and slow-fast (CP) waveforms with strain rates of 0.8%/s and 0.01%/s, respectively. The results are shown in Figure 12. Under the PP test condition, the fatigue life is almost the same for HR6W and Alloy 617. However, the fatigue life of HR6W is much longer than that of Alloy 617 under the CP test condition, as a result of the greater creep-rupture ductility of HR6W. SEM observations of the fracture surfaces after the CP test show that intergranular cracking is dominant in Alloy 617, whereas transgranular cracking is partly observed in HR6W. The intergranular cracking in Alloy 617 is attributed to the precipitation hardening inside the grains by fine γ' particles.

In order to investigate the susceptibility to relaxation cracking, slow strain-rate testing (SSRT) was carried out at a strain rate of 1 × 10^-6 s^-1 and a temperature of 700 °C for HR6W, and the results were compared with those for Alloy 617 [40]. HR6W maintains sufficient ductility under these testing conditions, while Alloy 617 exhibits a remarkable degradation in ductility. The results correlate with intergranular cracking in Alloy 617 and with mainly transgranular cracking in HR6W.
With respect to the applicability of HR6W to A-USC power plants, the above results indicate that HR6W has advantages in creep-fatigue properties and resistance to relaxation cracking, while its creep-rupture strength is slightly lower than that of Alloy 617 at 700 °C.

New martensitic 9% Cr steels for low-temperature components of A-USC power plants

4.1 Candidate martensitic 9Cr steels

Figure 13 shows the development progress of martensitic boiler and turbine steels in Japan. The improvement of creep strength in martensitic 9%-12% Cr steels has been achieved by substituting part or all of the molybdenum (Mo) with W and by the addition of cobalt (Co), nitrogen (N), Nb, and boron (B). The total concentration of alloying elements has been gradually increased to improve the creep strength. An increase in the ferrite-forming element W requires a higher content of Co, an austenite-stabilizing element, for the elimination of δ-ferrite. Three high-strength 9Cr steels, MARBN (9Cr-3W-3Co-VNbNB), Low-C 9Cr (9Cr-2.4W-1.8Co-VNb), and SAVE12AD (9Cr-2.9W-CoVNbTaNdN), are candidates for thick section boiler components such as main steam pipes operating at a maximum of 650 °C [10]. MARBN is a martensitic 9Cr steel strengthened by B and MX nitrides, which was alloy-designed on the basis of the stabilization of the martensitic microstructure in the vicinity of prior austenite grain boundaries (PAGBs) [41]. Low-C 9Cr was alloy-designed to stabilize the martensitic microstructure at elevated temperatures by minimizing the Ni and Al impurities as far as possible [42]. The carbon concentration of this steel is reduced to 0.035%, which improves weldability. SAVE12AD contains high B but low N, and is similar to MARBN in this respect [43]. The original SAVE12 contained a high Cr concentration of 12%, but in SAVE12AD the Cr concentration is reduced to 9% to achieve long-term stabilization of the martensitic microstructure [44].

MTR10A (10Cr-0.7Mo-1.8W-3Co-VNbB), HR1200 (11Cr-2.6W-3Co-NiVNbB), and TOS110 (10Cr-0.7Mo-1.8W-3Co-VNbB), as shown in Figure 13, were developed by fabrication companies in Japan in the late 20th century, before the start of the A-USC project, for application to turbine rotors with steam temperatures of 630 °C [45]. These rotor steels were originally intended for use in 650 °C-class USC power plants. At present, however, Japan has no 650 °C-class USC power plant; the rotor steels are therefore ready for the construction of A-USC power plants. MTR10A, HR1200, and TOS110 are martensitic 10%-11% Cr steels containing high W, Co, and B, and are upgraded versions of TMK2, HR1100, and TOS107, respectively.
In Europe, the development and evaluation of martensitic 9%-12% Cr steels for the boilers and turbines of USC power plants has been continued within the framework of the European Cooperation in Science and Technology (COST) programs: COST 501 (1986-1997), COST 522 (1998-2003), and COST 536 (2004-2009) [2,46]. The target temperatures for the steels to be developed were set at 600 °C, 620 °C, and 650 °C in the COST 501, COST 522, and COST 536 programs, respectively. The outcome of COST 522 was the demonstration of the manufacturability of large rotor forgings in FB2 steel (9Cr-1Mo-1Co-0.2V-0.07Nb-0.01B-0.02N), the alloy with the highest potential for 620 °C application. In the COST 536 program, on the basis of the promising composition of FB2, the roles of Nb and Ta in long-term creep stability were investigated using a trial melt, FB2-3Ta (8.9Cr-1.49Mo-1.0Co-0.2V-0.003Nb-0.013B-0.009N-0.08Ta), with higher silicon (Si) for steam oxidation resistance, a changed B/N ratio, the lowest Ni content, and the replacement of Nb with Ta [46]. The results of creep-rupture testing at 650 °C on the trial melt FB2-3Ta suggest that Ta in the chosen concentration is not more effective than Nb in FB2.

Other strategies in Europe include the characterization of a 9Cr steel with the same chemical composition as MARBN and the further optimization of MARBN, which are being pursued in several projects: the UK IMPACT project [47], the MACPLUS project [48], the Energy Materials Working Group (WG2), and EMEP (Engineered Micro- and Nanostructures for Enhanced Long-Term High-Temperature Materials Performance) [49,50]. Their objective is to develop advanced MARBN to enable long-term safe operation at 650 °C. G115, shown in Figure 4, is a martensitic 9Cr steel that was developed in China for pipe applications at 650 °C or below and is now a candidate steel in the A-USC project in China [12,51]. The chemical composition of G115 is 9Cr-3W-3Co-1CuVNbB steel containing 150 ppm B and 140 ppm N, which is similar to MARBN apart from the addition of 1% copper (Cu).

Creep strength and microstructure of a new martensitic 9Cr steel: MARBN

Figure 14 shows the creep-rupture data for the base metal and welded joints of MARBN (containing 120-150 ppm of B and 60-90 ppm of N) at 650 °C, together with those for P92 and P122 [52,53]. MARBN exhibits a much higher creep-rupture strength of the base metal than P92 and P122, as well as essentially no degradation in the creep-rupture strength of welded joints compared with the base metal, indicating no Type IV fracture. Dissimilar welded joints of MARBN/Alloy 617 and MARBN/Alloy 263 also exhibit substantially no degradation in creep-rupture strength compared with the MARBN base metal [54].

Gu et al.
analyzed the creep voids formed in P92 steel after creep exposure [55]. Their analysis revealed that the majority of creep voids were associated with hard inclusions. Chemical analysis of these inclusions showed that the vast majority were BN, although some Al2O3 and MnS particles were also observed. The addition of B and N without the formation of any boron nitride (BN) during normalizing heat treatment significantly improves the creep strength. However, an excess addition of B and N causes the formation of BN during normalizing heat treatment; this formation consumes soluble B and N and hence degrades the creep strength. The formation of BN during normalizing heat treatment also degrades the creep-rupture ductility, as shown in Figure 15 [52,53]. The addition of 300 ppm or 650 ppm of N together with 140 ppm of B significantly degrades the reduction of area of 9Cr steel, because a large amount of BN forms in the steel during normalizing heat treatment. On the other hand, 9Cr steel containing less than 100 ppm of N exhibits an adequate reduction of area, larger than or the same as that of T91. This adequate reduction of area is advantageous to the creep-fatigue life, because the creep-fatigue life is proportional to the reduction of area in creep-rupture testing; that is, it is proportional to the creep ductility but not to the creep strength.

The enrichment of soluble B near PAGBs by segregation is essential for the reduction of the coarsening rate of M23C6 carbides in the vicinity of PAGBs. This enrichment stabilizes the fine distribution of M23C6 carbides at and near PAGBs and enhances GB precipitation hardening for a long time [57].

In welded joints, the addition of B and N without any formation of BN during normalizing heat treatment causes no grain refinement and no Type IV fracture in the HAZ of MARBN. Diffusive α/γ transformation takes place in Gr. 92 during the heating stage of welding, while martensitic α/γ transformation takes place in the 9Cr-B steel. The diffusive transformation, by the nucleation and growth of the γ phase, produces a fine-grained microstructure in the HAZ when the peak temperature is not too high; this fine-grained microstructure implies the production of new GBs. Carbonitrides such as M23C6 also begin to dissolve during heating but cannot dissolve completely when the peak temperature is not too high. The resultant microstructure of Gr. 92 in the HAZ after post-weld heat treatment (PWHT) shows that very few precipitates form along the PAGBs and essentially no lath-block substructure is formed. The production of new GBs and the incomplete dissolution of M23C6 carbides are responsible for the very few precipitates along GBs in the fine-grained microstructure. Very few M23C6 carbides along PAGBs imply a reduction of GB precipitation hardening. The degradation in creep strength of Gr. 92 welded joints is thus caused not by grain refinement in the HAZ but by the reduction of GB precipitation hardening in the HAZ. On the other hand, the GB segregation of B retards the diffusive α/γ transformation during heating, because it reduces the GB energy and makes GBs less effective as heterogeneous nucleation sites for the γ phase [57]. The resultant microstructure of the HAZ after PWHT is substantially the same as the original microstructure, with coarse grains and sufficient M23C6 carbides along GBs. Soluble B is essential for this change in transformation behavior during heating, resulting in no grain refinement and no Type IV fractures.
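The quantitative boundary between adequate and excess B and N additions follows from the BN solubility product quoted with Figure 16 near the end of this paper, log[%B] = -2.45 log[%N] - 6.81. The short sketch below simply inverts that relation; it is an illustrative calculation, not code from the study, and it reproduces the roughly 95 ppm soluble-N limit stated for 140 ppm B.

```python
# Quick numerical check of the BN solubility product for 9%-12% Cr steels at
# normalizing temperatures (Eq. (2), Figure 16): log[%B] = -2.45*log[%N] - 6.81.
import math

def max_soluble_N_ppm(B_ppm: float) -> float:
    """Maximum N content (ppm) that stays in solution for a given soluble B
    content (ppm), i.e., the N level at which BN just starts to form."""
    B_mass_pct = B_ppm * 1e-4                          # ppm -> mass %
    log_N = (math.log10(B_mass_pct) + 6.81) / (-2.45)  # invert the relation
    return (10 ** log_N) * 1e4                         # mass % -> ppm

print(f"{max_soluble_N_ppm(140):.0f} ppm N")  # ~95 ppm, as stated in the text
```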
The formation of a protective Cr2O3-rich scale is achieved on the surface of MARBN by pre-oxidation treatment in argon gas. This treatment significantly improves the oxidation resistance of MARBN in steam at 650 °C [58]. The creep strength and microstructure of the Chinese 9Cr steel G115 are reported by Liu et al. and Yan et al. [12,51].

Summary

A variety of progress has been made in advancing materials technology on Ni-base alloys and martensitic 9%-12% Cr steels to enable A-USC power plants with maximum steam temperatures of 700 °C and above. The US DOE/OCDO A-USC project has led to extensive study on Alloy 740/740H and Alloy 282, which are strongly precipitation-hardened by a large amount of fine γ' particles. The project consortium has identified Alloy 740/740H to be suitable for main steam pipes as well as for SH and RH tubes for long-term service in A-USC power plants with maximum steam temperatures of 760 °C, while Alloy 282 is promising for turbine rotors and discs. After exposure for three years at 700 °C in the COMTES700 project, thick-walled Alloy 617B components exhibited high susceptibility to relaxation cracking during a three-point bending relaxation test at 700 °C. The relaxation cracking is attributed to the precipitation hardening due to fine γ' particles inside the grains. Post-exposure heat treatment at 980 °C for 3 h can re-dissolve the γ' precipitates, which results in no cracks in the base metal or in the repair welds of service-exposed Alloy 617B. In Japan, a variety of new Ni-base alloys were developed for application in A-USC power plants. HR6W with no γ' has advantageous creep-fatigue properties and resistance to relaxation cracking, while its creep-rupture strength is slightly lower than that of Alloy 617 at 700 °C. New martensitic 9%-12% Cr steels, such as MARBN, Low-C 9Cr, SAVE12AD, and G115, were developed in Japan and in China for application to thick-section boiler components at 650 °C and below. MARBN exhibits much higher creep-rupture strength of the base metal than P92 and P122, as well as essentially no degradation in the creep-rupture strength of welded joints compared with the base metal at 650 °C, indicating no Type IV fractures.

Future trends

Heat-resistant steels and alloys with higher microstructure stability exhibit higher long-term creep strength. GB embrittlement induced by impurity segregation and by the formation of harmful phases degrades the creep-fatigue properties as well as the creep strength of Ni-base alloys. Extensive precipitation hardening inside the grains by a large amount of fine γ' particles causes a mismatch of strength between GBs and the grain interiors, which accelerates relaxation cracking and creep-fatigue cracking in Ni-base alloys. Therefore, more effort should be spent on examining Ni-base alloys in order to clarify the mechanisms of microstructure evolution at and near GBs. It is also essential to establish a method to predict the evolution of GB microstructure at elevated temperatures using computational materials science and modern microstructure characterization techniques. Such efforts would contribute to the establishment of advanced Ni-base alloys with the best combination of microstructure at GBs and inside the grains.
Dissimilar welded joints between Ni-base alloys and martensitic 9%-12% Cr steels are inevitably present in both boiler and turbine components of A-USC power plants. Critical issues are the characterization of microstructure near fusion boundaries and in the HAZ, as well as the evaluation of the long-term creep strength of dissimilar welded joints.

Reliable long-term creep-life prediction is another issue for both Ni-base alloys and martensitic 9%-12% Cr steels that needs to be investigated. Much attention should also be paid to incorporating research results on creep-deformation behavior and microstructure evolution in long-term creep while taking into account the predictions made by extrapolating short-term creep-rupture data. Such efforts would contribute to improvements in the reliability of new Ni-base alloys and new martensitic 9%-12% Cr steels for higher temperatures and longer service periods in A-USC power plants. Finally, a scale-up of candidate Ni-base alloy ingots, the

Figure and table captions recovered from the original layout:
Figure 1. 100,000 h creep-rupture strength of some Ni-base superalloys, together with 9%-12% Cr creep strength enhanced ferritic steels and austenitic steels, as a function of temperature.
Figure 3. Candidate Ni-base alloys in European A-USC projects.
Figure 4. Candidate ferritic and austenitic steels and Ni-base alloys for pipes and tubes of the A-USC project in China.
Figure 5. Creep-rupture data for Alloy 740 (a) base metal and (b) welded joints as a function of the Larson-Miller parameter.
Figure 6. Effect of pre-strain on creep-rupture life of Ni-base alloys.
Figure 7. Total strain range for Alloy 740H, Alloy 617, and Alloy 263 at 700 °C versus the number of cycles to failure.
Figure 9. Creep-rupture data for Alloy 617 as a function of the SLST parameter.
Figure 16. Composition diagram of B and N for 9%-12% Cr steels at a normalizing temperature of 1050-1150 °C. The solubility product for BN at these normalizing temperatures is given by

log[%B] = -2.45 log[%N] - 6.81    (2)

where [%B] and [%N] are the concentrations of soluble B and soluble N in mass fraction (%) [56]. At a B concentration of 140 ppm, only 95 ppm of N can dissolve in the matrix without the formation of any BN at the normalizing temperature.
Table 2. Ni-base alloys under evaluation in the US DOE/OCDO A-USC project. Notes: SMAW, shielded metal arc welding; B&PV, boiler and pressure vessel. After annealing at 1149 °C (2100 °F) for 30 min, an aging heat treatment at 760-800 °C for 4-16 h is recommended to enable Alloy 740/740H to form fine γ'-phase particles.
Table 4. New Ni-base alloys developed in Japan for 700 °C A-USC power plants and the alloy design philosophy for the modification of the original Ni-base alloys.
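The 95 ppm figure quoted with Equation (2) can be checked directly. The following short Python sketch (ours, not from the original paper) solves Eq. (2) for the soluble-N limit at a given B content:

    import math

    def soluble_n_limit(b_ppm):
        """Maximum N (ppm) staying in solution for a given B content (ppm), per Eq. (2):
        log[%B] = -2.45 * log[%N] - 6.81, with concentrations in mass %."""
        b_pct = b_ppm / 1.0e4                        # ppm -> mass %
        log_n = (math.log10(b_pct) + 6.81) / -2.45   # solve Eq. (2) for log[%N]
        return 10.0 ** log_n * 1.0e4                 # mass % -> ppm

    print(round(soluble_n_limit(140)))  # ~95 ppm, matching the value quoted for Figure 16

The computed limit of about 95 ppm N at 140 ppm B reproduces the stated boundary below which no BN forms during normalizing, which is why the steels with 300 or 650 ppm N lose creep-rupture ductility while those under 100 ppm N do not.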
Enhanced moving least square method for the solution of Volterra integro-differential equations: an interpolating polynomial

This paper presents an enhanced moving least square method for the solution of the Volterra integro-differential equation via an interpolating polynomial. It is a numerical scheme that utilizes a modified shape function of the conventional Moving Least Square (MLS) method to solve fourth-order integro-differential equations. Smooth orthogonal polynomials have been constructed and used as the basis functions. A robust and unrestricted trigonometric weight function, along with the basis function, drives the shape function and facilitates the convergence of the scheme. The choice of the support size and some controlling parameters ensures the existence of the moment matrix inverse and the MLS solution. A valid explanation and illustration are given for the existence of the inverse linear operator. To overcome problems of near-singularity, the singular value decomposition rule is used to compute the inverse of the moment matrix. A Gauss quadrature rule is used to compute the integral at the initial test points when the exact solution is unknown. Some test problems were solved to show the applicability of the method, and the results obtained compare favourably with the exact solutions. Finally, a highly significant interpolating polynomial is obtained and used to reproduce the solutions over the entire problem domain. The negligible magnitude of the error at each evaluation knot demonstrates the reliability and effectiveness of this scheme.

IDEs are usually difficult to solve analytically, and as such there is a need to obtain an efficient approximate solution. Recently, much interest from researchers in science and engineering has been given to non-traditional methods for non-linear IDEs. The existence-uniqueness, stability, and application of integro-differential equations were presented by Lakshmikautham and Rao [19]. Armand and Gouyandeh discussed IDEs of the first kind in [3], and nonlinear Fredholm integral equations of the second kind were discussed by Borzabadi, Kamyad, and Mehne in [7]. A comparison between the Adomian Decomposition Method (ADM) and the Wavelet-Galerkin Method for solving IDEs was considered in [11]. He's Homotopy Perturbation Method was applied to nth-order IDEs in [12] and [15], and the Tau method was applied to the numerical solution of Fredholm IDEs with arbitrary polynomial bases. Elaborate work on IDEs was discussed in [8,10,13,16,19,22-25,31] and in [21], where Maleknejad and Mahmoudi applied Taylor polynomials to high-order nonlinear Volterra-Fredholm integro-differential equations. The Taylor Collocation Method was applied to linear IDEs in [18] by Karamete and Sezer. In [2], the theory, method, and application of boundary value problems for higher-order integro-differential equations were considered. The Wavelet-Galerkin method and hybrid Fourier and block-pulse functions were applied to IDEs in [5] and [4], respectively. The numerical approximation of nonlinear fourth-order IDEs by spectral methods was considered in [34-38], and in [32] a new algorithm was utilized in solving a class of nonlinear IDEs in the reproducing kernel space. In [30], a comparison between the Homotopy Perturbation Method and the Sine-Cosine Wavelets Method was applied to linear IDEs, while in [29] a new homotopy method was applied to first- and second-order IDEs.
The pseudospectral method using shifted Chebyshev nodes has been proposed for solving IDEs in [28], while [14] applied the Adomian Decomposition Method (ADM) for solving fourth-order integro-differential equations. In [30], the main objective was only to obtain the exact solution to fourth-order integro-differential equations. The ADM in [14] and the variational method in [27] are applied to solve both linear and non-linear boundary value problems of the fourth-order integro-differential equation. In recent years, meshless methods have gained more attention not only from mathematicians but also from researchers in other fields of science and engineering. During the past decades, the moving least square (MLS) method proposed in [20] has become a very popular approximation scheme, especially when considering a mesh-free approximating function. In [17], MLS and Gauss-Legendre quadrature were applied to solve integral equations of the second kind, while [8] utilized MLS with a Chebyshev polynomial as a basis function to solve IDEs, and the basic MLS was adopted in [9] for the solution of IDEs. The works [26] and [27] were on the application of a two-dimensional interpolating function to irregular-spaced data. A second-kind Chebyshev quadrature algorithm was developed for integral equations in [37], while a Chebyshev collocation approach was adopted in the solution of IDEs in [33]. Many IDE methodologies in the literature rely on regular-spaced data; the disordered-spaced data approach of MLS requires considerable computational skill, and this has been a source of attraction to researchers over the years. In this research work, we employ the MLS to solve the fourth-order integro-differential equation. The method is an effective approach for the approximation of an unknown function by using a set of disordered data. It consists of a local weighted least square fit, valid on a small neighborhood of a point, and does not require connectivity information (a mesh) between the data points. The general integro-differential equation considered takes the form

u^(n)(x) = f(x) + λ ∫_{g(x)}^{h(x)} k(x, t) u(t) dt,

where g(x) and h(x) are the limits of integration, λ is a constant parameter, k(x, t) is the kernel of the integral, and u^(n)(x) is as defined in 1.1.1 above. In the nonlinear case, the right-hand side involves F(u(t)) in place of u(t), where F is a real non-linear continuous function, β, α_i, i = 0, 1, 2, 3, are real constants, and g(x), h(x) and f(x) are given. Definition 1.1.5 [6]: If a linear operator L : P → Q has an inverse, then the inverse is linear. This holds since, if L^{-1} exists with domain Q (a vector space), then for any P_1, P_2 ∈ P whose images are q_1 = LP_1 and q_2 = LP_2, we have P_1 = L^{-1}q_1 and P_2 = L^{-1}q_2. L being linear implies that for any scalars α and β we have αq_1 + βq_2 = αLP_1 + βLP_2 = L(αP_1 + βP_2), so that L^{-1}(αq_1 + βq_2) = αP_1 + βP_2 = αL^{-1}q_1 + βL^{-1}q_2. Thus, for Y ∈ Q, there exists X in P such that L^{-1} : Y → X. In this paper, we consider a general nth-order Volterra integro-differential equation of the form

u^(n)(x) = f(x) + β ∫_{g(x)}^{h(x)} k(x, t) F(u(t)) dt,  u^(i)(0) = α_i,  i = 0, 1, 2, ..., n − 1,  (5)

where F is a real non-linear continuous function, β, α_i, i = 0, 1, 2, ..., n − 1, are real constants, and g(x), h(x) and f(x) are given and can be approximated by the Taylor series. When n = 4, Eq. (5) reduces to the fourth-order integro-differential equation with four conditions, as proposed in this paper.

The conventional MLS scheme

This research is aimed at obtaining an efficient method for approximating Volterra integro-differential equations. The method was obtained by introducing an interpolation polynomial in the context of the moving least square method, thereby producing an enhanced form of the approach.
The absolute difference between the true solutions and the approximated solutions obtained from the new approach was used to check how close the results are to the true solutions. This section comprises the basic idea of the conventional moving least square method and its convergence.

Overview of the conventional MLS

Consider a sub-domain Ω_x, the neighborhood of a point X and the domain of definition of the MLS approximation for the trial function at X, which is located in the problem domain Ω. The approximation of the unknown function u in Ω_x over some nodes is

u^h(x) = Σ_{j=0}^{m} p_j(x) a_j(x) = P^T(x) a(x),

where P(x) is the vector of basis functions of the spatial coordinates, P^T denotes the transpose of P, m is the number of basis functions and a(x) is a vector containing the coefficients a_j(x), j = 0, 1, 2, ..., m, which are functions of the space coordinate X. The a_j(x)'s are the unknown coefficients to be determined. The coefficient vector a(x) is determined by minimizing a weighted discrete L2-norm, defined as

J = Σ_{i=1}^{n} w_i(x) [P^T(x_i) a(x) − U_i]^2,

where U = (U_0, U_1, U_2, ..., U_n)^T is the exact solution and w_i(x) is a new trigonometric weight function associated with the node i; n is the number of nodes, the weight function is always positive on [0, 1], and |.| denotes absolute value. The stationarity of J with respect to a_j(x), j ≥ 0, gives

A(x) a(x) = B(x) U,  with A(x) = Σ_{i} w_i(x) P(x_i) P^T(x_i).

Selecting the values of x at the nodal points to ensure a nonzero determinant of A and using the above inverse at each node, Eq. (10) becomes a(x) = A^{-1}(x) B(x) U. A simple Gram-Schmidt algorithm then generates the other orthogonal basis polynomials.

Formulation of the proposed method

We wish to use the MLS method to obtain the numerical solution of (4). Suppose that the four-fold integral operator exists; applying (13) to both sides of (12) yields the working integral form of the problem. To use the polynomials, we change the integral interval from [0, x] to the fixed interval [0, 1] using the translation t = xs, dt = x ds. To apply the method, select the m + 1 basis polynomials with nodal points x_i in [0, 1]. By using Σ_{i=0}^{n} U_i φ_i(x) instead of u(x) as the approximation of u(x) in (15), we obtain a discrete system, which can be written in compact form. Finally, we introduce the use of an interpolating polynomial up(x) at all points in [0, 1]. Calculation of the unknowns requires 2N − 1 knots, 2N − 2 even steps, and the MLS solution u(x) at the evaluation points; the above equations constitute a solvable system of k equations in k unknowns, given z odd knots and N unknowns in the compact system.

Numerical computations

In this section, we use the MLS method to solve integro-differential equations in the interval [0, 1]. All computations were carried out with scripts written in MATLAB 2015. The accuracy of this method is directly proportional to the number of basis functions (m) and the nodal points (n). To compute the integral part at the initial nodes, in the absence of an exact solution, we use a six-point Gauss Quadrature Rule (GQR), which involves the Gaussian nodes. Using v as the number of nodes in the given evaluation points (Ω_x), the initial conditions and j = 2, we estimate the corresponding values of u(x) through the GQR when the exact solution is unknown at the initial nodes. The accuracy of MLS increases as the number of basis polynomials and nodal points increases.
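To make the workflow concrete, the following minimal NumPy sketch (ours, not the authors' MATLAB code) evaluates an MLS approximation from scattered nodal values. It uses a plain monomial basis rather than the paper's Gram-Schmidt orthogonal polynomials, an assumed cosine form of the "trigonometric weight function", and an SVD-based pseudo-inverse of the moment matrix in the spirit of the paper's singular value decomposition rule:

    import numpy as np

    def mls_eval(x_eval, nodes, u_nodes, m=5, support=0.5):
        # Monomial basis of degree m (the paper constructs orthogonal
        # polynomials instead; monomials keep the sketch short).
        def basis(x):
            return np.array([x ** j for j in range(m + 1)])

        P = np.array([basis(xi) for xi in nodes])   # n x (m+1) basis values at nodes
        out = []
        for x in np.atleast_1d(x_eval):
            r = np.abs(x - nodes) / support
            # assumed cosine weight, positive inside the support, zero outside
            w = np.where(r < 1.0, np.cos(0.5 * np.pi * r) ** 2, 0.0)
            A = P.T @ (w[:, None] * P)              # moment matrix A(x)
            B = P.T * w                             # B(x)
            a = np.linalg.pinv(A) @ (B @ u_nodes)   # SVD pseudo-inverse of A(x)
            out.append(basis(x) @ a)                # u_h(x) = P^T(x) a(x)
        return np.array(out)

    # Example 1 setup: 8 nodes on [0, 1]; the exact solution e^x supplies nodal data
    nodes = np.linspace(0.0, 1.0, 8)
    xs = np.linspace(0.0, 1.0, 9)
    print(np.max(np.abs(mls_eval(xs, nodes, np.exp(nodes)) - np.exp(xs))))

Here np.linalg.pinv computes the Moore-Penrose pseudo-inverse via SVD, which sidesteps the near-singularity of A(x) when only a few nodes fall inside the support of the weight function.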
Numerical examples

Example 1. Consider the following nonlinear fourth-order integro-differential equation [1], with initial conditions U^(i)(0) = 1, i = 0, 1, 2, 3. The exact solution is given by U(x) = e^x and, using the transformation in (15) with the given initial conditions, we obtain the working system. Solution of Example 1 with m = 5 and n = 8 points: select the initial nodal points x_i using dx = 0.25 and the corresponding approximate solution U(x_i). Following the outlined steps, we compute the values of u(x) at x = 0 to 1 in steps of 1/8 using the MLS method and five orthogonal polynomials. The following are the obtained results. The optimal J(u) is quite close to zero, so we expect a good approximation. The exact and enhanced MLS solutions coincide at the knots (Fig. 1). All the interpolated values are close to the exact solution; only an insignificant difference exists, as shown in this figure, and the next figure highlights this observation. The values in Table 4 are very close to the exact solution. From Fig. 3, the exact and approximate solutions coincide at the knots. The observed errors are insignificant, as shown in Fig. 4; this implies perfect interpolation.

Solution of Example 2, using m = 5 polynomials and n = 15 nodes: the exact solution is U(x) = x^5. Applying (12) to Example 2, we select the initial nodal points and the corresponding approximate solution. Following the outlined steps, we compute the values of u(x) in steps of 1/15 using the MLS method. The following are the obtained results, with the interpolating polynomial U(x) = (1/55440)x^11 + (1/3024)… The optimal J(u) is quite close to zero, so we expect a good approximation, and the exact and approximate solutions in Table 5 are very close. For the next example, the exact solution is given by U(x) = x^2 − 2 and, using the transformation in (15) with the given initial conditions and following the procedure in Example 1, the interpolating polynomial is obtained. Only the first and third coefficients are significant; the others are very close to zero and thus insignificant, since their p-values are greater than 0.05. The R-square and adjusted R-square are both 1.0. The statistics in Table 8 show that the chosen coefficients are the desired constants in the interpolating polynomial. The polynomial is a good fit for the MLS data, and any value in the [0, 1] interval can easily be evaluated with high precision.

Discussion of results

It is worth noting that the computations were carried out using MATLAB 9.2 on a personal computer of the following specifications. Table 2 indicates a close proximity between the exact and MLS solutions; the following figure compares the obtained solutions. All the p-values in Table 3 are less than 0.05, and the computed R-square and adjusted R-square are equal to 1.0. The statistics in Table 3 show that the estimated coefficients are the desired constants in the interpolating polynomial; the estimated polynomial is a good fit for the MLS data. The observed errors are insignificant, as shown in Fig. 6, which implies perfect interpolation. Following the given procedure in Example 1, the interpolating polynomial is obtained; all the computed coefficients are significant except the first, which has zero value. All parameters with a p-value less than 0.05 are chosen. The R-square and adjusted R-square are both one. The statistics in Table 6 show that the estimated coefficients are the desired constants of the polynomial, which is a good fit for the enhanced MLS data; any value in the [0, 1] interval can easily be evaluated with high precision. A high distinction of this method over existing methods is the significant interpolating polynomial obtained as a result of the constructed basis function, which was then used to reproduce the solutions over the entire problem domain.
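The reported R-square of 1.0 and the "negligible error" behaviour can be reproduced in spirit with an ordinary polynomial fit to the knot solutions. The sketch below (ours, in Python rather than the authors' MATLAB, with the exact solution of Example 1 standing in for the MLS knot data) fits an interpolating polynomial and reports the maximum evaluation error over [0, 1]:

    import numpy as np

    # Knot data standing in for the MLS solution of Example 1 (U(x) = e^x)
    knots = np.linspace(0.0, 1.0, 9)
    u_knots = np.exp(knots)

    # Interpolating polynomial through the knots (degree = number of knots - 1)
    poly = np.poly1d(np.polyfit(knots, u_knots, deg=len(knots) - 1))

    # Reproduce the solution over the entire domain and measure the error
    xs = np.linspace(0.0, 1.0, 201)
    print("max |error| on [0, 1]:", np.max(np.abs(poly(xs) - np.exp(xs))))

Because the polynomial passes exactly through the knots, the residual error between knots reflects only the smoothness of the underlying solution, which is the same mechanism the paper exploits to evaluate any point of the domain with high precision.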
The solutions produce a negligible magnitude of error at each evaluation point, and this demonstrates the method's reliability and effectiveness relative to existing methods.

Conclusion

An enhanced MLS method with smooth basis polynomials is used to solve the fourth-order integro-differential equation of the Volterra type. At any arbitrary point, the weight function can be chosen to minimize the weighted residual. Based on the results obtained, the weight value was given as a function of the evaluation point, which accounts for the major difference between the enhanced MLS, the MLS method and the popular least square method. Moreover, from the table of results, the error of the enhanced MLS solution shows a tendency to increase as x increases towards the end boundary point; this behaviour is expected in any numerical method. Hence, we conclude that the proposed enhanced moving least square method is good for solving the class of equations described in this paper. Finally, a significant interpolating polynomial could be constructed and used to reproduce the solutions over the entire problem domain. The magnitude of the error at each evaluation knot demonstrates the reliability and effectiveness of this scheme. The application of the new weight function, SVD, and the orthogonal basis in the implementation of the conventional MLS method constitutes the said enhancement. Computing the inverse of the moment matrix A(x) via SVD minimized the problem of near-singularity and improved the accuracy of the results. The study concluded that the enhanced MLS provides an alternative and efficient method of finding solutions to Volterra integro-differential equations and Fredholm-Volterra integro-differential equations. It is therefore recommended that the method be used in solving the classes of problems considered.
Design and Implementation of a Ball-Plate Control System and Python Script for Educational Purposes in STEM Technologies

This paper presents the process of designing, fabricating, assembling, programming and optimizing a prototype nonlinear mechatronic Ball-Plate System (BPS) as a laboratory platform for STEM engineering education. Due to the nonlinearity and complexity of the BPS, the task presents challenges such as: (1) difficulty in controlling the stabilization of a particular position point, known as the steady-state error; (2) position resolution, known as the specific distance error; and (3) adverse environmental effects, the light-shadow error, which is also discussed in this paper. The laboratory prototype BPS for education was designed, manufactured and installed at Karlovac University of Applied Sciences in the Department of Mechanical Engineering, Mechatronics program. The low-cost, two-degree-of-freedom BPS uses a USB HD camera for computer vision as a feedback sensor and two DC servo motors as actuators. Due to the control problems, an advanced block diagram of the control system is proposed and discussed. The open-source control system is based on Python scripts and makes use of ready-made library functions; it allows the colour of the ball and the parameters of the PID controller to be changed, which indirectly simplifies the control system and performs the mathematical calculations directly. The authors will continue their research on this BPS mechatronic platform and its control algorithms.

Introduction

Engineering students in STEM need the practical application of theoretical concepts learned in class to master control methods and problems. The authors' goal is to help students learn the control theory of systems in an engineering context through the design and implementation of a simple and low-cost BPS. Students will be able to apply computer modeling tools, carry out the control system design and achieve software-hardware implementation in real time while solving the ball position control problem. The overall project development is presented and can be adopted as a guide for replicating the results or as a basis for a new approach to the design of mechatronic learning platforms. In both cases, we have a tool for implementing and experimentally evaluating control strategies that can be further improved in the future. University laboratories and experiments play a very important role in successful STEM engineering education, especially when it comes to robotics and automatic control applications. The rapid development of BPS applications has been noted recently due to the challenges related to control and fast dynamic response, which require short, fast sensing and immediate correction by the selected controller. Since the control of fast unstable systems is very important in a variety of practical applications, a mechatronic BPS learning platform can be a successful tool when used for training in robotics and automation control applications and control methods. In the literature, we find several examples of approaches to this topic. The feedback of the position of a sphere can be detected with the help of a camera, as shown in [1]. That article describes the synthesis of a controller for a two-dimensional electromechanical system consisting of a ball and a plate, intended for the study of system dynamics and laboratory experiments with various control methods based on classical and modern control theory. The system consists of a square plate movably fixed in the center.
Its inclination can be changed in two orthogonal directions. A servo drive with a controller and two stepper motors was used to tilt the plate. The control problem of the described system is to keep the freely rolling ball in a certain position on the plate. An intelligent video system consisting of a CCD camera, an image interface and a program for real-time image processing is used to measure the position of the ball. The BPS has also been treated as the two-dimensional extension of the ball-and-beam system presented in [2]. S. Awtar and others presented the dynamic properties of the BPS, a mathematical model with a corresponding simplified model, and an analysis of the application of different types of PID controllers. Based on the results of the analysis of different controllers, a controller with a switching mechanism was proposed to control the position of the BPS [3]. In addition, F. Zheng describes in [4] the design of the hardware, the selection of sensors and actuators, the modeling of the system, the identification of the parameters, the design of the controller and experimental tests. The authors in [5] proposed a resistive touch screen technique to determine the position of the ball. This successfully eliminated the illumination effect that can cause an error in camera-dependent control systems. For the multivariable and complicated control system of a BPS, a touch screen and a rotating pneumatic cylinder are chosen in that paper instead of a camera and a stepper motor. The simulation results show that the system with the proposed control method has good dynamic and static characteristics. Not only has the fuzzy technique become a popular choice for the BPS; there are also works that use a genetic algorithm with a neural network or a sliding mode controller to solve this nonlinear problem, as shown in [6]. In that paper, a genetic algorithm (GA)-based PID neural network (PIDNN) controller is proposed for the BPS. The GA is used to train the weighting factors of a multilayer neural network, overcoming the disadvantage of the backpropagation (BP) algorithm, which easily falls into local extremes, while retaining the advantage of the PIDNN controller, which has a simple structure and good dynamic and static performance. Furthermore, the authors Y. Pattanapong and C. Deelertpaiboon in [7] propose a position control technique for the BPS using fuzzy logic with adaptive integral control. The aim is that the adaptive integral gain automatically adjusts its value and becomes active only when the position of the ball is within the specified distance error. This novel system takes advantage of the integral gain's ability to eliminate steady-state errors and uses the fuzzy logic technique because it is simple and does not require a mathematical model of this nonlinear system [8]. The current position of the ball is determined using a webcam mounted directly above the plate. Fuzzy controllers as advanced solutions are also described in [9,10]. In articles [11,12], the authors propose sliding mode techniques (adaptive backstepping control) with a fuzzy supervision strategy. They found experimentally that adaptive backstepping control is more effective than conventional SMC, which takes much time to achieve favorable tracking accuracy. In addition, other works present the use of FCMAC controllers [13] and feedback linearization controllers [14].
Another paper deals with disturbance modeling and state estimation for offset-free predictive control with state-space models [15]. In another paper, a virtual and remote laboratory for the ball and plate system is presented [16]. The authors in [17] proposed a control algorithm based on cascade PID and compared it with another control method. The paper shows the results of the accuracy of the ball stabilization and the influence of the filter used on the waveform. The application used to detect the ball position measured by the digital camera was developed using EmguCV, a cross-platform .NET wrapper for the OpenCV image processing library. The aim of the paper [18] is to teach students the theory of control systems in an engineering context through the design and implementation of a simple and low-cost ball and plate system, in which students apply mathematical and computer modeling tools, control system design and real-time implementation. Numerous MPC algorithms have been used in the past for various industrial process controls, but also for numerous other processes. Examples of applications are: heating, ventilation and air conditioning systems [19], robotic manipulators [20], electromagnetic mills [21], servo motors [22], quadrotors [23], autonomous vehicles [24], and modular multirotors and the improved design of unmanned aerial vehicles [25,26]. A fast state-space MPC algorithm was presented in papers [27,28]. The paper [27] shows the development and modeling of a laboratory ball-on-plate process that uses a touchpad as feedback, with a simplified process model based on a state-space process description. In paper [28], a fast state-space MPC algorithm is discussed. According to the authors, its main advantage is the simplicity of the computation: the manipulated variables are found online using explicit formulae, with the parameters computed offline; no real-time optimization is required. The articles [29,30] describe MPC algorithms with state-space process modeling and state estimation methods for these algorithms. A practical approach is described in [31], but only for processes described by simple step-response models and by discrete transfer functions (i.e., difference equations); this work follows the idea presented for state-space models. Some specialized methods were developed to handle constraints in online MPC optimization that make it possible to use sampling times of the order of milliseconds [32]. A more advanced approach, based on Lyapunov functions, is discussed in the following papers. In both theory and practice, Lyapunov functions are an important tool for analyzing the stability of dynamical systems [33]. They guarantee the stability of equilibria or more general invariant sets, as well as characterize their basins of attraction. Numerous computational construction approaches were created within the engineering, informatics, and mathematics communities due to their usefulness in stability analysis. They apply methods such as series expansion, linear programming, linear matrix inequalities, collocation methods, algebraic methods, set-theoretic methods, and many others to various types of systems, such as ordinary differential equations, switched systems, non-smooth systems, discrete-time systems, and so on [34,35]. A method based on semi-definite programming is proposed in work [36] to estimate an invariance kernel with a target as large as possible by iteratively searching for Lyapunov-like functions. Central to the framework in [37] are Lyapunov invariants.
These are properly constructed functions of the program variables that satisfy certain properties, analogous to those of Lyapunov functions, along the execution trace. Finally, the book [38] describes passivity-based PID control of nonlinear systems for a general engineering audience in a user-friendly way. The e-book presents the material with minimal mathematical background, making it accessible to a wide audience; familiarity with the theoretical tools reported in the control systems literature is not necessary to understand the concepts contained within. The latter was an inspiration to the authors of this research in adapting the topic of PID control to undergraduate study programs. This paper describes the stages of designing and building a mechatronic BPS with computer vision as feedback, for educational purposes in STEM engineering education at the Karlovac University of Applied Sciences. The concept design of the depicted prototype emphasizes the avoidance of complicated mathematical methods and formulas in the ball control process. Aiming at a low-cost, well-documented, simple and easily implemented setup with good control precision, this paper proposes computer vision as feedback via a Python OpenCV PID-controller script with adjustable PID parameters to balance the ball at different given setPoints, as explained in the examples in reference [39]. General knowledge of the theory of control of dynamical and nonlinear systems was drawn from the reference literature [40-42]. This paper's contribution is divided into several thematic points:
1. The BPS mechatronic prototype's original design was based on computer modelling capabilities for the manufacture of all robotic and auxiliary parts.
2. Instead of elaborate mathematical models and settings for a nonlinear system, a Python OpenCV script with ready-made functions was used.
4. A control technique is presented and implemented in the program code, in accordance with the simplification of parameter manipulation by introducing ready-made Python script functions.
5. A new interactive pop-up window allows manipulating sensor outputs for process control, changing the colour, and setting the setPoint.
The following is a breakdown of the article's structure. Section 2 explains the methodology used in this research study. The computer design methods and procedures for building the laboratory BPS prototype are briefly described in Section 3; the individual robotic parts, including servo motor shaft holders, levers, and plate joints, are designed there with as few parts as possible. In Section 4, the Python script technique is detailed, with an emphasis on the ready-made functions that generate feedback by transforming a picture from the USB camera into a collection of ball-position correction request data. The pop-up window software implementation, in connection with the HSV standard colour palette settings and the PID controller coefficient settings, is discussed in Section 5. The findings of tests comparing the influence of the controller coefficients, the roughness of the substrate, and the amount of light are briefly presented in Section 6. Finally, Section 7 concludes the article.

Methodology

In this part of the paper, the authors discuss the methods used in the research study. The methodology explains what was done and how it was done, so that readers can assess the reliability and validity of the research.
It covers the type of research conducted, how the data were collected and how the data samples were analyzed. It discusses which sensors and materials were used in the study and the reasons for choosing these methods. The research design generally focuses on applied research with the aim of developing design techniques, building prototypes and implementing the control procedures. The authors wanted to increase scientific understanding and solve the practical problem of controlling nonlinear systems more easily. In general, applied deductive research aims to test theory; in this case-study research, however, the focus is on demonstrating a new and simpler method for controlling a nonlinear system based on research and prototype implementation. In collecting and analyzing the original data, quantitative research was carried out with numerical results, while qualitative research was concerned with the descriptions and meanings of the experiments carried out. Both analyses were applied in this work. Quantitative research is expressed in numbers and diagrams, while qualitative research is expressed in words; the latter was used to understand the design concepts, the simple solution for the robotic servo arm design with a dry bearing, the observed uncertainties and inadequacies in the control system, and the interpretation of the results of the numerous experiments. This type of research allows the reader to gain deeper insight into segments that might otherwise be misunderstood. Part of the qualitative method includes interviews with open-ended questions, observations described in words, and literature reviews that explore similar concepts and theories of nonlinear system control. Of course, reliability and validity are the terms usually used to assess the quality of research. The extent to which the results are reproducible when the study is repeated under the same conditions cannot be guaranteed by the authors. The authors are aware that a reliable measurement is not always valid: the results may be reproducible but are not necessarily accurate. An effective measurement was produced after determining a criterion variable; the correlations between measurement outcomes and criterion measurement results were not calculated expressly to test the criterion validity.

BPS Computer Design and Fabrication

The steps of the original BPS design and production phase are discussed in this section of the article. The BPS concept that was evaluated, designed, and chosen for production is essentially a clone of similar BPS solutions presented in the works [1,3,7,11,16-18], but with details similar to [11,27]. However, the "driving board" for the two servo motors had to be picked first. The well-documented Arduino UNO microcontroller board with two matched servo actuators [43,44] was the obvious choice. The Arduino Uno is a low-cost, well-documented platform that has been demonstrated to work in a variety of multi-platform applications. SolidWorks is well known as a software solution for computer-aided design (CAD) and computer-aided engineering (CAE) that is widely used in all cases of technical and engineering design [45]. Ultimaker Cura is the most popular printing software in the world [46].
Fabrication and Mounting

Because of its simplicity, the BPS prototype, shown in Figure 1, is made up of a dozen printed parts, including the servo motor first plug-arm shown in Figures 2 and 3, the BPS plate housing shown in Figures 8 and 9, the camera housing shown in Figure 10a, the tube slippers shown in Figure 10b, the base plate shown in Figure 11a, the central pillar of the BPS plate shown in Figure 11b, the tube knees shown in Figure 12, and the Arduino board base plate and mounting screws. The DC servo motor's first robotic arm is designed and built with a central elliptical hole for the servo motor axle holder and a smaller round hole for the arm bearing shaft, as shown in Figures 2 and 3. This connection must take the entire servo motor axle holder, as well as the arm bearing shaft, without any air clearance [47]. The design steps of some BPS parts are displayed in SolidWorks as final files for the Ultimaker Cura printing software in the photographs in Figures 2-9. Figure 2a,b illustrates the first part of the robotic servo arm, an adjunct to the DC servo motor half-shaft, whose goal is a strong connection to the original output of the DC servo motor shaft on one side and a clearance-free junction of the shaft with the jaws of a knee joint on the other side. The crankshaft with the jaws of the second robotic arm of the servo motor is connected to the first servo handle by inserting the shaft into a small hole through both parts, as shown in Figure 4. The hole at the left side is a holder for a ball dry bearing.
Figure 3 represents the first robotic servo arm "slice phase" in the printing software and the finished part of the servo arm after the printing process. The crankshaft "slice phase" in the printing software and the finished part with an installed magnet after the printing process are shown in Figure 5. The knee-arm shown in Figures 4 and 5 is the second portion of the DC servo motor robotic arm; it is built and parameterized to match the actual size of the BPS plate, with the same horizontal distances from the plate's centre, providing equivalent angular transmission from both DC servo motors [44]. The DC servo motor is held in place by the servo motor housing, shown in Figures 6 and 7, which is screwed to the base plate shown in Figure 11a. The tiny metallic ball integrated into the top of the central pillar of the BPS plate provides a robust but flexible connection and ensures the BPS plate's central location, as shown in Figure 11b. Furthermore, both servo motor knee-arms have small integrated metallic balls on top and, by securely embracing the magnetic cups on the underside of the BPS plate, support the plate in a horizontal position, as shown in Figure 11b. A detailed description of the robotic system is available in [47].
The Tower Pro MG995 DC servo motor housing design phases are shown in Figure 6. The DC servo motor housing "slice phase" and the finished part with the built-in servo motor are shown in Figure 7a,b. The BPS plate housing design phases are shown in Figure 8a,b. The bottom BPS plate "slice phase" and the finished part with installed magnetic cups are shown in Figure 9. The system sensor, an HD USB camera, is built into the white housing, as shown in Figure 10a. The tube slippers for the two vertical square tube pillars are visible in Figure 10b. The base plate assembly for the servo motors and the central BPS pillar is visible in Figure 11a, and three metallic balls for the three magnetic cups under the BPS plate are shown in Figure 11b. Figure 12 shows the elbows for the horizontal and vertical mounting tubes of the camera holder.

General BPS Design

This section of the paper describes the implementation of computer vision in the mechatronic education BPS prototype. During the project's execution, which included the preparation of the student's practical diploma thesis and subsequent experimentation by the co-authors of this paper, some limitations and flaws in the prototype, the 3D print material and method, as well as difficulties in achieving stability when placing the ball in the desired position, were discovered. The purpose of this paper and project is to provide a basic and accessible experimental setup for learning, programming, and comprehending feedback control concerns in a real-case manual setting. The mechatronic system described in the paper was originally designed, developed and programmed with the help of the student Tomislav Tropčić at the Karlovac University of Applied Sciences [40]. The sideways view of the experimental platform is shown in Figure 13 (top left and right). The system uses a USB HD camera as a feedback sensor, placed 160 mm above the controlled platform and embedded in the camera holder, as shown in Figure 13. The 1920 × 1080 pixel (Full HD) camera captures 30 frames per second. Other technical data of the camera are: High-Speed 120 fps PCB USB2.0 Webcam Board, 2 Mega Pixels, 1080P, OV2710 CMOS, Camera Module with 2.1 mm Lens, ELP-USBFHD01M-L21.
Three balls with identical sizes but different colours were chosen for the experiment, as indicated in Figure 13, bottom and left segments. Table tennis balls with a diameter of 40 mm were chosen in the following order: black, red, and orange. A smaller red ball with a diameter of 20 mm, composed of a silicone mixture with a substantially higher mass, was utilized as a comparison. The ball was moved over a variety of materials with varying friction properties: 3D print material, a two-millimetre Plexiglas cover plate, white paper, and light grey sandpaper (180 particles per inch). The chosen materials had varying roughness values, which resulted in unequal resistance during the movement of the test balls over time. The white 3D printed platform plate is 150 × 150 mm and is supported by three supports, or pillars, the middle of which is vertically immobile and located in the geometric centre of the platform's square surface. A simple dry "magnetic" bearing with a metallic ball and a magnetic cup on the underside of the platform in the geometric centre was designed to tilt the platform about both horizontal axes. When the DC servo motor's two vertical robotic arms are raised and lowered, the platform tilts in firm contact with the robotic handle through a dry bearing on one side or the other. Servo motors are connected to the lower half of the motor with steerable arms with a wedge in the elbow, as shown in Figure 13, above and left. They are at a 90-degree angle to each other geometrically, and the grips are equidistant from the central fixed bearing. The servo motor handle's horizontal portion (first arm) is attached to the servo motor protrusion, while the vertical portion (second arm with jaws) includes a spherical metallic ball glued to the top and a magnetic cup. Because the cup is fastened in the lower half of the steerable base, they form a firm and dry bearing that facilitates rotation. A simple robotic lever system was created using a solid elbow and a shaft with a wedge diameter of 4 mm as a dry bearing, in which both DC servo motors, with a rotating angle of ±15 degrees, transmit the same angular motion to the BPS platform.

Computer Vision Issues

Performance in applications of recognizing patterns, forms, colours, and positions of objects is one of the most critical difficulties in the application of computer vision. Given the limited quantity of data available in robotics, the issues of choosing the right substrate, the lighting, and methods for evaluating image and video quality without a reference are significant. Although simulations and visualization are crucial components in the preliminary phase of the scientific setup of an experiment, the algorithms utilized concern real applications rather than the development of mere theory.
Image formation, CCD camera resolution, advanced image features, real-time sampling frequency, binary vision, optical flow, image filters, object creation, epipolar geometry reconstruction, motion tracking, segmentation, grouping, and recognition of objects are all unavoidable topics in computational vision in mechatronics. Advanced research in this scientific subject is enabled by the capabilities of software modelling of image processing techniques and approaches for object localization and geometric measurements. If the experimental setup is conventional, such as a USB HD camera, software for analysing and developing image processing functionality becomes a powerful tool.

Image Converting Techniques

The ready-made functions used in the Python script for image conversion are described in the order in which the image obtained from the USB camera is processed. In order to get more images per second, the resolution is halved in the program code to 640 × 480 pixels, so that the number of captured images can be doubled, from 30 to 60 images per second. The ready-made Python image resolution setting is defined as: "self.cam_width = 640, self.cam_height = 480". The camera uses a USB connector for power and for communication with the computer.

• VideoCapture object-VideoCapture()
When launching the application, it is necessary to create an object that will capture the video recorded with the USB camera. The application does not process stored video (e.g., on a hard disk or memory card) but the stream of data that the camera records in real time (live stream); the so-called VideoCapture object downloads the series of images from the camera (30 images per second). The VideoCapture object only needs the camera number (0 = built-in, 1 = external USB camera) from which the recording comes. Algorithm 1 shows a fragment of the code. All other processing (reception, processing and image formation) is performed autonomously "under the hood" of the ready-made function, freeing the programmer from a big job. In this part of the code, it is necessary to define the dimensions of the images captured by the camera; the image is defined to be 480 pixels high and 640 pixels wide.

• Colour model conversion from RGB to HSV-cvtColor()
All colours are obtained by using and combining colours in the colour palette. If we use the RGB (R-Red, G-Green, B-Blue) palette, then we have three basic colours: red, green and blue. If each colour is written in 256 shades, then a combination of the available shades gives a palette of 16.7 million colours. Another colour representation (or colour space) is HSV (H-Hue or Tone, S-Saturation, V-Value or Brightness). The RGB colour space does not separate colour and brightness information, so brightness variations affect the RGB channel values. The HSV colour space separates colour from saturation and brightness and is suitable for colour-based image segmentation [48]. The conversion is carried out under the hood, because it is easier to obtain a binary image of the object when it is written in the HSV format. The function is shown in the code fragment in Algorithm 2.

• Noise image removal-GaussianBlur()
The next step is the process of removing noise from each image. The first step is blurring the edges of the image (Blur), using the Gaussian Blur function (blurring is performed using the Gaussian formula).
When applied in two dimensions, this formula produces a surface whose contours are concentric circles with a Gaussian distribution from the centre point. The OpenCV documentation for Gaussian Blur states that the kernel size should be a positive and odd value; higher values imply a more blurred image and vice versa. The authors decided to use a Gaussian kernel size of 11 × 11 pixels, which is used by the OpenCV 2D filter function as the minimum size in order to convolve an image with the Discrete Fourier Transform-based algorithm [49]. The function is shown in Algorithm 2.

• Binary image formation-inRange()
The captured image has a certain resolution (640 × 480 pixels), is converted to the HSV colour model, and the noise is removed. It is then necessary to translate the image from a coloured image to a black and white image without shades, where each pixel in the image is coloured either black or white. It is necessary to determine which HSV-formatted colours are converted to black and which to white. The utilized object tracking methodology detects the object based on the range of pixel colour values in the HSV colour space. The selected colour will be displayed as white, while all other colours will be displayed as black in the binary image, as shown in Figure 15. The function is also shown in Algorithm 2.

• Binary image noise reduction-erode()
The resulting binary image may contain certain noise, usually located at the boundary of the contour of the object in the binary image. Applying the erode() function will remove certain noise, but the consequence may be a reduction of the contour of the object; this is shown in Algorithm 2.

• Object localization on a binary image-findContours()
After forming the binary image and the object, it is necessary to determine the contours of the object located in the image. The contours are passed to the application as a list of coordinates of the outer points that close the contour. There may be multiple contours in the image (intentionally, by mistake, or otherwise), and then the application will look for the contour that occupies the largest area. The function is shown in the code fragment in Algorithm 3.

• Minimal circle around the contour-minEnclosingCircle()
After locating the contour of the object, the smallest enclosing circle is fitted around it so that the coordinates of the centre and the size of the radius of the object can be determined. In this way, the centre and edge of the contour on the binary image are determined (Algorithm 3). The procedure requires that the radius of the contour be a minimum of 10 pixels, and after finding the contour, the application displays a circle and its centre so that the application user has an idea of where the application has located the centre of mass or geometric centre of the sphere. After determination, it is necessary to send the coordinates of the centre of the contour and the radius to the function that implements the PID controller, self.PID(self.setPointX), as shown in Algorithm 4.
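Taken together, the functions above form a short per-frame pipeline. The following condensed sketch (ours, not the paper's full script; the HSV bounds are placeholder values for an orange ball, and the OpenCV 4 return signature of findContours is assumed) shows the order in which the ready-made OpenCV calls are applied:

    import cv2
    import numpy as np

    cap = cv2.VideoCapture(1)                       # 1 = external USB camera
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)          # halve the resolution for speed
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

    lower = np.array([5, 150, 150])                 # assumed HSV range for an orange ball
    upper = np.array([25, 255, 255])

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)      # colour model conversion
        blurred = cv2.GaussianBlur(hsv, (11, 11), 0)      # 11 x 11 Gaussian kernel
        mask = cv2.inRange(blurred, lower, upper)         # binary image of the ball
        mask = cv2.erode(mask, None, iterations=2)        # remove boundary noise
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            c = max(contours, key=cv2.contourArea)        # largest contour wins
            (x, y), radius = cv2.minEnclosingCircle(c)    # ball centre and radius
            if radius > 10:                               # minimum 10 pixel radius
                cv2.circle(frame, (int(x), int(y)), int(radius), (0, 255, 0), 2)
                # here (x, y) would be handed to self.PID(...) for the correction
        cv2.imshow("frame", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()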
Algorithm 4. Fragment of the Python code: minimum radius check and PID call.

# only proceed if the radius meets a minimum size (min. 10 pixels)
if radius > 10:
    # draw the setPoint on screen as a 5-pixel red dot
    cv2.circle(frame, (int(self.setPointX), int(self.setPointY)), 5, (0, 0, 255), -1)
    # pass the setPoint and the actual ball position in x, y to the PID routine
    self.PID(self.setPointX, self.setPointY, x, y)

All the ready-made functions used-VideoCapture(), cvtColor(), GaussianBlur(), inRange(), erode(), dilate(), findContours(), minEnclosingCircle() and self.PID()-can receive parameter values in parentheses. Each function does a great deal of work (calculations) and significantly simplifies the application and its use. For this reason, the number of lines in the program, and consequently the size of the control program, is significantly reduced. After running the script, all functions and parameters are prepared to locate the ball, calculate its shape and find its geometric centre as the starting position (inputX, inputY). The Python script then starts the motors and aligns the axes of the platform at the appropriate angles to stabilize the plate at the ball's initial start position.

Python Control Script Design

The control algorithm requires knowledge of past values. Proportional-integral control, for example, tracks the cumulative sum of differences between a setPoint and a process variable. Because a Python function's local variables disappear when the function returns, the value of the cumulative sum must be stored elsewhere in the code. The coding problem is figuring out how and where to store this information between calls to the algorithm. For this reason, a generator object was created, which can receive parameter values in parentheses. There are several ways to get a value from such a generator; one way is to use the Python next() function, which executes the generator until the next yield expression is encountered and then returns the value. The Python script captures a series of camera images at 30 frames per second, i.e., approximately every 33 ms, which is the sampling rate of the ball position and hence the rate at which position corrections are calculated. Thus the parameter dT is a time constant correlated with the image-processing speed, and it serves as the PID controller's iteration parameter. Algorithm 5 shows the code fragment where the time variable is defined.

Algorithm 5. Fragment of the Python code: definition of the time step dT.

# how long since we last calculated (dT definition)
now = time.time()  # current time at this iteration
# change in time
dT = now - self.last_time
# print(dT)
# save for next iteration
self.last_time = now
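The generator pattern described above can be made concrete. The following is a minimal sketch of a PID routine whose state survives between frames because the generator is suspended at yield instead of returning; the function name, the gains and the setPoint value are illustrative and are not taken from the authors' script.

import time

def pid_controller(kp, ki, kd, set_point):
    # Keeps PID state (error sum, last error, last time) alive between
    # frames by suspending at yield; a sketch of the described pattern.
    error_sum = 0.0
    last_error = 0.0
    last_time = time.time()
    output = 0.0
    while True:
        position = yield output         # resumes when .send(position) is called
        now = time.time()
        dt = (now - last_time) or 1e-6  # guard against a zero time step
        error = set_point - position
        error_sum += error * dt
        d_error = (error - last_error) / dt
        output = kp * error + ki * error_sum + kd * d_error
        last_error, last_time = error, now

pid = pid_controller(kp=0.03, ki=0.01, kd=0.02, set_point=320)
next(pid)              # prime the generator up to the first yield
angle = pid.send(100)  # feed a measured X position, receive a correction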
Advanced Block Diagram of PID Controller

During the experiments and the selection of the most suitable colour, shape and size of the ball, as well as of the plate surface, it became clear that the block diagram is not as simple as it seemed at first glance. Significant and unavoidable disturbances were observed, i.e., external influences that prevented stable operation of the mechatronic system and placement of the ball at a given setPoint. Interference functions were observed that cannot be described mathematically with precision but proved to be influential, because methods of reducing them, and attempts to eliminate them, led to better results and greater stability. For this reason, the improved block-diagram control loop shown in Figure 14 was proposed; it highlights the locations in the CLC loop and the type of dysfunction, i.e., detrimental effect, on ball-position stabilization. First of all, a dysfunction (accidental disturbance) denoted d1(t) is defined; it represents the mechanical imperfections and clearances of the handles that contribute to the increase in error. Furthermore, another dysfunction d2(t) describes a group of functions within the software that, if inconsistent or unable to perform their task properly, increase position vagueness, introduce uncertainty and directly lead to significant problems and instabilities during position control. The third influential quantity, which contributes the most to the results of the experiment, is the amount of scattered light, i.e., the light intensity. The system showed the greatest stability when the illumination was adequate, light was scattered onto the substrate from several sources, and the direct beam of the lamp was shaded. Every shadow of the ball cast by a light source significantly changes the colour shade at the ball's edge and alters the contour image, which degrades recognition of the contour shape and, consequently, the creation of the binary image. It was observed that with a single light source, even though the system has a dispersive structure, the controller cannot stabilize the ball at all due to the above errors and conversion imperfections. In the block-diagram view, dysfunction d1(t) has a direct impact on the process (plate position) and forms a "steady-state error". Similarly, dysfunction d2(t), as an "internal" uncertainty, has a cumulative effect on the Python output dataset (inputX, inputY) before the setPoint calculation process (setPointX, setPointY) and thus forms a "light shadow error".

CLC Error Value Calculation

The equations in Algorithms 6 and 7, placed in the Python script, represent significant progress: they demonstrate the capacity to generate ball control using the program's ready-made calculations through functions, and to handle ball control without real physical control hardware (an external controller). The CLC comparison process generates error values for both the X- and Y-axes, errorX and errorY, which are defined in the computer code by parameters, as shown in Algorithm 6. In Algorithm 6, the term "inputX" refers to the ball's actual starting location on the plate along the X-axis, while the term "self.setPointX" refers to the new ball-position setPoint.
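Algorithm 6 itself is not reproduced in the text; the comparison step it describes amounts to a per-axis difference between the setPoint and the measured ball centre. A minimal sketch follows, in which the sign convention is an assumption:

def clc_errors(input_x, input_y, set_point_x, set_point_y):
    # Closed-loop comparison: deviation of the measured ball centre
    # (inputX, inputY) from the commanded setPoint, per axis, in pixels.
    error_x = set_point_x - input_x
    error_y = set_point_y - input_y
    return error_x, error_y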
In most cases, such a PID control system comprises two independent classic PID controllers connected by a single loop. The first manipulates the PWM control signal of the first DC servo motor to control the ball's X position. The second, as illustrated in Algorithm 7, uses the PWM control signal of the second DC servo motor to regulate the Y position. Assuming the plate is uniform and its two axes are ideally perpendicular, the PID controller uses identical coefficients for both axes. Equation (1), as described in [40,41], is the canonical mathematical form in general theory:

u(t) = kP e(t) + kI ∫ e(t) dt + kD de(t)/dt, (1)

where the term e(t) in Equation (1) is the errorX value in the Python script shown in Algorithm 7, the term ∫e(t)dt corresponds to self.errorSumX, and the term de(t)/dt corresponds to dErrorX in Algorithm 7. The control signal u(t) is represented in the program as the control signal for operating the X-axis DC servo motor and is denoted "angleX" according to Equation (1). The X-axis control signal is thus a sum of three terms. The control signal for the Y-axis DC servo motor is analogously labelled "angleY" in Algorithm 7.

Algorithm 7. Fragment of the Python code: calculation of both axis PID control signals.

# angle variables
angleX = self.zero_x + (errorX * self.kP + dErrorX * self.kD + self.kI * self.errorSumX)
angleY = self.zero_y + (errorY * self.kP + dErrorY * self.kD + self.kI * self.errorSumY)

The PID controller coefficients kP, kD and kI stated in Equation (1) were chosen during the optimization process and placed in the program code as default values, as shown in Algorithm 8. They can be changed during the experiment in increments of 0.001 in the control application's pop-up window. The proportional gain kP is responsible for the corrective reaction and acts on the difference between the desired and actual values, as shown in [40]. As the gain increases, the error decreases, but the system becomes more oscillatory. The integral term kI accumulates all previous error values by integrating them over time; integral action can also be thought of as a way to automatically generate the bias term of a proportional controller [41]. Once the error is removed from the system, this integral term stops growing. The derivative term kD uses current values to anticipate future error levels. If the system has a fast rate of change, the controlling effect, which also depends on the derivative component, can be amplified. The total value of the required correction is obtained by combining these three actions. The PID constants kP, kI and kD can be adjusted both in the program code and in the graphical visualization space boxes shown in Figure 15. As shown in Algorithm 9, the calculated control signals are sent to the Arduino board, which acts as the PWM driver for both DC servo motors.

Algorithm 9. Fragment of the Python code: sending both control signals to the Arduino board.

import serial  # pyserial; the port name and baud rate below are illustrative
# arduino = serial.Serial("/dev/ttyUSB0", 9600)

# send to Arduino board - X and Y control signals
arduino.write((str(angleX) + "," + str(angleY) + "\n").encode())
# print(angleX, angleY)

Dynamics and PID Control Issues Overview

There are numerous methods for controlling a dynamic system [40,41]. For the purposes of this case study, the philosophical principles that underpin these methodologies can be broadly classified into three types: descriptive, model-based and myopic. Descriptive techniques presume that a controller is provided, and the purpose is to determine whether the controlled system meets certain stability requirements; empirical tests include simulating the system, or running it under a variety of operational lighting and surface conditions and observing the outcomes.
A myopic approach, after the control parameter is chosen at the current moment, looks at the direction of the ball's movement in state space. The core algorithm of 1D control systems, i.e., of the X-axis control, is proportional-integral-derivative control [40]. It is the most studied class of controllers due to its simplicity, and it is almost always the first thing to test on a new system [41]. Despite the fact that it lacks a model and is short-sighted, it may operate admirably after a few manual tweaks. During the experiments, it was found that the ODE solution is a damped harmonic oscillator. This oscillatory behaviour means that the ball will overshoot the setPoint from any starting state with a nonzero error. Furthermore, the frequency of oscillation ω depends on both the gain coefficients and the system coefficients. Lower kI values will reduce and finally eliminate the oscillation, although recovery from steady-state error will be slower. A comparable analysis of the PD control problem for a second-order system again yields a damped harmonic oscillator, which is also featured in the experiment. Because derivatives are approximated by finite position differencing, ẋ ≈ (x(t) − x(t − dt))/dt, derivative estimation errors are an issue. The derivative contribution is more sensitive to measurement noise than the position estimate, since dt is small and appears in the denominator. As a result, the derivative term fluctuates, causing the control to track less precisely and more irregularly.

BPS Visualization and Control

This part of the paper focuses on visualizing the position of the ball after activating the application, and on managing the ball's position. First, the experiments showed that of the three selected balls, the highest-quality conversion to a binary image, and the best image processing overall, was achieved with the modified orange colour (HSV format parameters 0/77/115/51/253/255), a slight deviation from the "default" value entered in the program code (HSV default 1/77/115/61/153/255). The red colour (HSV default parameters 121/157/86/243/255/255) did not give sufficient response quality despite parameter modification; this increased the value of the disturbance function d2(t) and ultimately produced too much error and deviation in the calculation, which manifested itself as an inability to settle the red ball at a given default setPoint on the platform. The black ball, despite having the strongest colour contrast in its parameters (HSV default 0/0/0/25/25/25), could not be recognized at all as a shape or contour in the HSV standard, probably due to poor lighting quality. Figure 15 shows the interactive "Ball Tracking" pop-up window that serves as the control window for the mechatronic BPS prototype. The process can be controlled with various critical parameters using the designed on-screen functions. The centre of mass estimated in the Python script as the true centre of the orange ball is represented by a small white dot, five pixels wide. The normal and computed binary variants of the ball images are displayed in the upper right corner of the Ball Tracking window, as shown in Figure 15a,b. While the small white dot on the computer screen marks the ball's centre, clicking on a new place on the plate establishes the ball's desired position as a small red dot, also five pixels wide, as seen in Figure 19.
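The click-to-set-setPoint behaviour described above is typically implemented with an OpenCV mouse callback. The following is a minimal sketch; the window name and the drawing call are illustrative and do not reproduce the authors' code.

import cv2

set_point = [320, 240]  # default setPoint at the image centre

def on_mouse(event, x, y, flags, param):
    # A left click anywhere on the plate image becomes the new setPoint.
    if event == cv2.EVENT_LBUTTONDOWN:
        set_point[0], set_point[1] = x, y

cv2.namedWindow("Ball Tracking")
cv2.setMouseCallback("Ball Tracking", on_mouse)
# In the main loop the setPoint is then drawn as a 5-pixel red dot:
# cv2.circle(frame, tuple(set_point), 5, (0, 0, 255), -1)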
In the Python script, the equations for calculating the PID error values automatically generate the correction value for both the X- and Y-axes, balancing the BPS plate with both actuators. As for the servo motors' robotic arms, the calibrated mechanical "zero horizontal position" of the plate corresponds to a default angle of 37 degrees for both actuators, as shown in Figure 15a,b. If necessary, the "zero position" can be adjusted in one-degree increments within the "Calibrate" X- and Y-axis space boxes. Angle control is limited to ±15 degrees on both axes. In the experiment, the proportional coefficient kP is set to 0.03, the coefficient kD to 0.02, and kI to 0.01 or less.

Application "Ball Tracking"

Six HSV palette sliders are located on the left side of the Ball Tracking pop-up window, allowing fine adjustment of the colour hues for the best binary conversion. Figure 15a shows, in the upper right corner, a real-time image from the USB camera with the orange ball, which showed the best responsiveness and live-stream presentation during the experiment. Below this section are three frames or "space boxes" for fine-tuning the controller's PID coefficients in 0.001-unit increments, together with a "Reset PID" option that restores the default values (stored in the script). The object search (Start/Stop Tracking) is controlled by two square space boxes in the lower-left corner, while the right button controls the servo motors (Start/Stop Motors). Furthermore, at the very bottom of the interactive window there is a very handy option for manually adjusting the horizontality of the plate to compensate for unevenness of the surface on which the prototype is positioned. In the upper left corner, the image sampling interval in milliseconds is also shown (value 32 in Figure 15a). A binary figure of ideal shape depicts the identical position of the ball in Figure 15b; pressing the "Show Thresh"/"Normal View" button switches between the images. Two further pop-up screens were added to the Python script; they provide the graphical representation, time-period charts and numerical matrix representation of the relevant parameters for future mathematical analyses. Figure 16 shows, for example, a 6-s time-period chart with a graphical representation of the actual and selected setPoint positions, as well as the shaft angle value as the PID control signal. For a better understanding of the dynamics and stability of the BPS system, the time period of the strip chart is extended to 20 s in Figures 17, 21 and 22.
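Returning to the six HSV palette sliders described above: they map naturally onto OpenCV trackbars. A minimal sketch follows; the slider defaults reuse the "default" orange thresholds quoted earlier, and the window layout is illustrative rather than the authors' implementation.

import cv2

def nothing(_):
    pass  # trackbars require a callback; values are read explicitly each frame

cv2.namedWindow("Ball Tracking")
# Lower and upper HSV bounds (H, S, V) as six sliders.
for name, default, maximum in [("H min", 1, 179), ("S min", 77, 255),
                               ("V min", 115, 255), ("H max", 61, 179),
                               ("S max", 153, 255), ("V max", 255, 255)]:
    cv2.createTrackbar(name, "Ball Tracking", default, maximum, nothing)

# Each frame, the sliders are read back to build the inRange() bounds, e.g.:
# h_min = cv2.getTrackbarPos("H min", "Ball Tracking")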
The second manageable pop-up window in the Python code, placed on the background screen, presents the numerical data of the parameters shown in the figures over the same time period, for additional analyses if needed.

Experimental Results

Following the creation of the prototype, it was necessary to functionally verify the work and optimize all of the Python script's functionality using the actual BPS prototype components. After multiple revisions, functional operation of the BPS system was achieved, allowing a ball setPoint to be placed anywhere on the plate's surface (150 × 150 mm). The "sliding" of a smooth ball on the smooth Plexiglas plate was the first thing noted. As indicated in the advanced block diagram in Figure 14, this is a verified flaw of the mechanical system related to the plate's smooth surface, generally referred to as the "steady-state error" or dysfunction d1(t). In this research, the graphical depiction of ball movement is restricted to the horizontal X-axis, for easier explanation and to highlight the crucial elements of the control problem. A typical "overshoot" of the ball-position value in both directions occurred during the first experiments. Several items raised suspicion: the specific PID values, sliding on the flat surface, and mechanical clearances. The process of moving the ball from one side of the plate to the other along the X-axis, by roughly 250 pixels, as shown in Figure 16, is typical of the first series of tests. The graph in Figure 16 displays the ball's actual starting X-position of 100 pixels at 0 s (blue line) and the newly selected (setPoint) position at 390 pixels farther along the X-axis (red line). The controller constants are kP = 0.033, kI = 0.010 and kD = 0.023. With this set of constants, the PID control system can keep the ball within an overshoot of ±24 pixels, or around ±8 mm. The controller makes 32 control adjustments every second, and there are exactly 16 signal orders for the X-axis servo motor in each crossing of the ball's setPoint, as shown in the lower section of Figure 16. The angle variations are approximately ±4 degrees. Without a doubt, this illustrates a typical example of unsteady system operation with a harmonic frequency of one hertz [50]. In the following studies, despite varying the PID parameters, no substantial stability was attained. Several mechanical flaws were discovered after investigation. When using additive technologies such as 3D printing, it must be kept in mind that, due to the characteristics of the thermoplastic material, deviations in all three axes can occur during the printing and cooling process. This depends mostly on the thickness of the applied filament layers, with rises and depressions observed in thin layers over big surfaces, such as plates. In this case, mechanical levelling was required to polish the surface of the printed BPS bottom plate.
Furthermore, a new two-millimetre-thick Plexiglas plate with a sandblasted surface was installed instead of the smooth Plexiglas. Figure 17 displays the BPS chart for a 20-s period with the X-setPoint changing from 110 to 400 pixels. Rough stabilization of the position within an average range of ±12 pixels is noticeable in the first two seconds, followed by fine stabilization of the position within ±6 pixels after four to five seconds. The controller coefficients in this experiment are kP = 0.033, kD = 0.022 and kI = 0.001. During the studies, it was also discovered that lighting has a significant impact on the sensor system's operation. Figure 18 depicts multiple scenarios based on strong and weak illumination of the ball, from (a) weak illumination conditions to (d) a strong light source on one side of the ball; the latter was demonstrated to destabilize the BPS system. Such lighting makes it difficult to use the Python method inRange() as previously described. As indicated in the advanced block diagram in Figure 14, it was important to describe and document these sensor feedback defects caused by poor image conversion, referred to as the "light shadow error" or d2(t) dysfunction. Several smaller, discrete, shaded light sources were added to mitigate this negative effect, and the control precision was greatly enhanced. This increased the amount of light directed towards the ball, which then cast no visible shadow. A resolution issue, called the "specific distance error", was also discovered: the control system fails to recognize a new sphere-centre setPoint that is very close to the existing actual position. This can be classified as a sort of hysteresis, i.e., an insensitivity of the sensor or of the computer-vision recognition.
The largest specific distance error was found to be 6 pixels, or roughly two millimetres. This is the same as the diameter of the red dot that represents the sphere's centre of mass. A control instruction to move the ball 4 pixels in the horizontal X-axis direction is shown in Figure 19; however, there is no response, since the required setPoint offset lies within the defined distance error, i.e., within the sensor recognition error. The time graph on the right side of Figure 19 displays the setPoint value of the ball at 201 pixels on the panel and no actual signal from the controller. The blue line represents the signal noise from the ball's actual position sensor, which has an average value of 197.3 pixels and a variance of ±0.2 pixels (±0.07 mm); this is the demonstrated sensitivity of the CCD sensor. Figure 20 shows the residual specific distance error after the controller correction process: the setPoint position demand in the horizontal X-axis direction is 12 pixels (four millimetres). After two seconds of signal control, the position stabilizes with a residual dislocation of the real ball position of around two pixels, which is about 0.7 mm.
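For reference, the pixel-to-millimetre conversions quoted throughout (24 pixels ≈ 8 mm, 12 pixels ≈ 4 mm, 2 pixels ≈ 0.7 mm) imply a scale of roughly one third of a millimetre per pixel on the 150 × 150 mm plate. A one-line helper reproduces the reported values; the constant is inferred from the text rather than stated by the authors:

MM_PER_PIXEL = 8 / 24  # ≈ 0.33 mm per pixel, inferred from "24 pixels ≈ 8 mm"

def px_to_mm(pixels):
    # Convert a distance measured on the camera image to millimetres on the plate.
    return pixels * MM_PER_PIXEL

print(px_to_mm(12))  # ≈ 4.0 mm, matching the reported setPoint demand
print(px_to_mm(2))   # ≈ 0.7 mm, matching the reported residual dislocation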
Many variants of the controller coefficients were tried in order to further improve the stability of the BPS system. When an integral coefficient kI is present, the system works very quickly and nervously, with large oscillations and an inability to stabilize the ball for a long period, roughly five seconds, as illustrated in Figure 21. The control process is greatly enhanced when the integral coefficient kI is excluded from the equation. The controller coefficients in this experiment, shown in Figure 22, are kP = 0.030, kD = 0.020 and kI = 0. The specific error of the final X-position distance is still occasionally seen in the steady location of the ball: a dislocation of the ball, in a stable position, of 6 to 9 pixels (two to three millimetres) from the defined setPoint may be seen in the graph in Figure 22. With practically every start of a ball-position adjustment, the absolute angle correction of the DC servo motor, with a maximum permissible correction of ±15 degrees, can be seen in the lower graph in Figure 22. The usual activity of the proportional and derivative parts of the controller is visible there as characteristic control signals to the actuator.
Of course, in the absence of the integral contribution there is a delay in position control with the PD controller, about 0.15 s after initiation, but the BPS system has considerably superior stability. In accordance with the iteration frequency, the controller control signal directed to the X-axis DC angle-correction actuator is issued 32 times per second, as shown in Figure 22.

Conclusions

The implementation of the BPS prototype as a laboratory platform for the education of STEM engineers is discussed in this study. In addition, the design and implementation of the software and hardware are explored in detail. The computation time of an open-source control system based on Python scripts, which permits the use of ready-made library functions, is quite short. Thanks to the OpenCV environment, the calculation can be made as simple as possible.
The OpenCV technique was found to work when applied to the BPS process; however, relative to other publications it is important to improve the system in order to eliminate, or at least partially minimize, the influence of the disturbances indicated as errors in the improved block diagram. Because of the dynamic features of the mechatronic prototype and the requirements on suitable lighting, the PD algorithm proved to be more successful than the conventional PID solution. Because of the required amount of consistent light illumination, choosing an HD camera as the sensor for the control-system feedback proved to be quite challenging. A quantitative study, with numerical results, was carried out in collecting and analysing the original data, whereas the qualitative research was concerned with the descriptions and meanings of the tests carried out; both kinds of analysis were used in this study. Qualitative research is expressed in words, whereas quantitative research is expressed in numbers and graphs. The qualitative analysis was used to grasp the design principles, the simple solutions for the robotic servo-arm design with dry bearings, the observed uncertainties and inadequacies of the control system, and the interpretation of the results of multiple trials. The scientific approach always seeks categorical views and evidence, and even doubts that open up opportunities for other research teams to investigate, confirm or deny such phenomena more deeply. Additionally, the scientific approach always requires that readers of the presented paper can assess the reliability and validity of the research. Nevertheless, the authors hope that the presented work will inspire readers and students to develop new methods and applications of machine vision and computer vision for industrial and non-industrial applications, as the authors will undoubtedly continue their research on the BPS mechatronic platform and control algorithms. The selection of various control algorithms and the use of a resistive touchpad as a feedback sensor are the most likely directions for future study. Funding: This paper was produced as part of the "Atrium of Knowledge" project co-financed by the European Union from the European Regional Development Fund and the Operational Programme "Competitiveness and Cohesion" (OPCC) 2014-2020. Contract No: KK.01.1.1.02.0005. Data Availability Statement: The data presented in this study are available on request from the corresponding author.
16,759.4
2022-02-27T00:00:00.000
[ "Engineering", "Education", "Computer Science" ]
Liouville Conformal Field Theories in Higher Dimensions We consider a generalization of the two-dimensional Liouville conformal field theory to any number of even dimensions. The theories consist of a log-correlated scalar field with a background $\mathcal{Q}$-curvature charge and an exponential Liouville-type potential. The theories are non-unitary and conformally invariant. They localize semiclassically on solutions that describe manifolds with a constant negative $\mathcal{Q}$-curvature. We show that $C_T$ is independent of the $\mathcal{Q}$-curvature charge and is the same as that of a higher derivative scalar theory. We calculate the A-type Euler conformal anomaly of these theories. We study the correlation functions, derive an integral expression for them and calculate the three-point functions of light primary operators. The result is a higher-dimensional generalization of the two-dimensional DOZZ formula for the three-point function of such operators.

I. INTRODUCTION

Two-dimensional quantum Liouville theory has been a subject of much investigation since its first appearance in the study of non-critical string theory [1] (for reviews see e.g. [2][3][4]). The theory provides a realization of two-dimensional quantum gravity [5,6], is an essential ingredient of many string theory backgrounds and has been related to certain N = 2 SCFTs [7]. As a conformal field theory (CFT) it is non-compact, thus the set of Virasoro representations that make up its space of states is continuous. The aim of this work is to study a generalization of the two-dimensional Liouville CFT to any number of even dimensions that consists of a log-correlated scalar field with a background Q-curvature charge and an exponential Liouville-type potential (for an earlier work on the dynamics of the four-dimensional conformal factor see [8]). Consider an even-dimensional manifold M of dimension d without a boundary, equipped with a Euclidean-signature metric g_ab. The action of the higher-dimensional Liouville CFT is given in (1), where φ is a scalar field, P_g is the conformally covariant GJMS operator [9], with □ = g^{ab}∇_a∇_b and ∇_a the covariant derivative, and Q_g is the Q-curvature scalar [10]. The dimensionless parameters in the action (1) are the background charge Q, the cosmological constant µ and the coupling b; Ω_d is the surface volume of the d-dimensional sphere S^d. When d = 2 the action (1) is that of the two-dimensional Liouville field theory. When µ = 0, the action describes a generalization of the two-dimensional Coulomb gas that appeared in [11] as part of a proposal for a field theory description of inertial range turbulence and the analysis of the A-type conformal anomaly. The action (1) defines non-unitary, conformally invariant theories that localize semiclassically on solutions that describe manifolds with a constant negative Q-curvature. We will show that C_T is independent of the Q-curvature charge and is the same as that of a higher derivative scalar theory [12]. We will calculate the A-type Euler conformal anomaly of these theories. We will study the correlation functions, derive an integral expression for them and calculate the three-point functions of light primary operators. The result is a higher-dimensional generalization of the two-dimensional DOZZ formula for the three-point function of such operators [13,14]. The paper is organized as follows. In section 2 we will consider the classical higher-dimensional Liouville CFTs, verify their Weyl invariance, derive the field equations and define the background Q-curvature charge.
In section 3 we will study the higher-dimensional Coulomb gas theory, the two-point function and the quantum background Q-curvature charge, C_T and the A-type conformal anomaly. In section 4 we will analyze the Liouville correlation functions, derive an integral expression for them and calculate the three-point functions of light primary operators. Section 5 is devoted to a discussion and outlook. In appendix A we briefly review the higher-dimensional Möbius transformations that are used in section 4.

A. GJMS Operators and Q-Curvature

There are two objects in the action (1) that play an important role in conformal geometry (for a review see e.g. [15]). The first are the conformally covariant GJMS operators P_g [9]; in two and four dimensions they are the Laplacian and the Paneitz operator [16], respectively. The second object is the Q-curvature Q_g [10], which also takes an explicit local form in two and four dimensions. The integral of the Q-curvature on a Riemannian manifold M is an invariant of the conformal structure, but it is not in general a topological invariant. When M is a conformally flat manifold, the Q-curvature is related to the Euler density E_d, and its integral is proportional to the Euler characteristic χ(M), as stated in (6).

B. Classical Weyl Invariance

Consider the Liouville CFT defined by the action (1). Under a Weyl transformation of the metric, g_ab → e^{2σ}g_ab, the Liouville field transforms as φ → φ − σ/b, while the critical GJMS operator transforms as P_{e^{2σ}g} = e^{−dσ} P_g and the Q-curvature as Q_{e^{2σ}g} = e^{−dσ}(Q_g + P_g σ). These transformations imply that the action (1) is classically Weyl invariant for Q = 1/b. This is the classical value of the background charge, and it will be modified by quantum corrections. The transformation of the Q-curvature can be written as P_g σ + Q_g = e^{dσ} Q_{e^{2σ}g}, which is equation (11). If we take Q_{e^{2σ}g} to be a real constant Q, then solutions σ of equation (11) answer the question: given a manifold M with a metric g_ab, can we find σ such that under the Weyl transformation g_ab → e^{2σ}g_ab we obtain a manifold that is conformally equivalent to M and has constant Q-curvature? In two dimensions this means that we obtain a conformally equivalent surface with constant scalar curvature. In higher dimensions the Q-curvature does not determine the curvature tensor; however, the new metric with constant Q-curvature may have special properties. The field equations derived from the Liouville action (1) for a rescaled Liouville field φ → bφ take the form (11) with a negative constant Q-curvature.

C. Background Charge

When µ = 0, the action (1) describes a higher-dimensional Coulomb gas theory, and we will denote this action by S_C.G.(φ, g). For a conformally flat manifold with the topology of the sphere, one can use (6) to evaluate the effect of a constant shift of the field by φ_0. We will study these theories on the d-sphere S^d, and the following discussion is a generalization of the two-dimensional analysis in [14] to d dimensions. Since the sphere is conformally equivalent to flat space, we can perform a (singular) Weyl transformation and work with a flat metric. The resulting large-distance behaviour of φ is referred to as a background charge −Q at infinity. When using a flat reference metric, one must regulate the region of integration and introduce boundary terms. We can define the action as the large-R limit of an integral over the d-dimensional ball B_d of radius R, where d^{d−1}Ω is the volume element on its boundary ∂B_d = S^{d−1}. In writing down this action we have neglected boundary terms that are necessary in order to analyse the conformal boundary conditions of this theory [17], but are irrelevant to our analysis.
In addition, this action needs to be regularized in order to ensure its finiteness, which is done by adding a constant term of the form N Q² log R, where N is a suitable number.

D. The Semiclassical Limit

The semiclassical limit of the theory is b → 0. In this limit it is convenient to work with the rescaled field φ_c = bφ and the correspondingly rescaled action (16). The boundary condition (14) and the field equations that follow from (16) are then equivalent to equation (19), which describes a manifold with constant negative Q-curvature. Indeed, as discussed in a previous section, the semiclassical field equations resulting from the variation of (1) on a manifold M with a metric g_ab describe metrics on M that are conformally equivalent to g_ab and have constant negative Q-curvature.

A. Two-Point Function

In the Coulomb gas theory, i.e. µ = 0, the two-point correlation function of φ is logarithmic, as expected for a log-correlated field; the computation is done using an IR regulator L in the limit L → ∞, up to finite regulator-dependent terms. As in the two-dimensional case, we define vertex operators (21), which in the free Coulomb gas theory are primary conformal operators of dimension (22). In the Liouville theory the result is the same, since we can compute the dimension of vertex operators by considering correlations in a state of our choice: by choosing a state in which φ ≪ 0 we can turn off the Liouville interaction potential and reduce the calculation to the free-field case. In the Liouville CFT we require that the interaction term has dimension ∆_b = d, and thus using (22) we get the quantum-corrected value of the background charge. The flat-space stress-energy tensor is defined as the variation of the action with respect to the metric; therefore, the stress-energy tensors obtained from the Liouville action and from the Coulomb gas action are identical. The dependence of the flat-space stress-energy tensor on the background charge can be written as T_ab|_{Q=0}, the part originating from the variation of P_g, plus a Q-dependent improvement term. Looking at the two-point function of the stress-energy tensor in the Coulomb gas theory, and using the fact that the theory is Gaussian and the three-point function vanishes, one finds that the Q-dependent contribution vanishes (up to contact terms) for d > 2. The coefficient C_T is defined by ⟨T_ab(x) T_cd(0)⟩ = C_T I_ab,cd(x)/x^{2d}, where I_ab,cd(x) is the inversion tensor for traceless symmetric tensors. The calculation (26) implies that for d > 2 the coefficient C_T is independent of the background charge Q and is the same as calculated for the higher derivative scalar theory in [12]. Consider next the quantum action of the Coulomb gas theory, where D is the propagator of the theory. The A-type conformal anomaly coefficient a is defined via the trace anomaly (31), where E_d is the suitably normalized Euler density, and we work on a conformally flat space where all the Weyl-invariant terms on the RHS of (31) vanish. We get the expression (33), whose first term has been calculated in [18].

A. Correlation Functions

We are interested in calculating correlation functions of the vertex operators (21). By shifting φ → φ − (1/db) log µ and using (6) we get the KPZ scaling relation (35). We then insert the identity 1 = ∫_0^∞ dA δ(A − ∫ d^dx e^{dbφ}) into the Liouville functional integral, which expresses the correlation functions in terms of fixed-area correlation functions. By shifting φ → φ + (1/db) log A one sees that the fixed-area correlation functions satisfy a scaling relation. Using the relation (36), we can then perform the integral over A explicitly for Re(s) < 0; for Re(s) ≥ 0 the integral has a UV divergence at A → 0.
This corresponds to the fact that when Re(s) ≥ 0, as we will see later, there are no solutions to the classical equation of motion, i.e., there are no real saddle points. In this case the correlation function will include a non-universal part, which is polynomial in µ and depends on a UV cutoff, and a universal cutoff-independent part, which is proportional to (µA)^s ⟨V_{a_1}(x_1) ⋯ V_{a_n}(x_n)⟩_A.

B. Relation to a Free Field

The KPZ scaling relation (35) shows that the correlation functions in Liouville theory are not analytic in µ, and therefore we expect the naive perturbation theory in µ to fail. This follows from the fact that by shifting the Liouville field we can always change the value of µ, so there is no sense in which we can consider it to be small. We can separate the zero mode φ_0 of the path integral from the non-zero modes φ̃(x) and write φ(x) = φ_0 + φ̃(x). As in the two-dimensional case (see e.g. [3]), the Liouville measure factorizes accordingly. Under translations of the zero mode the measure dφ_0 is invariant, and the measure over the non-zero modes satisfies [Dφ̃]_{µe^{−dbδ}, φ_0+δ} = [Dφ̃]_{µ, φ_0}. In the limit φ_0 → −∞ the interaction term vanishes and the measure [Dφ̃]_{µ, φ_0} is asymptotic to the free Coulomb gas measure, with corrections given by the interaction term: a power series in the translationally invariant small variable µe^{dbφ_0}. While the limit φ_0 → ∞ of the integral over the zero mode is well behaved due to the presence of the Liouville-type interaction term, this is generally not the case for φ_0 → −∞. For the correlation functions (34) the leading dependence in this limit is e^{−dbsφ_0}, and we see that the integral has to be regularized for Re(s) > 0. We can regularize it by subtracting the leading divergences, as given by the asymptotic behaviour (41). The resulting correlation functions (42) have poles in the variable s at the values s = n. Denoting the residues by G^{(n)}_{α_1,…,α_N}(x_1, …, x_N) = Res_{s=n} ⟨V_{α_1}(x_1) ⋯ V_{α_N}(x_N)⟩, we can express them in terms of correlation functions ⟨⋯⟩_C.G. of the Coulomb gas theory, which vanish unless Σ_k α_k = Q.

C. The Semiclassical Limit

In this section we consider correlation functions (45) of vertex operators. The following analysis is a generalization of the two-dimensional case and follows the discussion in [14,19]. We wish to evaluate the integral (45) in the semiclassical limit b → 0 using the saddle-point approximation. The action (16) scales as b^{−2}; thus, in order for a vertex operator insertion V_α in (45) to affect the saddle points, we require the scaling α ∼ b^{−1}. We define α = η/b, where we keep η fixed as b → 0. Such vertex operators define "heavy" operators, whose dimensions follow from (22). "Light" operators are defined by vertex operators with α = bσ, where σ is kept fixed as b → 0; their dimension is ∆ = dσ in the semiclassical limit. The insertion of light operators can be accounted for, to lowest order in b, by a b-independent factor of e^{dσ_i φ_c(x_i)}, where φ_c is the saddle point; hence it does not affect the saddle point.
On the other hand, an insertion of a heavy operator modifies the field equation (18), as in (47). Assuming that in the neighbourhood of an operator insertion we can ignore the exponential term, one gets near a heavy operator a boundary condition under which the physical metric e^{2φ_c(x)}δ_ab in this region takes a conical form, with dΩ²_{d−1} the metric on S^{d−1}; the effect of a heavy operator can thus be interpreted as creating a conical singularity in the physical metric. Inserting the solution of equation (47) back into the field equations and requiring that the exponent is subleading, one gets the condition (50). Condition (50) is called the Seiberg bound in the two-dimensional case [2]. It was interpreted as the non-existence of local operators with Re(η) > 1/2. Stated differently, α and Q − α correspond to the same quantum operator, up to a relative scaling R(α) called the reflection coefficient. When considering the semiclassical limit we use, out of these two operators, the one that satisfies the Seiberg bound. An additional constraint for real saddle points follows from the Gauss-Bonnet-Chern theorem, by integrating (47); it implies that there is no real saddle point for the Liouville path integral with light operator insertions. When Σ_i η_i < 1, we can consider the fixed-area path integral (37), which still has a real saddle point. In the limit b → 0, we can fix the area, with φ_A = bφ, by using a Lagrange multiplier. This results in a semiclassical equation of motion showing that, in the case Σ_i η_i < 1, the classical solutions correspond to manifolds with positive constant Q-curvature (including the correct singularities) and finite area. The action evaluated on a classical solution obeying our boundary conditions is divergent. In order to regularize it, we perform the action integral only over the part of the ball B_d that excludes a ball b_i of radius ε around each heavy operator insertion. The action is regularized by adding the field-independent terms log R and η_i² log ε, multiplied by suitable numbers. The equations of motion for this action include both the equation of motion (47) and the boundary conditions. The leading exponential asymptotics in the limit b → 0 for the correlation function of heavy and light operators is given by the semiclassical saddle-point expression. In general there will be more than one solution, and the right-hand side will include a sum, or an integral, over the solutions.

D. Three-Point Functions of Light Primary Operators

In a conformal field theory, the three-point function of primary operators is determined up to a constant by conformal invariance, with the function C(α_1, α_2, α_3) specifying the structure constants of Liouville field theory. We now consider the case where all three operators are light, and therefore we need to examine the fixed-area correlation function. The relevant solution of the fixed-area equation of motion is the sphere metric of area A, equation (58). We have to integrate over all solutions related to this one by conformal mappings, as follows from the conformal invariance of the problem. According to Liouville's theorem, all conformal mappings on a domain of R^d for d > 2 are compositions of translations, inversions, dilations and orthogonal transformations, i.e. they are higher-dimensional Möbius transformations. We describe these transformations using 2 × 2 matrices with entries in the Clifford algebra C_{d−1} = Cℓ_{0,d−1}(R), as detailed in appendix A.
Using this formalism we can write the general Möbius transformation of the saddle point in terms of parameters α, β, γ, δ ∈ Γ_{d−1} ∪ {0}, with αβ*, γδ*, γ*α, δ*β ∈ R^d and αδ* − βγ* = 1. To integrate over these solutions we would also need the Jacobian for changing the integral over φ_A to an integral over α, β, γ, δ. We explicitly include O(b^0) terms in the action, but represent the functional determinant and the Jacobian as a b-dependent factor A(b) whose logarithm is at most O(log b) [19]. Note that it is independent of the σ_i, since neither effect is affected by light-operator insertions. We can now write the fixed-area correlation function as an integral with respect to dµ(α, β, γ, δ), the invariant measure on SL(2, C_{d−1}). Evaluating the Coulomb gas action for the solution (58), the constant S_Bulk is given by regularizing (i.e., taking the finite part of) the large-R limit of an integral built from the radial part of the Laplacian, △_r = r^{−(d−1)} ∂_r (r^{d−1} ∂_r); what remains is the integral Î(σ_1, σ_2, σ_3) = ∫ dµ(α, β, γ, δ) [⋯], which is invariant under the SU(2, C_{d−1}) subgroup of SL(2, C_{d−1}). We can therefore parametrize SL(2, C_{d−1}) elements by a unitary matrix times an upper-triangular matrix (i.e. a composition of a dilatation and a translation). In changing coordinates from α, β, γ, δ to λ, w we get a b-independent Jacobian, which we can absorb by replacing A(b) with a new factor Ã(b). The resulting integral can be evaluated explicitly, and we finally find the semiclassical result for the structure constants of light operators. This is the higher-dimensional generalization of the two-dimensional DOZZ formula for the three-point function of light primary operators [13,14].

V. DISCUSSION AND OUTLOOK

In this work we initiated the study of a higher-dimensional generalization of the two-dimensional Liouville CFT that consists of a log-correlated scalar field with a background Q-curvature charge and an exponential Liouville-type potential. There are many interesting classical and quantum aspects of these theories that deserve further study. Classically, the solutions to the field equations describe manifolds with a constant negative Q-curvature. The space of solutions to this mathematical problem is not known in more than two dimensions, and it corresponds to a higher-dimensional uniformization-like problem. Quantum mechanically, it is quite possible that these theories can be solved once the three-point function is calculated exactly. We calculated it for the special case of three light primary operators. Performing the integral expression (43), (44) in general and deriving the exact formula for the three-point functions is an interesting and challenging problem. Another interesting direction is to study these theories in Lorentzian signature; being non-unitary, higher-derivative theories, it is not clear whether they can be defined and analyzed consistently in such a signature. Much research on adding boundaries to CFTs has revealed a rather rich structure in diverse dimensions. It would be interesting to study the higher-dimensional Liouville CFTs in the presence of boundaries: one needs to formulate consistent boundary conditions and study issues like boundary operators, correlation functions and boundary anomalies [17]. One can also consider the odd-dimensional bulk case with an even-dimensional boundary, where the GJMS-type operators are pseudo-differential [11].
One can add fermionic degrees of freedom to the higher-dimensional Liouville CFTs and, as in the two-dimensional case, construct and study supersymmetric versions of them. Finally, it would be interesting to explore the possible role of the higher-dimensional Liouville CFTs in the study of higher-dimensional random geometry, as well as a possible generalization of the AGT relation [7].

Appendix A

Two involutions act on the Clifford algebra C_n: 1. *: reverse the order of the generators in each product; it determines an anti-automorphism of C_n: (ab)* = b*a*. 2. ′: replace each i_k with −i_k; it determines an automorphism of C_n: (ab)′ = a′b′. Their composition ā = (a*)′ = (a′)* defines the Clifford conjugation. Clifford numbers of the form x = x₀ + x₁i₁ + ⋯ + x_n i_n are called vectors. They form an (n + 1)-dimensional subspace which we identify with R^{n+1}. For vectors x* = x and thus x′ = x̄. Further, xx̄ = x̄x = |x|², so non-zero vectors are invertible with x^{−1} = x̄/|x|². Thus products of non-zero vectors are invertible and form a multiplicative group, the Clifford group Γ_n.

Clifford Matrices

We now consider Clifford matrices T = (a b; c d) with elements in Γ_n ∪ {0}. Each such matrix is identified with the map T: R̂^{n+1} → R̂^{n+1} (where R̂^{n+1} = R^{n+1} ∪ {∞} is the one-point compactification of R^{n+1}) given by: T(x) = (ax + b)(cx + d)^{−1}. The group GL(2, C_n) of invertible 2 × 2 Clifford matrices consists of those T with a, b, c, d ∈ Γ_n ∪ {0}, ab*, cd*, c*a, d*b ∈ R^{n+1}, and pseudo-determinant Δ(T) = ad* − bc* ∈ R \ {0}. The inverse matrix is given by T^{−1} = Δ(T)^{−1}(d* −b*; −c* a*). Each T ∈ GL(2, C_n) is a composition of: • Translation: x → x + b, b ∈ R^{n+1} • Inversion: x → x^{−1} = x̄/|x|² • Special orthogonal: x → axa*, a ∈ Γ_n, |a| = 1 • Reflection: x → x̄. Further, T is orientation preserving if and only if Δ(T) > 0.
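To make the algebraic rules above concrete, here is a small numerical sketch (our own illustration, not code from the paper) that represents elements of C_n on basis blades and checks the identities xx̄ = |x|² and x^{−1} = x̄/|x|² for vectors:

```python
from itertools import product

def mul_blades(a, b):
    """Multiply basis blades (sorted tuples of generator indices),
    using i_k^2 = -1 and i_j i_k = -i_k i_j for j != k."""
    sign, seq = 1, list(a) + list(b)
    changed = True
    while changed:                      # bubble sort, one sign flip per swap
        changed = False
        for i in range(len(seq) - 1):
            if seq[i] > seq[i + 1]:
                seq[i], seq[i + 1] = seq[i + 1], seq[i]
                sign, changed = -sign, True
    out, i = [], 0
    while i < len(seq):                 # cancel i_k i_k = -1
        if i + 1 < len(seq) and seq[i] == seq[i + 1]:
            sign, i = -sign, i + 2
        else:
            out.append(seq[i]); i += 1
    return sign, tuple(out)

def mul(x, y):
    """Multiply Clifford numbers given as dicts {blade: coefficient}."""
    z = {}
    for (ba, ca), (bb, cb) in product(x.items(), y.items()):
        s, blade = mul_blades(ba, bb)
        z[blade] = z.get(blade, 0.0) + s * ca * cb
    return z

def vector(coords):
    """x = x_0 + x_1 i_1 + ... + x_n i_n as a Clifford number."""
    x = {(): coords[0]}
    x.update({(k,): c for k, c in enumerate(coords[1:], start=1)})
    return x

def conj(x):
    """Clifford conjugation: a grade-r blade picks up (-1)^(r(r+1)/2)."""
    return {b: c * (-1) ** (len(b) * (len(b) + 1) // 2) for b, c in x.items()}

x = vector([1.0, 2.0, 3.0])             # a vector in R^3, inside C_2
xxbar = mul(x, conj(x))
print({b: c for b, c in xxbar.items() if abs(c) > 1e-12})  # {(): 14.0} == |x|^2
```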
5,446
2018-04-05T00:00:00.000
[ "Physics" ]
A vast resource of allelic expression data spanning human tissues Allele expression (AE) analysis robustly measures cis-regulatory effects. Here, we present and demonstrate the utility of a vast AE resource generated from the GTEx v8 release, containing 15,253 samples spanning 54 human tissues for a total of 431 million measurements of AE at the SNP level and 153 million measurements at the haplotype level. In addition, we develop an extension of our tool phASER that allows effect sizes of cis-regulatory variants to be estimated using haplotype-level AE data. This AE resource is the largest to date, and we are able to make haplotype-level data publicly available. We anticipate that the availability of this resource will enable future studies of regulatory variation across human tissues. Background Allelic expression (AE, also known as allele-specific expression or ASE) analysis is a powerful technique that can be used to measure the expression of gene alleles relative to one another within single individuals. This makes it well suited to measure cis-acting regulatory variation using imbalance between alleles in heterozygous individuals (Fig. 1a) [1]. AE analysis can capture both common cis-regulatory variation, for example, expression quantitative trait loci (eQTLs), and rare regulatory variation [2]. It can also be used to measure allele-specific epigenetic effects such as parent of origin imprinting [3]. In practice, AE analysis uses RNA-seq reads that overlap heterozygous single nucleotide polymorphisms (SNPs), where the SNP can be used to assign the read to an allele. These heterozygous SNPs capture the cumulative effects of cis-regulatory variation acting on each allele. Allelic imbalance occurs when the two alleles of a gene are expressed at different levels. The magnitude of the imbalance can be quantified by allelic fold change (aFC) [1], and the statistical significance of the imbalance can be evaluated using binomial-based statistics to account for the count-based nature of the data [4]. In some cases, these effects can be caused by the SNPs being used to measure AE themselves, for example, stop-gain variants that cause nonsense-mediated decay (NMD) [5], but often they simply capture the effects of other cis-acting variation. Traditionally, a single SNP has been used to measure AE, by taking the SNP with the highest coverage per gene. However, as a result of improvements in genome phasing, data can be aggregated across SNPs to produce estimates of AE at the haplotype level (Fig. 1b). We have previously developed a tool, phASER, which does this systematically, in a way that uses the information contained within reads to improve phasing, while preventing double counting of reads across SNPs to improve the quality of data generated [6]. In this work, we present and demonstrate the utility of an AE resource generated using the Genotype-Tissue Expression (GTEx) version 8 data release comprising RNA-seq data from 54 tissues and 838 individuals, for a total of 15,253 samples [7]. We generated both SNP-level and haplotype-level AE data. While the SNP-level data is available to approved users through dbGaP, the haplotype-level data does not contain identifiable information, and we were thus able to make it publicly available on the GTEx portal. Finally, we developed an addition to phASER, called phASER-POP, which makes it easy to generate population-scale, haplotype-level AE data and calculate effect sizes for regulatory variants.
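As a toy illustration of the aggregation idea in Fig. 1b (a deliberately simplified sketch, not the phASER implementation, which additionally uses read-backed phasing and prevents double counting of reads spanning multiple SNPs), phased per-SNP counts can be folded into haplotype-level counts as follows; the phase strings and counts are hypothetical:

```python
def haplotype_counts(snp_records):
    """snp_records: list of (phase, ref_count, alt_count) for one gene.
    phase "0|1" means the ref allele lies on haplotype A, "1|0" the reverse."""
    hap_a, hap_b = 0, 0
    for phase, ref_count, alt_count in snp_records:
        if phase == "0|1":      # hapA carries ref, hapB carries alt
            hap_a += ref_count
            hap_b += alt_count
        elif phase == "1|0":    # hapA carries alt, hapB carries ref
            hap_a += alt_count
            hap_b += ref_count
    return hap_a, hap_b

# Three phased heterozygous SNPs in one gene; the imbalance adds up coherently.
print(haplotype_counts([("0|1", 12, 5), ("1|0", 4, 11), ("0|1", 9, 3)]))  # (32, 12)
```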
Results and discussion

Both SNP-level and haplotype-level AE data were generated for each GTEx sample using current best practices, both with and without using WASP filtering [8] to reduce the mapping bias that is sometimes present in AE analysis, resulting in 4 data types per sample (Additional file 1: Fig. S1, "Data generation and availability" section in the "Methods" section). Across samples, this produced over 431 million measurements of AE at the SNP level and 153 million measurements of AE at the haplotype level. To demonstrate the ability of these data to robustly capture cis-regulatory effects and also benchmark the four data types relative to one another, we estimated eQTL effect sizes from AE data using allelic fold change (aFC) across the 49 tissues where eQTLs were mapped, and compared them to those derived from eQTL mapping [7]. The effect sizes were quantified using aFC for both AE and eQTL data.

Fig. 1 Capturing cis-regulatory effects with phased allelic expression data. a The presence of a heterozygous cis-regulatory variant or eQTL produces an expression-level imbalance between the two haplotypes, which can be detected using allelic expression analysis. b RNA-seq reads overlapping heterozygous SNPs in expressed regions of the gene can be used to quantify the expression of alleles relative to one another. These SNPs can be phased with each other and their counts aggregated to produce haplotype-level expression estimates, or haplotypic counts. The effects of regulatory variants can be captured by phasing them with haplotypic counts. c Spearman correlation across the 49 GTEx v8 tissues where eQTLs were called between eQTL effect size (allelic fold change, aFC) and effect size measured using AE data from the single SNP with the highest coverage (SNP AE) or haplotype-level AE generated with phASER (phASER). Results are shown with and without allelic mapping bias correction from WASP. In each tissue, only a single top significant (FDR < 5%) eQTL per gene was analyzed. p values were calculated using a Wilcoxon paired signed-rank test. For boxplots, bottom whisker: Q1 − 1.5*interquartile range (IQR), top whisker: Q3 + 1.5*IQR, box: IQR, and center: median.

To make it easier to generate aFC estimates for regulatory variants from phASER data, we developed a new add-on to the software package, phASER-POP, eliminating the need for custom scripts (Additional file 1: Fig. S2). Briefly, phASER-POP integrates genotype calls and haplotype-level AE data across individuals and phases each regulatory variant of interest (e.g., eQTL) in each individual with their AE data. It then calculates statistics, including aFC per sample, and its median across samples for individuals that are heterozygous for the variant. At the sample level, aFC is a net expression fold difference between the two haplotypes in an individual that is affected by all heterozygous regulatory variants, including other eQTLs and rare regulatory variation, and thus can differ from the expected aFC derived from eQTL mapping. However, the median aFC across all individuals in a population that are heterozygous for a given eQTL can be used as a robust estimate of its effect size [1]. The software is described in full detail in the "Methods" section. To characterize the GTEx AE resource, we first compared aFC estimates calculated for GTEx eQTLs between SNP- and haplotype-level AE data.
We found high correlations between AE and eQTL estimates, with a median Spearman rho of 0.80 across tissues for SNP-level data and 0.83 for haplotype-level data generated by phASER (Fig. 1c). Haplotype-level correlations were significantly higher than SNP-level correlations (p = 3.55e−15, Wilcoxon paired signed-rank test) while at the same time producing estimates for a median of 20% more eQTLs (Additional file 1: Fig. S3). Based on this, we recommend using the haplotype-level data for most downstream analyses, as it yields more data of a higher quality. However, there are some circumstances when the SNP-level data should be used. For example, when analyzing allelic splicing, the haplotype-level data is not appropriate because it spans the entire transcript, whereas only SNPs within the exon(s) or intron(s) of interest should be analyzed. Furthermore, when analyzing transcribed variants with post-transcriptional effects on gene expression, such as stop-gain or splice variants, SNP-level AE data from the variant of interest is more straightforward to analyze.

Next, we assessed the effect of read mapping bias correction on allelic expression analysis by comparing eQTL and AE effect size correlations with and without WASP filtering. WASP filtering significantly improved correlations for both SNP- (p = 2.49e−13, median improvement 1.22%) and haplotype- (p = 3.55e−15, median improvement 1.28%) level data (Fig. 1c). Since WASP works by removing, rather than correcting, reads with mapping bias, we compared the number of eQTLs for which an aFC estimate could be calculated and found only a small 3.5% reduction (Additional file 1: Fig. S3d). We therefore recommend using WASP-filtered data for most downstream analyses. This is particularly important if the aim is to identify strong signals of allelic imbalance, which can often be false positives due to mapping bias. We encourage users of the resource to assess the impact of WASP filtering for their own use case, so have included the unfiltered AE data for comparison.

Next, we characterized the WASP-filtered AE data. In the GTEx RNA-seq data, at a minimum coverage of 8 reads, samples had a median of 7,607 genes with AE data at the SNP level and 10,043 genes at the haplotype level, and this dropped as a function of increasing coverage thresholds (Additional file 1: Fig. S4). With the same coverage threshold, at the tissue level and excluding tissues with small sample sizes (N < 70) where eQTL mapping was not performed, there were a median of 18,042 genes with a median of 128 samples per gene using haplotype-level AE data, rendering the data set well-powered to detect cis-regulatory effects (Fig. 2a). The median number of samples with AE data per gene was largely dependent on tissue sample size, ranging from 39 for kidney cortex (N = 73 samples) to 321 for thyroid (N = 574 samples). The number of genes with AE data was correlated with both sample size (rho = 0.41) and the number of expressed genes (rho = 0.82), with the two cell lines having the lowest number of genes with AE data (LCLs = 15,804, fibroblasts = 16,526) and the testis having the largest number of genes with AE data (21,952) despite an intermediate sample size of 322 (Additional file 2: Table S1). This was likely driven by the number of expressed genes in testis, which was the highest across all tissues.

Fig. 2 The GTEx v8 haplotype-level allelic expression resource. a Number of genes per tissue with haplotype-level AE data (AE genes) in at least 1 individual versus the median number of samples with data per gene. b Percentage of AE genes with significant allelic imbalance (binomial test, gene-level FDR < 5%) in at least n samples per gene using all samples (blue) or excluding samples heterozygous for any top (FDR < 5%) or independent GTEx eQTL (permutation p < 1e−4) (red). Faded points are values for individual tissues, and solid points are the median across tissues. Proportions above data points indicate the reduction in percentage of AE genes with imbalance after removing eQTL heterozygotes. A full summary of these statistics across tissues and sample thresholds is available in Additional file 3: Table S2. c The effect of the number of heterozygous variants in or proximal to gene promoters (< 10 kb upstream of TSS) on allelic imbalance stratified by minor allele frequency. Plotted values are effect estimates and 95% confidence intervals (see the "Promoter variant effect modeling" section in the "Methods" section).

Finally, we sought to demonstrate the pervasiveness of cis-regulatory effects that can be captured with this resource. We found that even strong regulatory effects, where one allele was expressed at ≥ 2x the level of the other allele, are widely present, even for protein-coding genes, with 53% of protein-coding genes showing such an effect in at least one tissue and at least 50 individuals (Additional file 1: Fig. S5). Considering all genes, we found that a median of 10,183 genes (or a median of 56% of those genes with AE data) per tissue exhibited significant allelic imbalance (binomial test, FDR < 5% at the gene level) in at least one sample, indicating the widespread nature of cis-regulatory effects (Fig. 2b). Removing individuals that were heterozygous for any known GTEx eQTL ("GTEx eQTLs" section in the "Methods" section) only resulted in a median reduction of 7.5% in the number of genes with significant imbalance in at least one sample, demonstrating the potential of this resource to identify additional regulatory effects, including rare regulatory effects, that are not captured in eQTL analysis. To further demonstrate this potential, we modeled allelic imbalance as a function of the minor allele frequency and the number of heterozygous variants found in or proximal to gene promoters (< 10 kb upstream of TSS). As expected, we found that rare variants tended to have larger effects on allelic imbalance than common variants, with the rarest class of variants analyzed (MAF < 0.005 in GTEx) having the strongest effects (Fig. 2c).
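A minimal sketch of the imbalance test used for Fig. 2b might look as follows; it tests haplotype counts against a 50/50 null (the resource itself uses ref-bias-adjusted null ratios at the SNP level) and applies Benjamini-Hochberg correction. The counts are hypothetical:

```python
from scipy.stats import binomtest
from statsmodels.stats.multitest import multipletests

def imbalance_pvalues(hap_counts, null_ratio=0.5):
    """hap_counts: list of (hap_a, hap_b) tuples, one per gene/sample."""
    return [binomtest(a, a + b, p=null_ratio).pvalue for a, b in hap_counts]

counts = [(32, 12), (20, 21), (50, 14), (8, 9)]       # hypothetical
pvals = imbalance_pvalues(counts)
reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for (a, b), p, q, sig in zip(counts, pvals, qvals, reject):
    print(f"{a}|{b}  p={p:.3g}  q={q:.3g}  imbalanced={sig}")
```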
For example, they have been used to replicate sex-, population-, and cell type-specific eQTLs [7, 14] as well as capture the effects of rare regulatory variants [15] and study cis-domains of lncRNA regulation [16]. By making haplotype-level AE data publicly available for the first time, we anticipate that this resource will find similarly broad use as the eQTL data it complements.

Data generation and availability

Paired-end 75-bp Illumina RNA-seq reads were aligned to hg38 using STAR [17] v2.5.3a (without allelic mapping bias correction) and v2.6.0c (with allelic mapping bias correction) in two-pass mode, with allelic mapping bias correction enabled via the --waspOutputMode option, which replicates the approach in van de Geijn et al. [8] (the full settings of the alignment pipeline are described at https://github.com/broadinstitute/gtex-pipeline). All data was generated both with and without using this feature, indicated by "_WASP_" in the file names. SNP-level AE data was generated using the GATK ASEReadCounter tool v3.8-0-ge9d806836 with the following settings: -U ALLOW_N_CIGAR_READS -minDepth 1 --allow_potentially_misencoded_quality_scores --minMappingQuality 255 --minBaseQuality 10. Raw SNP-level data, consisting of the GATK tool output, were aggregated per subject across all tissues. Raw autosomal SNP-level data, for SNPs with ≥ 8 reads, was annotated by assigning heterozygous SNPs to genes using Gencode v26, calculating the expected null ratio for each combination of ref/alt allele [4], calculating a binomial p value by comparing to the expected null ratio, calculating a multiple hypothesis corrected p value per tissue using Benjamini-Hochberg, and flagging sites that overlapped low-mappability regions (75-mer mappability < 1 based on 75-mer alignments with up to two mismatches, based on the pipeline for ENCODE tracks and available on the GTEx portal), showed mapping bias in simulation [18], or had no more reads supporting two alleles than would be expected from sequencing noise alone, indicating potential genotyping errors (FDR < 1%, see Castel et al. [4] for the description of the test). The genotype warning test cannot distinguish between strong allelic expression and a true genotyping error and as a result should not be used when studying phenomena with expected mono-allelic expression (e.g., imprinting). Haplotype-level data was generated using phASER v1.0.1 [6]. phASER was run in read-backed phasing mode with whole genome sequencing reads, using whole genome sequencing genotype calls that were population-phased with Shapeit v2.837 [19]. phASER was run using all available RNA-seq libraries per subject. RNA-seq read-backed phased genotype data are provided (filename: phASER_GTEx_v8_merged.vcf.gz). Haplotypic expression was calculated using phASER Gene AE 1.2.0 and Gencode v26 gene annotations with min_haplo_maf 0.01. Haplotypic expression matrices containing all samples were generated using the "phaser_expr_matrix.py" script. This consists of a single string per sample per gene with the format "HAP_A_COUNT|HAP_B_COUNT." One matrix was generated using only haplotypes that could be genome-wide phased, such that the haplotype assignment is consistent across genes within an individual and with the phased VCF (filename: phASER_GTEx_v8_matrix.gw_phased.txt.gz).
Another was generated that does not ensure genome-wide haplotype phasing across genes, which includes more counts, but makes the haplotype assignment of A/B arbitrary and unrelated across genes within an individual or the VCF (filename: phASER_GTEx_v8_matrix.txt.gz). The full settings of the haplotype-level AE pipeline are described at https://github. com/broadinstitute/gtex-pipeline/. Unless stated otherwise, all analyses were performed using only protein-coding and lncRNA genes. Software and availability The original phASER package produced gene-level haplotypic expression per individual [6]. We developed new additions to phASER (phASER-POP) that make it easier to analyze data across many samples, as is often done with gene expression quantifications. First, we developed a new addition to the software (phaser_expr_matrix.py) that enables the aggregation of gene-level haplotypic expression measurement files across samples to produce a single haplotypic expression matrix, where each row is a gene and each column is a sample. The values consist of a single string per sample per gene in the format "HAP_A_COUNT|HAP_B_COUNT." This format is intended to facilitate downstream analyses of allelic expression. Second, we developed a tool to make it easier to estimate effect sizes of regulatory variants using phASER haplotypic expression data (phaser_cis_var.py). As input, this script takes a phASER haplotype expression matrix, a phased VCF, and a list of regulatory variants (e.g., eQTLs) to calculate effect sizes for. To improve accuracy, the read-backed phased VCFs produced by phASER should be used, but first need to be combined across individuals, which can be performed using, e.g., "bcftools merge ind1.vcf.gz ind2.vcf.gz …." Using these inputs, the tool phases each regulatory variant of interest with haplotype-level expression data in each individual. It then calculates numerous statistics, including allelic fold change (aFC) [1] per sample, and a median across samples for individuals that are heterozygous for the variant of interest. This median can be used as an estimate of regulatory variant effect size. aFC is calculated as log 2 ((eqtl_alt_allele_haplotype_count+1)/ (eqtl_ref_allele_haplotype_count+1)). The output also includes aFC estimates calculated for homozygous individuals and performs a ranksum test of absolute aFC in heterozygotes as compared to homozygotes. True regulatory variants are expected to have a significantly higher aFC in heterozygous individuals. 95% confidence intervals are included for all aFC estimates, and all underlying individual data, including haplotypic counts, are outputted. The updated phASER package code along with extensive documentation is available through GitHub at https://github.com/secastel/phaser/tree/master/phaser_pop under the GNU General Public License v3 [21]. GTEx eQTLs For comparison between eQTL effect size and allelic expression effect size, GTEx v8 top significant (FDR < 5%) eQTLs were used from 49 tissues [7]. This results in at most a single eQTL per gene in a given tissue. When quantifying the number of samples that are not heterozygous for a known eQTL but still show allelic imbalance, gene-level haplotypic expression levels were excluded for a sample if the individual was heterozygous for a top significant eQTL or a nominally significant (permutation p < 1e−4) independent eQTL in any of the 49 tissues. 
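Using the aFC formula given above, the core of the phASER-POP effect-size estimate can be sketched in a few lines (sample data are hypothetical; the real tool additionally reports confidence intervals, homozygote aFC, and a rank-sum test):

```python
import numpy as np

def eqtl_afc(samples):
    """samples: list of (genotype, ref_hap_count, alt_hap_count);
    genotype is the number of eQTL alt alleles (0, 1, or 2)."""
    per_sample = [np.log2((alt + 1) / (ref + 1))
                  for gt, ref, alt in samples if gt == 1]  # heterozygotes only
    return np.median(per_sample) if per_sample else np.nan

samples = [(1, 10, 25), (1, 8, 19), (0, 14, 15), (1, 30, 61), (2, 22, 20)]
print(f"aFC estimate: {eqtl_afc(samples):.2f} log2 fold change")
```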
Promoter variant effect modeling The effects of regulatory variants in or proximal to gene promoters were modeled using haplotype-level allelic expression data. Briefly, for each individual, all heterozygous variants within 10 kb upstream of protein-coding or lincRNA gene transcription start sites (TSS) were retrieved and the median allelic imbalance for that gene across all tissues, measured using aFC, was calculated. For each individual by gene, the number of heterozygous variants (which could potentially cause allelic imbalance) falling into each of the following minor allele frequency (MAF) bins was calculated: 0.50-0.10, 0.10-0.05, 0.05-0.01, 0.01-0.005, 0.005-0. Bins were inclusive of variants whose MAF < upper bin limit and ≥ the lower bin limit. Using data from all genes by individuals, absolute aFC was modeled with a multivariate linear model (speedglm function in R) using the number of variants in each of the MAF bins as predictors. The coefficients for each of the predictors were then plotted along with their 95% confidence intervals (confint function in R) as a measure of the effect of the number of heterozygous variants in each MAF class on allelic imbalance, with a higher coefficient indicating a stronger effect (i.e., a larger allelic imbalance). Because allele frequencies were calculated within the GTEx cohort, only individuals of predominantly European ancestry (N = 699, determined by PCA) were included in the analysis, to ensure accurate allele frequency estimates. Without this filtering, population-specific variants, whose populations are not well represented in the GTEx cohort, may have inaccurate, likely underestimated allele frequencies, which can confound the analysis. Additional file 2: Table S1: Tissue-level summary statistics for haplotype-level AE data. Table listing sample size, number of expressed genes (defined as genes with > = 0.1 TPM in at least 1 individual), number of genes with phASER data (defined as genes with > = 8 reads in at least 1 individual), median number of samples per gene with phASER data, and if the tissue was used for GTEx v8 eQTL mapping. Additional file 3: Table S2: Sample-threshold and allelic imbalance statistics for haplotype-level AE data. Table where rows are each of the 49 GTEx tissues where eQTLs were called and columns list the number of genes with haplotype-level AE data at minimum number of sample thresholds from 1 to 300 (minXXX). For example, min1 lists the number of genes that have AE data from at least 1 sample. The table has three sheets, the first (all_data) presents statistics generated using all haplotype-level AE data, the second (sig_imb_fdr05), counting only cases with significant allelic imbalance (binomial test versus 50/50, gene-level FDR < 5%), and finally (sig_imb_fdr05_no_het), counting only cases with significant imbalance where the individual is not heterozygous for any top (FDR < 5%) or independent (permutation p < 1e-4) eQTLs across any GTEx tissues for the gene.
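As a concrete illustration of the promoter variant effect model described in the Methods above, here is a sketch with simulated data that mirrors the R analysis (speedglm plus confint) using ordinary least squares; the bin labels and effect sizes are invented for the example:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
bins = ["maf_50_10", "maf_10_5", "maf_5_1", "maf_1_05", "maf_05_0"]
true_effect = np.array([0.02, 0.04, 0.06, 0.10, 0.20])   # rarer -> stronger

# Counts of heterozygous promoter variants per gene-individual pair, per MAF bin.
X = rng.poisson(lam=[3.0, 1.0, 0.8, 0.3, 0.1], size=(5000, 5)).astype(float)
abs_afc = X @ true_effect + rng.exponential(0.15, size=5000)

model = sm.OLS(abs_afc, sm.add_constant(X)).fit()
ci = model.conf_int()       # 95% confidence intervals, as with confint() in R
for name, coef, (lo, hi) in zip(bins, model.params[1:], ci[1:]):
    print(f"{name}: {coef:.3f} [{lo:.3f}, {hi:.3f}]")
```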
5,045.6
2019-10-03T00:00:00.000
[ "Biology" ]
XPS Depth Study on the Liquid Oxidation of Sn-Bi-Zn-X (Al/P) Alloy and the Effect of Al/P on the Film

X-ray photoelectron spectroscopy (XPS) was used to study the properties of liquid oxidation of Sn-Bi-Zn (SBZ) solder alloys and the effect of Al/P on the oxide film. The results showed that the oxidation film on the SBZ surface had high concentrations of both oxygen and zinc. Adding a trace amount of Al/P to SBZ alloys (SBZA/SBZP) decreased the ratio of O/M (M could be Sn, Bi, and Al/P) and changed the film compositions. Layers near the free surface of the oxidation film mostly contained Zn and Al oxides for SBZA. From the semi-quantitative analysis result, aluminum showed a surface enrichment behavior in the liquid solder, as did phosphorus and zinc. Therefore, the Al/P addition changed their stoichiometry, such as the ratio of O/M near the film surface.

Introduction

Due to the step-soldering process in electronic packaging, solder alloys joining at different temperature ranges have been in demand during the manufacturing procedure. The Sn-Ag-Cu system, notably Sn-3.0Ag-0.5Cu, has become the mainstream in surface mount technology (SMT) as a middle-temperature Pb-free solder. Accordingly, Sn-Bi alloy, represented by eutectic Sn-58Bi, has been used for thermal module connection in notebooks, ascribed to its low cost, superior wettability, and almost void-free bonding [1][2][3]. It can also reduce the damage arising from the mismatch of thermal expansion among various components in electronic assembly [1]. But there are still two issues preventing further application of Sn-Bi alloy, that is, low thermal conductivity and microstructure coarsening with a serious Bi segregation along the interface, which can greatly decrease the reliability of the solder joint [4]. Actually, decreasing the Bi content has been an effective way to adjust the thermal conductivity and melting range. As for the microstructure instability at soldering temperature, it can be suppressed by incorporating fine dispersoid particles into Sn-Bi alloy [4,5]. Among the alloying elements, Zn, Al, and P are very cheap and useful. It is reported that minor Zn doped in Pb-free solders can obviously inhibit the growth of intermetallic compounds (IMCs), restrain the growth of Cu₃Sn, and suppress the formation of Kirkendall voids during isothermal aging [6,7]. About 0.5 wt.% Zn addition to Sn-Ag-Cu-Ce solder alloy can restrain the growth of tin whiskers and strengthen the solder joints [8]. Being very active, Zn will worsen the oxidation performance of the solder alloy during and after soldering. Element alloying has been an effective way to improve the antioxidant power of Zn-containing solders [9]. Al, Ag, and In additions can improve the liquid oxidation behavior of Sn-Zn alloy [9][10][11][12]. Furthermore, Al decreases the oxidation rate when doped into Sn-Zn alloy due to the formation of an Al₂O₃ thin film on the surface, much like Bi, Ga, and P for Sn-Zn solders [12]. Since Al and P have attracted much attention for their cheapness and effectiveness, Sn-40Bi-2Zn (SBZ) and Sn-40Bi-2Zn-X (X could be Al or P, simplified as SBZA and SBZP, resp.) will be used to investigate the effect of minor Al/P addition on the oxidation performance of SBZ alloy. As regards the testing method, using XPS in the characterization of formed metal alloy surfaces is not new [13,14]. Many efforts focusing on oxide formation on iron, steel, and molten tin have been implemented [13][14][15].
Experimental Procedures

2.1. Preparation of Specimens. SBZ, SBZA, SBZP, and Sn-58Bi alloys (the latter for comparison in the visual oxidation observation) were prepared from pure Sn, Bi, Zn, Al, and P in a vacuum oven at 800 ∘C and remelted with stirring at 200 ∘C to obtain homogenized solder alloys. During the remelting process, an antioxidation agent was used to protect the liquid solder from oxidation. Subsequently, the solder alloys were aged for two weeks at room temperature for a stabilized microstructure.

2.2. Surface Observation of Solder Alloy after Oxidation. About 40 g of solder alloy was put in graphite crucibles with a diameter of 28 mm and heated up to 170 ± 5 ∘C to observe the oxidation processes during 1 h in air. The first dross was carefully scraped away from the initial liquid surface to leave a fresh surface for oxidation to start therefrom. The color changes of the liquid solder surface with increasing oxidation time were observed visually, and the final appearance of the oxide films was recorded by a digital camera.

2.3. XPS Procedure. Solder alloys were exposed at 170 ± 5 ∘C for 7 min before cooling down to achieve oxidized surfaces. Solid samples were cut from the top surface of the ingots into sheets of about 2-3 mm for the XPS tests. Because the test is sensitive to contamination, all the samples were kept carefully away from contaminants. The surface elementary and chemical analyses were carried out via XPS with an achromatic Al Kα X-ray source. After the original surface of each sample was analyzed, an argon ion beam with 0.2 A current operating at 3 keV was used to etch away a very thin layer of the solder from the oxidation surface to expose the underlying layer. Subsequently, a second surface analysis of the newly exposed oxide film was performed by XPS with the attached argon gun. Repeating this procedure, an etch rate of about 0.6 nm/s was achieved over a 2 mm × 2 mm area, in the same manner as [16].

2.4. XPS Data Analysis. The etching and measurement for each sample were repeated about 10 times, until the measured oxygen content on the surface approached zero. All the spectra were calibrated against the carbon adsorbed on the initial surface of the sample. Curve fitting for all high-resolution spectra was implemented with the XPS peak-fitting program XPSPEAK 4.1 to deconvolute and quantify the contribution of each chemical species (element association) comprising the spectra.

Results and Discussion

3.1. Observation of Liquid Solder Alloy. Liquid Sn-58Bi alloy is easily oxidized under atmospheric conditions. The freshly exposed surface remained silver-white for about 7 min before a gray-white film formed. Then, the color of the liquid solder alloy varied from gray, through blue with inhomogeneous purple, to finally dark blue with a light brownish yellow in some areas within 1 h (Figure 1(a)). Under the same air exposure condition, the Zn-containing solder alloy SBZ loses its metallic lustre after about 3 min and finally remains white with a slight blue tint. The solder surface is quite rough due to the Zn oxidation before, during, and after cooling down, as shown in Figure 1(b).
SBZP and SBZA solder alloys shine with grey or dark grey metallic lustre in the liquid state within 1 h of exposure in air. With oxidation time increasing to about 30 min, a slightly grey-white oxidation film is visible on the SBZP liquid surface, while the color of the SBZA surface appears somewhat blue. After cooling down to room temperature, the surface of the SBZP solder alloy grows rough (Figure 1(c)). In comparison, the metal surface of SBZA still keeps smooth in the solid state, as shown in Figure 1(d).

3.2. The Variation of Atomic Concentration with Etch Time. The variation in chemical composition with etch time of tin, oxygen, bismuth, zinc, and aluminium or phosphorus obtained from the XPS survey scans for the three specimens (SBZ, SBZA, and SBZP) is shown in Figure 2. The oxygen content on the outer surface decreases in the order SBZ, SBZP, SBZA, from about 70 at.% to 60 at.% and then 45 at.% (see Figure 2(a)). It indicates that the oxidation film of SBZ is much more porous. Moreover, the total thicknesses of the oxide films after oxidation at 170 ∘C for 7 min, deduced from the oxygen content, are about 156 nm for the SBZA sample and 216 nm for the SBZ and SBZP alloys. As observed in Figures 2(b) and 2(c), for tin and bismuth as the main elements, their concentrations increase with etch time as expected. The concentrations of zinc, aluminium, and phosphorus, oppositely, first increase sharply and then decrease towards the solder base. Among the three alloys, it is worth noting that the Zn concentration on the top of the oxidation film decreases in the order SBZ, SBZP, SBZA, corresponding to 40.4 at.%, 36.9 at.%, and 24.6 at.%, respectively. It indicates that a trace amount of Al/P doped in SBZ solder can protect Zn from overoxidation, as shown in Figures 2(d) and 2(e). Furthermore, Zn is somewhat enriched in the near-surface region, as well as at the outer surface, compared to the base-alloy composition. Accordingly, the contents of Al and P are a thousand times higher than the added amounts, which suggests that the two elements have a strong surface enrichment behavior in the liquid solder alloy. Thus, there are three peaks showing Zn, Al, and P at low etching times. This can be ascribed to the following three points. First, the element O, especially the adsorbed oxygen, keeps the relative contents of the other elements low; second, the chemically bound O keeps driving Zn and Al/P diffusion from the solder base to react into compounds; finally, the competition between the above two effects may result in the highest contents emerging at the sublayer of the oxidation films. The ratios of metal elements M (M being any metal element in the samples) to oxygen on the surface are about 3 : 7, 4 : 6, and 4.5 : 5.5 for SBZ, SBZP, and SBZA, respectively, increasing rapidly to 6 : 4 after etching with argon ions for 40 s for the SBZ sample and for 10 s for the latter two samples. It indicates that the oxidation of metal elements in SBZ is much more serious than that in SBZA and SBZP. As for the species of the oxides, it is hard to distinguish them only from the semiquantitative results. And there are two other issues that should also be taken into account in the analysis. Firstly, there are at least four species of elements in the sample alloys, which need further high-energy spectra analysis to confirm the chemical states of the elements. Secondly, the surface oxygen should include both the physically adsorbed oxygen and the chemically bound oxygen appearing in oxides of metal M.
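The depth calibration underlying these profiles is simple enough to sketch numerically: etch time is converted to depth with the ~0.6 nm/s rate, and the film thickness is read off where the oxygen fraction approaches zero. The profile values below are illustrative, not the measured data:

```python
etch_rate_nm_per_s = 0.6

# (etch time in s, oxygen atomic fraction) -- hypothetical depth profile
profile = [(0, 0.70), (40, 0.40), (120, 0.25), (240, 0.10), (360, 0.02)]

for t, o_frac in profile:
    depth = etch_rate_nm_per_s * t
    m_frac = 1.0 - o_frac            # everything that is not oxygen
    print(f"t={t:>3d} s  depth={depth:5.1f} nm  O:M = {o_frac:.2f}:{m_frac:.2f}")

# Thickness estimate: the first depth where O is ~zero (here 360 s -> 216 nm,
# matching the ~216 nm quoted for SBZ).
```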
The O 1s, Sn 3d, Zn 2p, and other spectra should therefore be analysed in detail, considering that Al addition can not only protect Zn from oxidation but also reduce the thickness of the oxidation film, while a trace amount of P doped into SBZ only protects the Zn from overoxidation. Therefore, the XPS spectra of elements in the SBZ and SBZA samples have been chosen for further analyses in the following sections.

3.3. Analyses for SBZ. Figure 3 shows the O 1s spectra of the SBZ sample etched for various times along the film depth direction, which are decomposed into one or two symmetric peaks using a Shirley-type background subtraction, fitting the original asymmetric peaks with Gaussian curves. The deconvoluted O 1s peaks are located at 530.6 (A) and 532.6 (B) eV for an etch time of 0 s, corresponding to the spectral contributions of O chemically bound to a metal element M and of adsorbed oxygen, respectively. Peak A might be assigned to the presence of metal oxide (O-M oxygen) [17]. Peak B has been reported to correspond to the presence of adsorbed oxygen [15]. It is obvious that the amount of adsorbed oxygen is much higher than that of O chemically bound with metal M. After etching for 60 s (Figure 3(b)), the adsorbed oxygen on the surface is very low, and it is entirely removed within 80 s, producing the symmetrical and sharp peak pattern shown in Figure 3(c). It is worthwhile to note that peak A in Figure 3(a) shifts from 530.6 eV to 530.9 eV after 60 s of etching, which might result from the stronger covalency of the Sn-O bond. The XPS patterns of the Sn 3d peaks for the SBZ sample at different depths in the oxide film are shown in Figure 4. There are two kinds of chemical states of tin: tin oxides and metallic tin. The peak position of metallic tin, Sn⁰, is located at 484.8 eV, while the peaks standing for tin oxides appear at 486-487.1 eV. Because the difference in peak positions between Sn²⁺ and Sn⁴⁺ is so small, their states cannot be distinguished from each other. With increasing etching time, the proportion of tin oxide decreases and the peak position of the Sn 3d5/2 spectrum gradually shifts from 484.5 eV (tin oxides) to 485 eV (metallic tin). Figure 5(a) displays the Zn 2p spectra, which possess two peaks corresponding to 1021.6 eV (Zn 2p3/2) and 1048.1 eV (Zn 2p1/2), respectively. One thing to be noted is that there is a small shift of the Zn 2p3/2 peak to the left after etching for 60 s. This peak shift can be ascribed to the change of the chemical state of Zn in the oxides, from ZnO to Zn metal, according to the data in Table 1.

3.4. Analyses for SBZA.
The O 1s spectra of the SBZA sample along the depth of the oxide film are shown in Figure 6. The deconvoluted O 1s peaks for the original surface are located at 531 (A′) and 532.6 (B′) eV. Based on the results in Figure 3, peak A′ can be assigned to metal oxide (O-M oxygen), and the latter, B′, may relate to the adsorbed oxygen, as in the SBZ alloy, together with the emerging Al₂O₃ [18]. However, peak B′ disappears after 20 s of etching; it should therefore mainly be the adsorbed oxygen. The binding energy of peak A′ is 0.4 eV higher than the position of peak A in the SBZ sample, which may indicate a change in the species of oxides in the outermost surface of the film. According to the atomic concentration depth profile of SBZA etched for 0 s, it is found that mainly Zn and Al are present in the outer surface of the oxidation film. And considering that O chemically bound with Zn is located at 530.7 eV and O bound with Al at 532.6 eV, the O-M oxygen should be mainly ZnO and Al₂O₃ with a little Bi and Sn inside. Peak B′, which corresponds to adsorbed oxygen (532.5 eV) [17] and Al₂O₃ [19] that could not be resolved, becomes lower and totally disappears after etching for 20 s. That is, the film of the SBZA sample is much denser than the surface film of the SBZ sample, because the time for the physically adsorbed oxygen to reach zero is 60 s for the SBZ sample, 40 s more than for the SBZA sample. It is noteworthy that peak A′ shifts from 531 eV to a higher binding energy, 531.5 eV, after 20 s of etch time, and then to 531.7 eV at an etching time of 80 s, which might also be ascribed to the stronger Sn-O bond, similar to the SBZ sample. The Sn 3d spectra along the film depth are shown in Figure 7. Similar to the results for SBZ in Figure 4, the peak position of Sn 3d5/2 on the original outer surface is located at 487.03 eV. However, after 80 s of etching, the Sn 3d peak first shifts slightly to 486.31 eV and then maintains its position until the tin oxide disappears. Both the binding energies of the outer surface and the subsurface are slightly greater than those of the SBZ oxide film. The possible reason may lie in a strong electron-withdrawing group or compound around the tin atoms, which changed the chemical environment as a result of the trace amount of Al added. After removing 36 nm from the surface by etching, no oxide is detected by XPS. The peak position of metallic tin is located at 484.86 eV, as shown in Figure 7(d). Figure 8 shows the XPS spectra of the Al 2p depth profiles of the oxide film in the SBZA sample. The XPS peak height (from which the Al content can be estimated) in the subsurface of the oxidation film is higher than that at the outer surface. The detected segregation layer of Al is very thin, distributed over a depth of about 12-48 nm from the outer surface. There is no visible Al peak detected beyond this region. From Figure 8, it is found that an almost invisible peak around 75.1 eV appears at the outer film surface. After etching for 20 s, a new peak located at 72.2 eV arises, corresponding to metallic Al. This implies an enrichment behavior of the element Al at the subsurface, where the Al atoms can be kept in a lower chemical state, Al⁰. When the etching time increases to 80 s, both kinds of peaks shift towards higher energy. It suggests that there is a new chemical state of Al in the oxide. Based on [18], this peak position for Al 2p approaches that of Al₂O₃, with a 0.9-1.9 eV deviation, which might result from Sn and Bi dissolved in the oxidation film. However, further research is necessary to clarify the mechanism of Al in the protective oxide film.
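For readers who want to reproduce this kind of deconvolution without XPSPEAK, the following is a simplified sketch of the procedure described above, an iterative Shirley background subtraction followed by a two-Gaussian fit, applied to a synthetic O 1s-like spectrum (all numbers illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def shirley(y, n_iter=20):
    """Simplified iterative Shirley background for a spectrum ordered in energy,
    with the higher background at the low-energy end."""
    bg = np.full_like(y, y[-1], dtype=float)
    for _ in range(n_iter):
        area = np.cumsum(y - bg)                     # running peak area
        bg = y[-1] + (y[0] - y[-1]) * (area[-1] - area) / area[-1]
    return bg

def two_gauss(x, a1, mu1, s1, a2, mu2, s2):
    return (a1 * np.exp(-0.5 * ((x - mu1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((x - mu2) / s2) ** 2))

# Synthetic spectrum: peaks near 531.0 and 532.6 eV on a step-like background.
x = np.linspace(526, 538, 400)
y = (two_gauss(x, 900, 531.0, 0.6, 400, 532.6, 0.7)
     + 200 / (1 + np.exp(x - 531.5))
     + np.random.default_rng(1).normal(0, 5, x.size))

signal = y - shirley(y)
popt, _ = curve_fit(two_gauss, x, signal,
                    p0=[800, 530.8, 0.5, 300, 532.5, 0.5])
print(f"fitted peak positions: {popt[1]:.2f} eV and {popt[4]:.2f} eV")
```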
Combined with the result of the semiquantitative analysis (see Figure 2), it is reasonable to propose a triple-layer oxidation film structure. The outer surface should mainly comprise physically adsorbed O and ZnO, about 6 nm in thickness. The subsurface should be ZnO and Al₂O₃ doped with Sn⁴⁺ and Bi³⁺, extending to a depth of about 30-84 nm, judging from where the Al content falls to almost zero and metallic Sn becomes the dominant form of Sn; this could act as a protective film preventing further oxidation of the molten solder alloy under atmospheric conditions. And the inner remaining thickness of the film should be (Sn, Bi, Zn)O as a transition layer.

Conclusion

The properties of the oxide film that forms on liquid SBZ solder and the effect of Al/P addition on the film properties were investigated by XPS depth profiling at 170 ∘C for 7 min:

(1) It was found that the thickness of the SBZ oxide was approximately 216 nm. Further XPS analysis revealed that the oxide film close to the substrate was mainly composed of Sn²⁺ oxides, while it consisted mostly of Zn²⁺ oxides near the free surface. The amounts of Sn⁴⁺ and Bi³⁺ oxides generated near the film surface were found to be similar across the three types of samples.

(2) With 0.005 wt.% Al addition to SBZ creating the SBZA alloy, a triple-layer oxidation film structure can be observed. The outer surface should mainly comprise physically adsorbed O and ZnO. Oxides of Zn and Al are mainly responsible for the formation of the subsurface of the SBZA alloy. The oxides of Sn and Bi forming the innermost layer also contribute a little.

Figure 8: Al 2p spectra of the SBZA sample exposed at 170 ∘C for 7 min, after etching for (a) various times from 0 s to 140 s, (b) 0 s (initial surface), (c) 20 s, and (d) 80 s.

Table 1: XPS binding energy values for solder elements and their oxidation products, obtained from the literature (Al, O data from NIST Standard Reference Database 20, v. 3.5).
4,333
2015-07-08T00:00:00.000
[ "Materials Science" ]
The Role of Synthetic A Priori Propositions in the Development of Kant's Account of Practical Autonomy: A Critique of Watkins' Reading of Kant's Prolegomena

Abstract

I draw attention to a 12-page Vorarbeit to Kant's Prolegomena from the so-called Scheffner-Nachlaß and argue that the parallel Kant draws there between the possibility of theoretical and practical synthetic a priori propositions provides important insight into the development of his account of practical autonomy in the Groundwork. Based on a brief sketch of the role synthetic a priori propositions play in the development of Kant's critical philosophy, I conclude that for Kant the objective validity of any science depends on the objective validity of a number of synthetic a priori propositions.

After comparing and contrasting Kant's accounts of theoretical and practical legislation, Watkins asks:

Do the substantive philosophical parallels between practical autonomy and theoretical legislation that Kant envisions justify the historical claim that he was led by the development of the account of theoretical legislation in the first edition of the first Critique and the Prolegomena to develop his account of practical autonomy in the Groundwork? ... [T]here is no conclusive textual evidence that would resolve the issue. (p. )

In what follows I will provide textual evidence for the fact that Kant 'was led by the development of the account of theoretical legislation in the first edition of the first Critique and the Prolegomena to develop his account of practical autonomy in the Groundwork'. I will first cite a passage from the so-called Scheffner-Nachlaß, which includes a 12-page Vorarbeit to the Prolegomena, and second comment on its relevance in the present context. Here is the crucial passage:

Now, there is the question: how is a categorical imperative possible? Whoever solves this problem has found the real principle of morals. The reviewer will probably take up this task as little as he does the important problem of transcendental philosophy, which has a striking similarity with that of morals. I will before long unveil the solution, but one must not get worried about idealism and categories here. (VA-Prol, : ; my translation)

I think this is 'textual evidence' that at least in part resolves the question about the development of Kant's 'account of practical autonomy in the Groundwork'. But to see this as 'textual evidence' requires acknowledging the centrality of the question, how are synthetic judgements a priori possible? For the 'striking similarity' between the important problems of transcendental philosophy and morals is based on the possibility of synthetic a priori judgements, or so I will argue in this essay. Watkins does not consider the bridging role that synthetic a priori propositions play in relation to theoretical and practical philosophy after the first edition of the Critique. This seems surprising, since Kant takes the most fundamental laws in mathematics, natural science (including biology), metaphysics (including transcendental philosophy), ethics, political philosophy and aesthetics to be synthetic a priori propositions. Moreover, Kant does not think that this is self-evident but argues for his view explicitly and extensively in each of these contexts. Historically, the prominent role of synthetic a priori propositions in Kant's philosophy, highlighted by the Marburg School of Neo-Kantians, has earned a bad reputation.
One reason for this seems to be that Kant invokes them to articulate non-empirical foundations of mathematics and natural science, and so some naturalists might take them for relics of the 'Cartesian dream' (Quine : ) and thus reject them. Another reason seems to be that Kant invokes them to critique traditional metaphysics, i.e. rational psychology, cosmology and theology, which is why readers of Kant's transcendental idealism with more realist inclinations might deflate their role. In this paper I will neither propose a non-naturalist reading of Kant's philosophy nor defend a transcendental idealist versus realist reading of his transcendental idealism. Instead, I will limit my analysis to the role synthetic a priori propositions play in the development of Kant's critical philosophy, including his metaphysics of morals. The Scheffner passage illuminates that, based on an important expansion of the problem of synthetic a priori propositions, just a short time after the first Critique's publication Kant realized that the question, how can pure reason be practical?, or how can pure reason determine the will?, required closer attention. The passage highlights Kant's key insight that the categorical imperative, or the principle of autonomy, as he repeatedly juxtaposes these notions in the Groundwork, is a synthetic a priori proposition and therefore requires a non-empirical deduction just as much as the categories that gave rise to theoretical synthetic a priori propositions. By the same token, this passage also serves as a clue why Kant wrote a second Critique, which was not on his radar in . The search for an answer to the question, how are synthetic judgements a priori possible?, can be seen as guiding the development of Kant's critical philosophy from the early s. In the first edition of the Critique he speaks of the problem of synthetic a priori judgements as a 'certain mystery ... the elucidation of which alone can make progress in the boundless field of pure cognition of the understanding secure and reliable: namely, to uncover the ground of the possibility of synthetic a priori judgments with appropriate generality' (A). But it is not until the Prolegomena that 'synthetic propositions a priori ... alone constitute its [metaphysics'] aim', and form 'the essential content of metaphysics' (P, : ), including the metaphysics of morals. Up until the first edition of the Critique, Kant's concern with synthetic a priori judgements seems to be directed equally at the negative argument about such judgements transgressing the bounds of sense and the positive argument about such judgements constituting the possibility of experience. Remarkably, there is a sharp decline in Kant's interest in a critique of traditional metaphysics after the A edition. Apart from some remarks (cf. ÜE, : -), he seems to consider the matter closed, and focuses instead on the constructive aspects of that future metaphysics that will be able to come forward as science, as the Prolegomena title reads in full. In the second Critique, Kant consciously or unconsciously but, in any case, incorrectly recalls the structure of the first Critique (KpV, : ). He even gives the impression that there is no chapter on the 'logic of illusion' (A/B), called Dialectic. In the second Critique, there is indeed a chapter called the Dialectic of Pure Practical Reason. But this is only marginally concerned with a 'logic of illusion', i.e. the refutation of some misguided claims in moral philosophy.
It is more concerned with the substantiation of positive claims about pure practical reason and its postulates. Altogether, it seems that Kant, with the first publication of the first Critique, sees no further need to systematically critique traditional metaphysics, and instead argues for his own views on a metaphysics of nature and a metaphysics of morals, both of which must be built on synthetic principles a priori. In the B edition, Kant replaces the paragraph about the 'certain mystery' (A) quoted above with sections V and VI of the new introduction, which are in large part taken over from the Prolegomena. Here he argues for the claim that 'synthetic a priori judgments are contained as principles in all theoretical sciences of reason' (B), and states that '[t]he real problem of pure reason is now contained in the question: How are synthetic judgments a priori possible?' (B). This is the point where the Scheffner passage comes in. The reviewer hinted at in this passage is the author of the so-called Göttingen Review of the first Critique. What Kant here calls the 'important problem of transcendental philosophy' is, 'expressed with scholastic precision, the exact problem on which everything hinges ...: How are synthetic propositions a priori possible?' (P, : ), as he first formulates this in the Prolegomena. Kant addresses the first question of the Scheffner passage in the Groundwork under the title How is a Categorical Imperative Possible? and argues that the 'categorical ought represents a synthetic proposition a priori, since to my will affected by sensible desires there is added the idea of the same will but belonging to the world of the understanding, a will pure and practical of itself' (G, : ). Hence, the explanation of the categorical imperative, one of the central problems of Kant's critical metaphysics of morals, including a second Critique, relates to the problem of synthetic a priori judgements. The parallel Kant sees between the theoretical and the practical problem of synthetic a priori propositions becomes even more striking when he comments on the How-is-a-categorical-imperative-possible passage as follows:

[T]his is roughly like the way in which concepts of the understanding, which by themselves signify nothing but lawful form in general, are added to intuitions of the world of sense and thereby make possible synthetic propositions a priori on which all cognition of a nature rests. (G, : )

So, the way in which 'the idea of freedom', or 'the idea of ... a will pure and practical of itself' (G, : ), gives rise to the moral law in imperatival form, i.e. a synthetic a priori proposition by which reason necessitates certain maxims, is 'roughly like' the way in which the categories give rise to principles by which the understanding determines appearances, or, as Kant puts it in the Prolegomena, laws that the understanding prescribes to nature (P, : ). Without going into the non-trivial details of this analogy, it is on this expanded basis of synthetic a priori propositions that we can make full sense of the dichotomy between the phenomenal and the noumenal that Kant first brought up in the Inaugural Dissertation but which now serves to capture an entirely novel configuration. 'The distinction of all objects in general into phenomena and noumena' (B) no longer merely refers to the possibility of the former and the impossibility of the latter (noumena in a positive sense).
Rather, on the basis of synthetic a priori propositions, cognition of the sensible world is now assigned to theoretical philosophy, whereas cognition of the intelligible world is assigned to moral philosophy. At the same time, and this is of great importance, it is only synthetic a priori propositions that require a critique. For it is only these propositions whose objective validity is problematic and thus requires a non-empirical deduction. This becomes evident from section III of the Groundwork, where the second Critique comes into focus. Under the title 'Autonomy of the Will as the Supreme Principle of Morality', Kant relates the notions of autonomy, the categorical imperative and the synthetic a priori to a prospective Critique of Practical Reason. First, he states, 'Autonomy of the will is the property of the will by which it is a law to itself' (G, : ). Next, he gives the Formula of Autonomy, with the following comment: 'That this practical rule is an imperative ... cannot be proved by mere analysis of the concepts to be found in it, because it is a synthetic proposition; one would have to go beyond cognition of objects to a critique of the subject, that is, of pure practical reason' (G, : ). Finally, he emphasizes the connection between the synthetic a priori character of the categorical imperative and the second Critique in the 'Division of All Possible Principles of Morality': 'That morality is no phantom ... requires a possible synthetic use of pure practical reason, which use, however, we cannot venture upon without prefacing it by a critique of this rational faculty itself' (G, : ). In conclusion: the 'How-is-the-synthetic-a-priori-possible?' question becomes the central question with the Scheffner-Nachlaß and the Prolegomena, and Kant literally transfers this insight to the B introduction of the Critique. He seems to be confident in putting this question front and centre with respect to the relevant sciences because he now sees that the answer to it is the key to the foundations of any science. In the Prolegomena he, for obvious reasons, confines himself to the speculative part of metaphysics, while the Groundwork makes the 'How-is-the-synthetic-a-priori-possible?' question explicit in practical philosophy, for the first time in a published form. So, why does the Scheffner-Nachlaß change the critical game? It marks Kant's important insight that the objective validity of any science depends on the objective validity of a number of laws, i.e. synthetic a priori propositions. This interpretation helps us understand the sense in which 'the substantive philosophical parallels between practical autonomy and theoretical legislation' (Watkins : ) are grounded in Kant's discovery of the synthetic a priori character of any law, theoretical and practical.

Notes

'Nun ist die Frage wie ist ein categorischer Imperativ möglich wer diese Aufgabe auflöset der hat das echte princip der Moral gefunden. Der Rec: wird sich vermutlich eben so wenig daran wagen wie an das wichtige Problem der Transscendental philos. welches mit jenem der Moral eine auffallende Aehnlichkeit hat. Ich werde die Auflösung in Kurzem darlegen aber man darf hier nicht Idealismus und categorien besorgen.' On the implications of this note, see Pollok (: xi-xiv).
Fuzzy Dark Matter and the Dark Dimension We propose a new dark matter contender within the context of the so-called ``dark dimension'', an innovative 5-dimensional construct that has a compact space with characteristic length-scale in the micron range. The new dark matter candidate is the radion, a bulk scalar field whose quintessence-like potential drives an inflationary phase described by a 5-dimensional de Sitter (or approximate) solution of Einstein equations. We show that the radion could be ultralight and thereby serve as a fuzzy dark matter candidate. We advocate a simple cosmological production mechanism bringing into play unstable Kaluza-Klein graviton towers which are fueled by the decay of the inflaton.

I. INTRODUCTION

While there are various lines of evidence for the existence of dark matter in the universe, the nature of the dark matter particle remains a challenging dilemma at the interface of astrophysics, cosmology, and particle physics [1]. There is a large variety of dark matter candidates with masses spanning many orders of magnitude. Of particular interest here, fuzzy dark matter (FDM) is made up of non-interacting ultralight bosonic particles that exhibit coherent dynamics and a wave-like behaviour on galactic scales [2]. On sub-galactic length scales, FDM brings to light a distinctive phenomenology alternative to that of cold dark matter (CDM). However, FDM predictions are indistinguishable from those of CDM on large scales, and so FDM benefits from the remarkable success of ΛCDM cosmology. The main parameter regulating the two FDM regimes is the particle's mass, with a range that spans three decades of energy, 10^−24 ≲ m/eV ≲ 10^−22. Particles of such tiny mass, with the typical velocities v found in haloes hosting Milky Way-sized galaxies, acquire a very long de Broglie wavelength, delivering the wave-like behavior at galactic scales. FDM can populate the galactic haloes with large occupation numbers and behave as self-gravitating dark matter waves. This engenders a pressure-like effect on macroscopic scales which catalyzes a flat core at the center of galaxies, with a relatively marked transition to a less dense outer region that follows the typical CDM-like distribution.

Before proceeding, we take note of a serious challenge for FDM models. The criticism centers on numerical simulations to accommodate Lyman-α forest data, which provide bounds on the fraction of FDM [3][4][5]. In response, it was noted that these bounds strongly depend on the modeling of the intergalactic medium [6]. More recently, new constraints on FDM models have emerged, e.g. (i) from inferences of the low-mass end of the subhalo mass function [7], (ii) from observations of ultrafaint dwarf galaxies [8], and (iii) from superradiance of FDM, which would cause the supermassive black hole at the center of M87 to spin down excessively [9]. Constraints (i) and (ii) also depend on simulations and are subject to a completely different set of assumptions and systematic uncertainties. The Event Horizon Telescope measurement of the spin of M87* excludes FDM masses in the range 10^−21 ≲ m/eV ≲ 10^−20. Whichever point of view one may find more convincing, it seems most conservative at this point to depend on experiment (if possible) rather than numerical simulations to resolve the issue.
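To make the galactic-scale claim concrete, the following back-of-the-envelope sketch (our addition, not from the paper) evaluates the de Broglie wavelength λ = 2πℏ/(mv); the halo velocity of 200 km/s is an illustrative assumption.

```python
# Sketch: de Broglie wavelength of an ultralight particle in a galactic halo.
# Assumption: v ~ 200 km/s, typical of Milky Way-sized haloes.
import math

hbar = 6.582e-16          # eV*s
c = 3.0e5                 # speed of light [km/s]
kpc_km = 3.086e16         # kilometers per kiloparsec

def de_broglie_kpc(m_eV, v_km_s=200.0):
    """lambda = 2*pi*hbar / (m*v), returned in kiloparsecs."""
    lam_km = 2.0 * math.pi * hbar * c / (m_eV * (v_km_s / c))
    return lam_km / kpc_km

for m in (1e-24, 1e-22):
    print(f"m = {m:.0e} eV  ->  lambda ~ {de_broglie_kpc(m):.2f} kpc")
# ~ 60 kpc and ~ 0.6 kpc: wave-like behaviour on (sub-)galactic scales.
```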
In this paper we show that the dark dimension scenario [10] embraces a well motivated FDM candidate. The layout is as follows: in Sec. II we outline the basic setting of the dark dimension scenario and identify the radion as a FDM candidate, in Sec. III we discuss the process of radion production to estimate the corresponding relic abundance, and conclusions are given in Sec. IV.

II. THE GOOD, THE BAD, AND THE FUZZY

The Swampland program seeks to understand which are the "good" low-energy effective field theories (EFTs) that can couple to gravity consistently (e.g. the landscape of superstring theory vacua) and distinguish them from the "bad" ones that cannot [11]. In theory space, the frontier discerning the good theories from those downgraded to the swampland is drawn by a family of conjectures classifying the properties that an EFT should call for/avoid to enable a consistent completion into quantum gravity. These conjectures provide a bridge from quantum gravity to astrophysics, cosmology, and particle physics [12][13][14].

For example, the distance conjecture (DC) forecasts the appearance of infinite towers of states that become exponentially light and trigger the collapse of the EFT at infinite distance limits in moduli space [15]. Connected to the DC is the anti-de Sitter (AdS) distance conjecture, which correlates the dark energy density to the mass scale m characterizing the infinite tower of states, m ∼ |Λ|^α, as the negative AdS vacuum energy Λ → 0, with α a positive constant of O(1) [16]. Besides, under the hypothesis that this scaling behavior holds in dS (or quasi-dS) space, an unbounded number of massless modes also pop up in the limit Λ → 0. As demonstrated in [10], the generalization of the AdS-DC to dS space could help elucidate the radiative stability of the cosmological hierarchy Λ/M_p^4 ∼ 10^−120, because it connects the size of the compact space R_⊥ to the dark energy scale Λ, where the proportionality factor is estimated to be within the range 10^−4 < λ < 10^−1. Actually, (2) derives from constraints by theory and experiment. On the one hand, since the associated Kaluza-Klein (KK) tower contains massive spin-two bosons, the Higuchi bound [17] provides an absolute upper limit on α, whereas explicit string calculations of the vacuum energy (see e.g. [18][19][20][21]) yield a lower bound on α. All in all, the theoretical constraints lead to 1/4 ≤ α ≤ 1/2; see [22] for a recent discussion. On the other hand, experimental arguments (e.g. constraints on deviations from Newton's gravitational inverse-square law [23] and neutron star heating [24]) lead to the conclusion encapsulated in (2); namely, that there is one extra dimension of radius R_⊥ in the micron range, and that the lower bound α = 1/4 is basically saturated [10]. This in turn implies that the KK tower of the new (dark) dimension opens up at the mass scale m_KK ∼ 1/R_⊥. Within this set-up, the 5-dimensional Planck scale (or species scale, where gravity becomes strong [25,26]) is given by M_* ∼ m_KK^{1/3} M_p^{2/3}. Note that for m_KK ∼ 1 eV we have M_* ∼ 10^9 GeV, and therefore the species scale is outside the reach of collider experiments [27].
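As a quick consistency check of the numbers just quoted, the sketch below (ours) evaluates the species-scale relation M_* ∼ (m_KK M_p²)^(1/3) for m_KK ∼ 1 eV, using the reduced Planck mass.

```python
# Sketch: species scale from the Kaluza-Klein scale of the dark dimension.
M_p = 2.4e18                        # reduced Planck mass [GeV]

def species_scale_GeV(m_KK_eV):
    m_KK = m_KK_eV * 1e-9           # eV -> GeV
    return (m_KK * M_p**2) ** (1.0 / 3.0)

print(f"M_* ~ {species_scale_GeV(1.0):.1e} GeV")   # ~ 2e9 GeV for m_KK ~ 1 eV
```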
The dark dimension stores a top-notch phenomenology [28][29][30][31][32][33][34][35][36][37][38]. For example, it was noted in [32] that the universal coupling of the SM fields to the massive spin-2 KK excitations of the graviton in the dark dimension provides a dark matter candidate. Complementary to the dark gravitons, it was observed in [29] that primordial black holes with Schwarzschild radius smaller than a micron could also be good dark matter candidates, possibly even with an interesting close relation to the dark gravitons [31]. Next, in line with our stated plan, we propose a new dark matter candidate within this framework.

It is unnatural to entertain that the size of the dark dimension would remain fixed right at the species scale during the evolution of the Universe. To accommodate this hierarchy we need to inflate the size of the dark dimension. To see how this works explicitly, we consider that the inflationary phase can be described by a 5-dimensional dS (or approximate) solution of Einstein equations [33]. All dimensions (compact and non-compact) expand exponentially in terms of the 5-dimensional proper time. This implies that when inflation starts the radius R of the compact space is small and the 4-dimensional Planck mass is of order the 5-dimensional Planck scale M_*. However, when inflation ends the radius of the compact space is of micron-scale size and the 4-dimensional Planck scale is much bigger. A straightforward calculation shows that the compact space requires 42 e-folds to expand from the fundamental length 1/M_* to the micron size. We can interpret the solution in terms of 4-dimensional fields using 4-dimensional Planck units from the relation (3), which amounts to going to the 4-dimensional Einstein frame. Namely, the higher-dimensional metric in M_* units takes a conformally flat form in which η is the conformal time, a_5 = 1/(Hη), H is the Hubble parameter, x⃗ denotes the 3 uncompactified dimensions, and r_0 ∼ 1 is the radius of the dark dimension y at the beginning of the inflationary phase. The 4-dimensional decomposition in the Einstein frame is characterized by ds²_4 = a²_4(−dη² + dx⃗²). Comparing (4) and (5) we arrive at a_4 = a_5 √R. After inflation of N e-folds, during which the scale factor expanded as a_5 = e^N, the radius becomes R = e^N. This implies that if R expands by N e-folds, then the 3-dimensional space would expand by 3N/2 e-folds as a result of a uniform 5-dimensional inflation [33]. We want r_0 to grow fast up to the micron scale. Altogether, the 3-dimensional space has expanded by about 60 e-folds to solve the horizon problem, while connecting this particular solution to the generation of a mesoscopic size dimension. A consistent model requires the size of the dark dimension to be stabilized at the end of inflation; an investigation along this line is already presented in [39].

The 5-dimensional action of uniform dS (or approximate) inflation, built from the higher-dimensional curvature scalar R^(5) and the 5-dimensional cosmological constant Λ_5 at the end of inflation, leads to a runaway potential for the radion R. The quintessence-like potential of the radion is seen explicitly upon dimensional reduction to 4 dimensions. The resulting 4-dimensional action in the Einstein frame involves the Ricci scalar R^(4) and the vacuum expectation value (vev) ⟨R⟩ of R after the end of inflation.
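The 42 e-folds quoted above are easy to verify numerically; the sketch below (ours) takes M_* ∼ 10^9 GeV from the previous section as the working assumption.

```python
# Sketch: e-folds needed to inflate the compact radius from 1/M_* to a micron.
import math

GeV_inv_in_m = 1.9733e-16           # one inverse GeV expressed in meters
M_star = 1.0e9                      # GeV (assumption, from m_KK ~ 1 eV)

R_start = GeV_inv_in_m / M_star     # fundamental length 1/M_* [m]
R_end = 1.0e-6                      # one micron [m]

N = math.log(R_end / R_start)
print(f"N ~ {N:.0f} e-folds")       # ~ 43, close to the ~42 quoted in the text
print(f"3N/2 ~ {1.5 * N:.0f} e-folds of 3-dimensional expansion")  # ~ 65, in the
# ballpark of the ~60 e-folds needed to solve the horizon problem
```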
Because the radion field R is not canonically normalized, we define ϕ = √(3/2) ln(R/⟨R⟩) in 4-dimensional Planck units. In terms of the normalized field ϕ, the scalar potential takes the advertised quintessence-like form of a falling exponential. Exponential potentials of the form e^−αϕ are constrained by cosmological and astrophysical observations. The existing data lead to an upper bound α ≲ 0.8 [40]. Curiously, the upper limit on the allowed value of α is the one predicted by (8).¹ Even though the potential in (8) could be used to explain the current acceleration of the universe, herein we consider the possibility that the radion is stabilized by additional terms in the potential. The mass of the radion depends on the functional form of the various additional terms V_i(ϕ) that allow minimization of the potential, i.e. with V^tot_ϕ(0) = 0, where V_ϕ ≡ dV/dϕ. Adding only the term originating from the Casimir energy [42] leads to a lower bound m ∼ √Λ_4/M_p ∼ 10^−30 eV [39]. An important aspect of this model is that the coupling of the radion to SM fields must be suppressed to avoid conflicts with limits on long-range forces. Herein, we assume that the radion has a localized kinetic term (through, e.g., an expectation value of a brane field) that suppresses the coupling to matter. Alternatively, in the absence of a scalar potential, the 5-dimensional radion is equivalent to a Brans-Dicke scalar with parameter ω = −4/3. It has been argued that an appropriate modification of such theories due to bulk quantum corrections can lead to a logarithmic scale (time) dependence of ω that suppresses the radion coupling to matter, consistently with the experimental limits [43]. An investigation along these lines is obviously important to be done.

III. PRODUCTION OF THE FUZZY RADION AND ITS RELIC ABUNDANCE

The issue that remains to be assessed is whether there is a mechanism which allows enough radion production to accommodate the relic dark matter density. An interesting possibility emerges if the inflaton has roughly equal couplings to brane and bulk fields, such that its decay produces the SM fields while also populating the KK towers. We begin by considering a tower of equally spaced dark gravitons, indexed by an integer l, with mass m_l = l·m_KK. We assume that the cosmic evolution of the dark sector is mostly driven by "dark-to-dark" decay processes that regulate the decay of KK gravitons within the dark tower. The proposed decay model then provides a particular realization of the dynamical dark matter model [44]. The intra-KK decays in the bulk require a spontaneous breakdown of translational invariance in the compact space, such that the 5-dimensional momenta are not conserved. An explicit realization of this idea, in which the KK modes acquire a nonzero vev ⟨φ_l⟩, has been given in [45]. Following [46], we further assume transitions by instanton-induced tunneling dynamics associated with such vacuum towers. The effect of the instanton processes is to accelerate the cascade dynamics to collapse into the radion.
Bearing this in mind, we calculate the decay of a given massive KK graviton into the radion ϕ and a lighter KK graviton in the presence of an expectation value for the bulk scalar φ that breaks momentum conservation. Following [45] we postulate the existence of a coupling (11) between the trace T(x, y) of the energy-momentum tensor of the bulk theory and the 5-dimensional graviton h_AB, which comes from the expansion of the 5-dimensional metric around flat space; in (11), λ is a dimensionless coupling and C_ABCDEF is a constant tensor. Next, we expand all 5-dimensional fields in terms of their 4-dimensional KK modes. After integration over dy, (11) can be recast in terms of these modes, where we have made use of (3) and considered a factor of 1/√(2πR_⊥) for each KK decomposition of the four fields and a factor of 2πR_⊥ from the integration over y. Now, following [45] we assume that φ_l takes a vev which is independent of l. The total decay width of a KK graviton of mass m_l = l·m_KK is then given by (15) [27], with ⟨φ_(l−l′)⟩ = ⟨φ⟩. Substituting into (15) our fiducial value m_KK ∼ 10 eV while taking ⟨φ⟩ ∼ M_*, as entertained in [45], we obtain a total decay width Γ^l_tot ∼ 5 × 10^12 s^−1, where we have set λ ∼ 1 and m_l ≫ m_KK. If we instead adopt ⟨φ⟩ ∼ 5 × 10^−4 M_*, we obtain Γ^l_tot ∼ 10^6 s^−1, which implies that the energy the inflaton deposited in the KK tower ends up in the radion well before the QCD phase transition (with characteristic temperature ∼ 150 MeV and age ∼ 20 µs). Altogether, we conclude that even for ⟨φ⟩ ≪ M_* the energy the inflaton deposited in the KK tower collapses entirely into the radion well before the earliest observationally verified landmark (viz., big bang nucleosynthesis, with starting age of roughly 180 s).

Qualitatively, radion cosmology resembles that of ultralight axion-like particles [47]. Namely, the radion equation of motion is d²ϕ/dt² + 3H dϕ/dt + V^tot_ϕ = 0 (16), where H is the Hubble parameter. We assume that ϕ is around the minimum of the potential at the origin, such that the total potential can be expanded around its minimum as V^tot ∼ (mϕ)²/2 + Λ_4, and so (16) can be rewritten as d²ϕ/dt² + 3H dϕ/dt + m²ϕ = 0 (17). At very early times, when m < 3H, the radion field is overdamped and frozen at its initial value by Hubble friction. During this epoch the equation of state is w_ϕ = −1 and the radion behaves as a sub-dominant cosmological constant. Once the Universe expands to the point where m ∼ 3H, the driving force overcomes the friction and the field begins to slowly roll. Finally, when m > 3H, the field executes underdamped oscillations. The equation of state oscillates around w_ϕ = 0 and the energy density scales as CDM. A visual representation of the evolution of ϕ and w_ϕ is shown in Fig. 1. We now turn to estimating the energy density ρ_ϕ of the radion field required to accommodate the observed relic density, if the CDM evolution is to duplicate that of ΛCDM after the epoch of matter-radiation equality. To this end we first reexamine the evolution of the radiation energy density, which can be conveniently expressed as ρ_R(T) = (π²/30) N(T) T⁴ (18), where N(T) = Σ_B g_B + (7/8) Σ_F g_F is the number of effective degrees of freedom, g_B(F) is the total number of boson (fermion) degrees of freedom, the sums run over all boson (fermion) states with m_B(F) ≪ T, and the factor of 7/8 is due to the difference between the Fermi and Bose integrals.
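Before turning to the expansion rate, the following toy integration (ours, not the paper's) of eq. (17) in a radiation-dominated background, where H = 1/(2t), illustrates the frozen-then-oscillating behaviour just summarized.

```python
# Sketch: eq. (17) with H = 1/(2t); w_phi goes from -1 (frozen) to ~0 (CDM-like).
import numpy as np

m = 1.0                             # work in units where m = 1
t = np.linspace(0.01, 60.0, 200_000)
dt = t[1] - t[0]

phi, dphi = 1.0, 0.0                # field frozen at phi_i by Hubble friction
w = []
for ti in t:
    H = 1.0 / (2.0 * ti)
    dphi += (-3.0 * H * dphi - m**2 * phi) * dt   # semi-implicit Euler step
    phi += dphi * dt
    rho = 0.5 * dphi**2 + 0.5 * (m * phi)**2      # energy density
    p = 0.5 * dphi**2 - 0.5 * (m * phi)**2        # pressure
    w.append(p / rho)

w = np.array(w)
print("early <w_phi> ~", round(w[:1000].mean(), 2))     # ~ -1.0 while m < 3H
print("late  <w_phi> ~", round(w[-50_000:].mean(), 2))  # ~  0.0 once m > 3H
```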
The expansion rate as a function of the temperature in the plasma is given by H ≃ (π²N/90)^{1/2} T²/M_p (19). By inspection of (19) we can immediately see that for m ∼ 3H, only photons and neutrinos contribute to the sum in (18), yielding N = 7.25. This corresponds to a temperature T_osc ∼ 86 eV, for which the total energy density of radiation (18) is ρ_R(T_osc) ∼ 10^8 eV^4. Now, let ρ_ϕ(T_osc) be the background energy density of the radion field at T_osc. As the universe expands, the ratio of dark matter to radiation grows as 1/T, and in ΛCDM cosmology the two are supposed to become equal at the temperature T_MR ∼ 1 eV of matter-radiation equality. This implies that ρ_ϕ(T_osc)/ρ_R(T_osc) ∼ T_MR/T_osc, which leads to ρ_ϕ(T_osc) ∼ 10^6 eV^4. In other words, if the density of the radion field were about 10^6 eV^4, then today's radion abundance would easily accommodate the observed dark matter density [48], i.e., ρ_ϕ,today ∼ ρ_DM ∼ 1.26 keV/cm³. We note that ρ_ϕ(T_osc) should be equal to the value of the potential (above Λ_4) at the constant value ϕ_i that is the initial condition, i.e. V^tot(ϕ_i) ∼ 10^6 eV^4. Although the initial value of the radion field is a free parameter of the model, it is subject to the constraint ϕ_i/M_p ≪ 1, so that V^tot_ϕ ∼ m²ϕ and the expansion in (17) is valid.

In closing, we note that when oscillations start, before matter-radiation equality, the radion is non-relativistic, and therefore ∆N_eff (the number of "equivalent" light neutrino species in units of the density of a single Weyl neutrino [49]) stays unaffected at the earliest observationally verified landmarks (viz. big bang nucleosynthesis and the cosmic microwave background). As a consequence, our model remains consistent with the bounds derived in [50,51].

IV. CONCLUSIONS

We have introduced a new dark matter contender within the context of the dark dimension. The dramatis persona is the radion, a bulk scalar field whose quintessence-like potential drives an inflationary phase described by a 5-dimensional de Sitter (or approximate) solution of Einstein equations. We have shown that within this set-up the radion could be ultralight and thereby serve as a fuzzy dark matter candidate. We have put forward a simple cosmological production mechanism bringing into play unstable KK graviton towers which are fueled via inflaton decay.

We end with an observation. The coherent oscillation of the fuzzy radion in galactic haloes leads to pressure perturbations oscillating at twice its Compton frequency, ω = 2m [52]. These oscillations induce fluctuations of the gravitational potential at frequency f ≡ ω/(2π) ≃ 4.8 × 10^−9 Hz (m/10^−23 eV) (21), and can give rise to distinctive profiles in the travel time of radio beams emitted from pulsars, which have been monitored for decades in Pulsar Timing Array (PTA) experiments [53].

FIG. 1: Evolution of various quantities in the exact solution for the background evolution of an ALP in a radiation-dominated universe (p = 1/2). Dimensionful quantities have arbitrary normalization; vertical dashed lines mark the condition defining a_osc.
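The arithmetic behind the numbers in Sec. III can be reproduced in a few lines (ours). The radion mass is not quoted explicitly in that section; m ∼ 10^−23 eV is our assumption, chosen to land near the stated T_osc and the normalization of (21).

```python
# Hedged arithmetic behind Sec. III: T_osc, densities, and the PTA frequency.
import math

M_p = 2.4e27            # reduced Planck mass [eV]
N_dof = 7.25            # photons + neutrinos: 2 + (7/8)*6
hbar = 6.582e-16        # eV*s

coeff = math.sqrt(math.pi**2 / 90.0 * N_dof)    # H = coeff * T^2 / M_p
m = 1.0e-23                                     # radion mass [eV] (assumption)

T_osc = math.sqrt(m * M_p / (3.0 * coeff))      # oscillations start when m = 3H
rho_R = (math.pi**2 / 30.0) * N_dof * T_osc**4
rho_phi = rho_R * (1.0 / T_osc)                 # rho_phi/rho_R grows as 1/T up to T_MR ~ 1 eV

f_pta = 2.0 * m / (2.0 * math.pi * hbar)        # f = omega/(2*pi), omega = 2m
print(f"T_osc ~ {T_osc:.0f} eV")                # ~ 95 eV, near the quoted ~86 eV
print(f"rho_R ~ {rho_R:.0e} eV^4, rho_phi ~ {rho_phi:.0e} eV^4")  # ~1e8 and ~1e6
print(f"f ~ {f_pta:.1e} Hz")                    # ~ 4.8e-9 Hz, matching (21)
```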
An Intelligent Content Discovery Technique for Health Portal Content Management Background: Continuous content management of health information portals is vital for their sustainability and widespread acceptance. The knowledge and experience of a domain expert is essential for content management in the health domain. The rate of generation of online health resources is exponential, and thereby manual examination for relevance to a specific topic and audience is a formidable challenge for domain experts. Intelligent content discovery for effective content management is a less researched topic. An existing expert-endorsed content repository can provide the necessary leverage to automatically identify relevant resources and evaluate qualitative metrics. Objective: This paper reports on the design research towards an intelligent technique for automated content discovery and ranking for health information portals. The proposed technique aims to improve the efficiency of the current, mostly manual process of portal content management by utilising an existing expert-endorsed content repository as a supporting base and a benchmark to evaluate the suitability of new content. Methods: A model for content management was established based on a field study of potential users. The proposed technique is integral to this content management model and executes in several phases (ie, query construction, content search, text analytics, and fuzzy multi-criteria ranking). The construction of multi-dimensional search queries with input from WordNet, the use of multi-word and single-word terms as representative semantics for text analytics, and the use of fuzzy multi-criteria ranking for subjective evaluation of quality metrics are original contributions reported in this paper. Results: The feasibility of the proposed technique was examined with experiments conducted on an actual health information portal, the BCKOnline portal. Both intermediary and final results generated by the technique are presented in the paper, and these help to establish the benefits of the technique and its contribution towards effective content management. Conclusions: The prevalence of large numbers of online health resources is a key obstacle for domain experts involved in content management of health information portals and websites. The proposed technique has proven successful at search and identification of resources and the measurement of their relevance. It can be used to support the domain expert in content management and thereby ensure the health portal is up-to-date and current. (JMIR Med Inform 2014;2(1):e7) doi: 10.2196/medinform.2671

Background

The Internet has become a key medium for audiences seeking health information resources [1]; important contributors are health information portals. Content management (CM) in health information portals covers a broad spectrum of functions that surround the creation, discovery, distribution, consumption, and maintenance of content. A mixture of cyclic and acyclic execution of these functions is evident in both research and industrial applications. Large organizations usually follow the full cycle from content creation to maintenance, whereas specific applications focus on the advancement of a limited number of functions. Each function has its own challenges, with added complexity introduced by the context of the application. CM is a widely published topic, with research conducted in knowledge management [2], Internet research [3], and information retrieval [4].
The focus of research in CM is largely influenced by its context. This context varies from enterprise-level management to the management of basic website content. At the enterprise level, recent advances include the ECM3 model [5], which aims to address CM challenges by introducing stages of maturity for all enterprise documents and unstructured content. The Web content maturity model proposed by Forrester Research [6] attempts to address the challenges facing an organization's Web content. It consists of 4 phases: basic, tactical, enterprise, and engagement. The focus gradually broadens through these 4 phases, starting with the basic focus of making enterprise content available online and, in the final phase, expanding to providing an online channel to achieve organizational goals. The Content Management Bible [7] defines CM as composed of 3 phases: the first is the creation or collection of content; the second is managing storage and retrieval, versioning over time, multiple languages, etc.; and the third involves publication and delivery of the content. Content discovery plays an important part in CM as a quality-intensive function that also determines the level of acceptance by a target audience. For instance, low-quality and irrelevant content that fails to gain attention would limit the usefulness of the entire CM process. The significance of content discovery is also evident through its contribution to a broad spectrum of technologies, including portals (enterprise, information, and community), wikis, e-commerce, and social media. Domain expertise is integral to content discovery. The domain expert needs to be proficient in both the subject area and the process of acquiring content relevant to a well-defined audience. A domain expert would maintain a high degree of emphasis on the quality of content as well as the level of personalization. Quality is generally identified in terms of 4 factors: relevance, usefulness, reliability, and timeliness [8]. Personalization addresses the diverse interests, needs, and expectations of a target audience composed of several subgroups [9]. Domain experts involved in content discovery for health information portals are confronted with an exponential growth in online content. Although access to most content is simplified by the availability of search engines, the discovery of relevant, high-quality content that is personalized to suit the information needs of a target audience remains a challenge. In this paper, we propose an intelligent content discovery technique to address this challenge. This paper follows the design science research process to solve this important real-world problem by designing a solution (an information technology artefact) in the form of an innovative automated content discovery and ranking approach for health information portals [10]. The groundwork of the technique was reported in a previous publication [11]. The technique is based on the appropriation of an existing expert-endorsed content base as a benchmark to evaluate new content with similar features and offer the new content for inclusion in the portal repository. This semi-automated technique augments the manual process of content discovery, thus addressing inefficiencies, saving human effort, and potentially reducing human error in the face of the increasing availability of online health information. As stated, content discovery is relevant to a wide spectrum of technologies and application areas.
This paper explores content discovery in the context of smart health information portals (SHIPs).

Smart Health Information Portals

An information portal, in general, is a gateway to a diverse collection of information on a specific domain of interest. It attempts to aggregate information from multiple sources and present it in a useful form to targeted groups of users [12]. Advances in information systems, coupled with the wide availability of diverse interfaces to the Internet, have led to the adoption of smart technology for the development of portals. Within this context, it is pertinent to formally define a SHIP as the provision of smart technology and techniques to enhance the core capabilities of CM, content delivery, and collaboration for online health information provision [11]. The authors identify that it is not sufficient to define a SHIP exclusively by its exhibiting computational intelligence features, for example, learning, reasoning, and memory. Sustainability of SHIP operation within organizational settings is crucial for its long-term viability. Hence, the issue of maintenance support becomes one of the deciding factors in the level of intelligence of a SHIP's operation. Breast Cancer Knowledge Online [13] and Heart Health Online [14] are examples of SHIPs researched and developed at the Faculty of Information Technology, Monash University, to address the health and medical information requirements of individuals associated with breast cancer, and mental health associated with heart conditions, including patients, caregivers, family, and friends of those affected. The delivery of user-sensitive, relevant, timely, and accurate health information to the various user groups was the focus throughout the various phases of the projects. These SHIPs implement several novel research outcomes, for example, resource description quality criteria modelling [15], user-centric portal design [16], automated quality assessment [8], and a decision support systems perspective on portals [17]. Reported experience from the development of these SHIPs clearly demonstrated the value of continuous engagement with, and a high degree of reliance on, user groups to identify, categorize, and describe the type of information required by relevant individuals. The resource intensity in terms of time and the scarcity of relevant expertise were also highlighted by the researchers involved in these projects [17][18][19]. These studies reinforce the need for intelligent support for SHIP CM. Automated content discovery, content summarization [20], dynamic ranking, and user annotations and feedback [21] are some of the enhancements to CM which could assist in SHIP CM. Content delivery is enhanced with user profiling, geographical filtering, mobile interfaces, and device-independent content delivery. Online messaging, social networking, and discussion forums are enablers for smart collaboration. Among these features, assurance of quality of information delivery is by far the most sought after by users, and the most resource intensive from the organizational setup point of view.

Content Management Model

The CM model represents the external entities of CM and their interactions in the formulation and management of personalized content. Informed by the experience with the BCKOnline and Heart Health portal research [19], this model is a conceptualization of the fact that the audience of SHIP users has distinct characteristics and contexts, which potentially affect their information needs.
The resources for a SHIP can be aligned with a domain ontology, which classifies them against the major concepts that define such a domain. For example, official publications from medical journals are usually classified by a set of keywords, which the audience is likely to use to search and retrieve these publications. A set of such keywords or subject terms can be considered as part of a domain ontology. The completeness or relevance of such an ontology can be problematic, especially when it comes to the search for relevant user-centered information [18]. It is up to the domain experts to reach consensus when deciding which terms are most suited for the ontology and content discovery. However, these issues are outside the scope of this particular paper. For this research we assume that there is a trusted and appropriate domain ontology constructed for resource classifications (eg, in BCKOnline, a combination of Medical Subject Headings [MeSH], the BreastCare Victoria Glossary, the BCKOnline Disease Trajectory, and BCKOnline keywords were used as encoding schemas for the subject metadata element [22]). The role of domain experts in classifying potential resources against the needs of the target audience becomes essential for identifying the terminology best suited to, and understandable by, the target audience. At the generic level, the target audience, potential content, a domain ontology, and domain expertise are the external entities that are fused together to generate personalized content. This formulation is further illustrated in Figure 1a. It is useful to formally define the entities and their interactions. The target audience comprises subgroups of users with similar characteristics and thus similar information needs. Let A = {a_0, a_1, …, a_n} be the target audience comprising all subgroups. Let D = {d_0, d_1, …, d_m} be the set of all content that is able to address the information needs of the target audience. A domain ontology formalizes the concept hierarchy of knowledge for a specific domain, and it can be generally represented as a set of topics, T = {t_0, t_1, …, t_p}. The information requirements for audience A are determined using the Cartesian product of A and T. Let R be this Cartesian product, R = A × T. Actual information requirements could very well be a subset of R, because all terms may not be applicable to all of A. Domain expertise transforms the information requirements R into actual content D by determining subsets of D that address each element of R. Let this transformation be E = {e_0, e_1, …, e_x}, where e_0 = {a_0t_0, (d_0, d_1, …, d_m)} comprises an information requirement and a set of matched content elements. The transformation E represents the CM model, because it captures all entities and their relationships. It can also be depicted as a matrix (Figure 1b). The CM model possesses certain properties that make it robust and flexible to changes. Over time, it is likely that A, T, and D will expand or contract to reflect developments in health practices. Matrix E is time-variant and thus can be altered easily to reflect these changes. The challenge and opportunity in developing a sustainable CM model lies in designing the transformation E as a semi-automated, expert-driven procedure using intelligent technologies. The following section elaborates on this technique.

Overview

The CM model underlies the formulation of the proposed technique.
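A toy rendering (ours, with hypothetical audience subgroups, ontology terms, and resource identifiers) of the CM model just defined: requirements R = A × T, and the expert transformation E mapping each requirement to matched content.

```python
# Sketch of the CM model: R = A x T, and E as a requirement -> content mapping.
from itertools import product

A = ["newly_diagnosed_low_knowledge", "recurrent_high_knowledge"]  # audience subgroups
T = ["palliative care", "treatment options"]                       # ontology terms
D = ["d0", "d1", "d2", "d3"]                                       # endorsed resources

R = list(product(A, T))           # information requirements (Cartesian product)

E = {                             # expert-assigned matches; in practice the
    ("newly_diagnosed_low_knowledge", "treatment options"): ["d0", "d2"],
    ("recurrent_high_knowledge", "palliative care"): ["d1", "d3"],
}                                 # populated pairs may be a strict subset of R

for r in R:
    print(r, "->", E.get(r, []))
```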
The technique extracts from the CM model semantics that are useful for constructing queries that discover new content, as well as semantics that are used to measure the relevance of new content. Query construction introduces context-specific information to the final query, which is then distributed to search engines. The results are amalgamated, followed by the analysis of the textual content of both new and existing resources. In the content selection phase, each item is ranked based on several factors of quality and presented to the domain expert for further perusal and possible inclusion in the content repository. Figure 2 illustrates the components of the technique.

Query Construction and Content Search

Each query is based on several specific and generic dimensions. The specific dimensions are sourced from metadata found in the first element of each term in the CM matrix (Figure 1b). The element a_x t_y denotes the audience subgrouping and the term (or topic) from the domain ontology. The generic dimensions serve the purpose of introducing the context/background to a search. These can range from high-level domain terms to synonyms indicative of the specific dimensions. Figure 3 illustrates this further. Both specific dimensions are well defined by the domain expert and thereby translate easily into query construction. The audience dimension will contain information about the subgroups found within. Age, sex, marital status, occupational status, and level of knowledge of the domain are some examples. The domain ontology contains the key terms and phrases that define the information needs of the audience. The generic dimension of synonyms introduces further diversity to the query construction process, with related terms for the two specific dimensions. The widely used lexical database WordNet [23] is used to extract synonyms with semantic relationships. WordNet is a lexical database for the English language. It is made up of two parts: sets of synonyms (called synsets) and the semantic relations between these sets. The semantic relations are useful to identify terms that have a common ancestor and thus can be linked to each other. For instance, wellness and well-being are terms similar in meaning to health but positioned at different levels in WordNet. Query construction will generate a set of queries Q = {q_1, q_2, …, q_n} representing the information needs expressed in the CM model. Query construction and content search are recurrent phases, in which queries with failed searches are reconstructed using synonyms from WordNet. In the content search phase, each query will be run on several search engines. Duplicates are removed from the search results, which are then merged into one distinct set. The actual webpages are downloaded from this list and further examined for misrepresentations, such as duplicates, revisions of the same page, index pages, pages generated by other search engines, etc. The valid results are converted to plain text using Apache Tika, which is able to parse most Web document formats, including HTML, PDF, and XML. The resultant corpus of plain text documents, D_q = {d_q1, d_q2, …, d_qn}, ∀q ∈ Q, is input to the text analytics phase.

Overview

Text analytics is responsible for the identification of content that is relevant to the existing expert-endorsed resources. It is the core function of the technique and is made up of 3 submodules, as illustrated in Figure 4.
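As a sketch of the synonym dimension, the snippet below uses NLTK's WordNet interface (an assumption on our part: the paper does not name its WordNet client, and the corpus must first be fetched with nltk.download('wordnet')). Taking the top-ranked senses is our simplification of the WordNet senses metric mentioned later in the Results.

```python
# Sketch: expanding a query with WordNet synonyms and audience attributes.
from itertools import product
from nltk.corpus import wordnet as wn   # requires nltk.download('wordnet')

def synonyms(term, max_senses=2):
    """Lemma names from the top-ranked (most frequent) senses of a term."""
    found = []
    for synset in wn.synsets(term)[:max_senses]:
        for lemma in synset.lemma_names():
            name = lemma.replace("_", " ").lower()
            if name != term and name not in found:
                found.append(name)
    return found

terms = ["palliative care"] + synonyms("care")
audience = ["newly diagnosed", "older adults"]       # hypothetical attributes
queries = [f'"{t}" breast cancer {a}' for t, a in product(terms, audience)]
print(queries[:4])
```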
Text analytics is an emerging area in business analytics where smart techniques are being developed and used to extract patterns, predictions, and semantic content from text corpora [24]. Every document has a number of words used only for grammar and presentation and not directly related to content description. Preprocessing removes the words that do not have a semantic use for analysis. Stop-word removal [25] and Porter's stemming algorithm [26] are run on the text corpus to generate a "bag of words" representation of each document. Further preprocessing can be conducted depending on the content of the original documents (formulae, images, and other media).

Multi-Term Recognition

Multi-term recognition aims to improve the semantic representation of the original document with the extraction of multi-word terms by means of the C-value/NC-value approach [27]. This method combines linguistic and statistical information, with emphasis on nested multi-word terms and the general distribution of candidate terms. It has been used successfully in a variety of applications [28,29]. It generates a list of multi-word terms ranked by the NC-value. The NC-value is a weighted summation of context information and the C-value (Figure 5). The 2 factors of the NC-value have been assigned the weights 0.8 and 0.2, respectively, based on previous experiments [27]. The C-value is a measure of each term's distinct frequency of occurrence within the corpus. It takes into account the number of times the term appears nested within other candidate terms; this is subtracted from the total frequency in the corpus (Figure 6). To improve the detection of multi-word terms, the C-value/NC-value approach was extended with the introduction of domain-specific information into the calculation of the NC-value. The presence/absence of terms from the domain ontology was incorporated as shown in Figure 7. The domain ontology is composed of terms recommended by the experts and thus appropriately narrates the context of the search to each document. The new element in the equation captures the likelihood of candidate terms appearing within the domain ontology as nested or distinct terms. The weight of term t can be determined by the hierarchical organisation or its relationships within the ontology. The factors of the new NC-value have been assigned weights 0.6, 0.2, and 0.2, respectively. This adjustment ensures that the context factor and the ontology information have equal contributions toward the final measure.

Term Vector Creation

The third submodule, term vector creation, generates a vector space model (VSM) representation of the document corpus as well as the benchmark resource set. The VSM, introduced by Salton et al [30], models documents as elements in term space. The term space is composed of all unique terms in the document collection, and each document is represented by the vector of terms found in the document. Thereby the documents are comparable within the corpus and with external content. The VSM has been successfully applied to several text mining/business analytics applications, such as ontology-based information retrieval [31], incremental learning from text [32], and disease identification [33]. The VSM follows a term weighting scheme to improve the semantic position of a document. The 3 main factors of term weighting are the term frequency factor, the collection frequency factor, and the length normalization factor.
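Before unpacking those three weighting factors, here is a condensed sketch (ours, with hypothetical candidate frequencies) of the C-value statistic described above; the context and ontology weightings of the NC-value are omitted for brevity.

```python
# Sketch: C-value for candidate multi-word terms, subtracting nested counts.
import math

freq = {                                   # hypothetical corpus frequencies
    "palliative care": 50,
    "palliative care team": 12,
    "palliative care specialist": 8,
    "spiritual care": 9,
}

def c_value(term):
    f = freq[term]
    containers = [t for t in freq if term != t and term in t]
    if containers:                          # nested term: subtract mean count
        f -= sum(freq[t] for t in containers) / len(containers)
    return math.log2(len(term.split())) * f

for t in sorted(freq, key=c_value, reverse=True):
    print(f"{t:28s} C-value = {c_value(t):6.2f}")
```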
The term frequency factor determines the frequency within a single document, the collection frequency factor determines a term's prevalence within the collection of documents, and the length of each document is used as a normalization factor to negate the bias of long documents. A noted weakness of the VSM is the assumption that identified terms are independent of each other. This shortcoming is offset to a certain degree by the inclusion of multi-word terms. Multi-word terms are able to capture more semantics than a single-term set. The general VSM focuses only on single terms; therefore, it is necessary to create a separate VSM for multi-word terms. Thereby two VSMs (vsm_m(d_q), vsm(d_q)) are created for each document d_q in each collection D_q generated by query q. The VSMs generated for the document corpus need to be evaluated for relevance to the target audience and their information needs. Resources in the expert-endorsed content repository are the most suitable benchmark for this purpose. Independent of the VSMs from the document corpus D_q, separate VSMs need to be generated for these resources in the content repository. The same query sent into the content search phase is run on the content repository to identify the relevant documents, R_q = {r_1, r_2, …, r_n}, ∀q ∈ Q. The content of the documents in this set is converged into a single representative document, which is sent through the multi-word term recognition phase, followed by the generation of VSMs for both multi-word terms and single terms, vsm_m(R_q) and vsm(R_q), respectively. The outcome from this submodule is, for each query, a set of VSMs that represent new documents found in the content search phase and a set of VSMs that represent existing resources that have been determined by the domain expert to be relevant to the same query. Effectively, this produces a benchmark term vector and the VSMs for multi-word terms, vsm_m(R_q) and vsm_m(d_q) ∀d ∈ D_q, as well as for single terms, vsm(R_q) and vsm(d_q) ∀d ∈ D_q. Both are defined using related dimensions that enable comparisons as well as rankings. The cosine coefficient similarity measure, which measures the angle between two vectors without bias for the length of the document, can be used to determine the closeness of each d_q to R_q (Figure 8). The denominator length-normalizes the vectors, ensuring the two are comparable in their original format. The same measure is calculated for the multi-word term VSMs.

Multi-Criteria Ranking

Thus far, the technique has generated 3 quantifiable measures: the ranking from content search, the cosine similarity for multi-word terms, and the cosine similarity for single terms. Each measure represents an independent aspect of the content discovery process. The ranking from content search indicates the position assigned by the search engine (determined by the respective search and indexing algorithms) as well as its temporal significance. On the other hand, the cosine similarities are entirely content-based, with the multi-word VSM capturing more semantics. From a CM perspective, the quality of content is largely determined by 4 criteria: relevance, reliability, timeliness, and usefulness [8]. These can be defined briefly as relevance to the search query, usefulness to the target audience, reliability of the author/publishing body, and timeliness as the period when the article was compiled and published.
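The weighting and comparison machinery just described fits in a few lines. The sketch below (ours, on toy tokens) applies the three weighting factors and the cosine measure of Figure 8, using the 0.75 threshold quoted later in the Results.

```python
# Sketch: length-normalized tf-idf vectors plus the cosine similarity measure.
import math
from collections import Counter

docs = {
    "benchmark_Rq": "palliative care team family support palliative medicine".split(),
    "candidate_dq": "palliative care team family support spiritual care".split(),
    "off_topic_dq": "treatment option outpatient setting review".split(),
}

def vsm(tokens):
    tf = Counter(tokens)                     # term frequency factor
    n = len(docs)
    vec = {t: f * math.log((1 + n) / (1 + sum(t in d for d in docs.values())))
           for t, f in tf.items()}           # collection frequency factor
    norm = math.sqrt(sum(w * w for w in vec.values())) or 1.0
    return {t: w / norm for t, w in vec.items()}   # length normalization

def cosine(u, v):                            # vectors are already unit length
    return sum(w * v.get(t, 0.0) for t, w in u.items())

vectors = {name: vsm(toks) for name, toks in docs.items()}
for name in ("candidate_dq", "off_topic_dq"):
    s = cosine(vectors[name], vectors["benchmark_Rq"])
    print(f"{name}: similarity = {s:.2f}, above 0.75 threshold: {s >= 0.75}")
```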
As mentioned thus far, the existing content repository makes a significant contribution toward the relevance factor of new content. The content-based similarity measures are sound candidates for the measurement of relevance. The ranking from content search maintains temporal significance. This can be coupled with the actual date of publication, which can be retrieved from the host site (if available), to create a measure of timeliness. The author/publishing body of new content can be directly validated against authors/publishers of similar content found in the repository, so that reliability can also be established to some extent. Usefulness, which cannot be determined without user involvement/feedback, is the only measure of quality that is beyond the proposed content discovery technique. The quality criteria are shown in Table 1. Multi-criteria decision-making (MCDM) involves the identification of an alternative from a finite set based on the evaluation of values from a set of criteria that characterize the alternative [34]. Ranking of new content is a variation of MCDM where more than one alternative is selected from a set of resources based on the assessment of four factors of quality. Several methods have been proposed to address MCDM problems: crisp methods such as multiplicative exponential weighting, simple additive weighting, the analytic hierarchy process [35], discrete choice analysis [36], and data envelopment analysis [37], as well as fuzzy MCDM analysis. Fuzzy MCDM analysis is largely based on the decision-making method in a fuzzy environment developed by Bellman and Zadeh [38]. The measures of quality will reflect varying degrees of importance for each ontology term. Given this subjective nature of the qualitative factors, it is pertinent to use fuzzy MCDM analysis for the selection of new content. An MCDM problem consists of 5 elements: alternatives, criteria, outcomes, preferences, and information [39]. In the context of content ranking, the alternatives are the new content discovered, the criteria are the measures of quality, the preferences are the expectations for each criterion, and the quantified measures contain the information used to evaluate these parameters. The preferences, i.e. the expectations for each criterion, are subjective because they vary between terms in the domain ontology. For instance, the measure of timeliness may not be as important as relevance for certain areas of the domain that are highly theoretical, with less change over time. In such cases, the outcomes can be misleading if timeliness is weighted equally with relevance in the ranking scheme. In essence, the criteria are sensitive to the type of term that is being evaluated. Fuzzy MCDM analysis is advanced to overcome this limitation. The advantage of using a fuzzy approach is in the assignment of the relative importance of criteria using fuzzy numbers instead of crisp numbers. Fuzzy triangular numbers (FTNs) are necessary to establish fuzzy weights for each criterion. Input provided by domain experts on the expectations of each criterion for each term is represented as FTNs. An FTN is defined as a fuzzy set F = {(x, μ_F(x)) | x ∈ R}, where x takes values on the real line, −∞ < x < ∞, and μ_F(x) is a continuous mapping from R to the closed interval [0,1]. An FTN, denoted M = (l, m, u) with l ≤ m ≤ u, expresses the relative strengths of each pair of elements in the same hierarchy.
The parameters l, m, and u represent, respectively, the smallest possible value, the most promising (modal) value, and the largest possible value in a fuzzy event. The membership function of M is expressed as follows (Figure 9). Each of the first 4 criteria (Table 1) is assigned a fuzzy weight, where m^p_c is the FTN mean and ρ is its spread, which is determined by domain experts and reflects the characteristics of criterion c. With R alternatives and C criteria, the weighted sum is derived to measure performance, as shown in Figure 10. Ranking takes place with n_i > n_j if and only if e_ij = 1 and e_ji < Q, where Q is a fixed positive fraction less than 1 (preferably 0.9). The use of a fuzzy MCDM approach has thus converted measures representing different qualitative factors into a single ranked metric, based on weights indicative of the term from the domain ontology that is being explored by the technique. The ranked resources can now be easily perused by a domain expert.

Results

As outlined earlier, SHIP was selected as the application test bed for the delineated technique. The technique was implemented using the Java programming language for use in the experiments. Quality is essential for health information delivery, and therefore maintenance and regular update of content is crucial for the long-term value of the portal. The rate of generation of new health-related content far exceeds the numbers that can be manually examined by domain experts for relevance to a specific topic and audience. In this context, the benefits gained from the said technique are substantial. One of the portals noted earlier, BCKOnline, was used in this experiment. BCKOnline is a SHIP designed and developed at Monash University for the provision of personalized health information on breast cancer. A robust CM model was used by the domain experts to manage and revise the content in BCKOnline. The evaluation sample consisted of all content in the BCKOnline portal: a domain ontology comprising 795 terms and a content repository with 900 documents. Terms were selected from the ontology for demonstration of each phase. Each document was linked to one or more ontology terms by a domain expert. Figure 11 presents the top 30 domain ontology terms in the content repository. The graph exhibits a long tail, where a larger number of the resources are categorized into smaller groups. This signifies the breadth of health information for breast cancer accessible via the portal and further justifies the need for an automated content discovery process. The highest numbers of resources are on the primary subtopics of early, advanced, and recurrent breast cancer. "Palliative care," which has a count of 52 resources, was selected to demonstrate the query construction component. Construction of the query involves generic and specific dimensions (Figure 3). The actual term is the specific ontology dimension, and the term "breast cancer" represents the high-level domain; its inclusion introduces a background to the query. The next level of construction expands the query to include personalization and diversification efforts. The audience dimension is represented using several attributes specific to the high-level domain of breast cancer. These are level of knowledge, age group, stage of illness, and user role. WordNet is explored in search of the generic dimension of synonyms. The two terms, "palliative" and "care", are searched separately. The WordNet senses metric is used to select synonyms with a higher relevance to the input term.
The association of dimensions for the said term is tabulated in Table 2. Starting with the base query "palliative care breast cancer," the search is gradually expanded to include the audience attributes and the synonyms. Thereby, the recurrent phases of query construction and content search contribute toward good coverage of available online content. After the search results have been processed into a corpus of plain text documents, D_q = {d_q1, d_q2, …, d_qn}, multi-term recognition takes place. As mentioned earlier, this module identifies multi-word terms that are ignored by the VSM. The expectation of the text analytics phase is to capture semantics representative of the documents; the inclusion of multi-word and single-word terms reinforces the VSM outcomes. As an illustrative example, some comparable multi-word terms and single-word terms recognized from a high-ranked resource are presented in Table 3.

Table 3. Comparison of multi-word and single-word terms from an online resource on "palliative care" [40].
Single-word terms: palliative, care, specialist, treatment, disease, female, support, family, body, medicine
Multi-word terms: palliative care, palliative care team, palliative care specialist, palliative medicine, anticipate future issue, spiritual care, outpatient setting, treatment option, family member

In the term vector creation stage, VSMs for multi-word terms, vsm_m(R_q) and vsm_m(d_q) ∀d ∈ D_q, as well as for single terms, vsm(R_q) and vsm(d_q) ∀d ∈ D_q, are generated. Vector R_q represents the benchmark vector derived from existing resources in the content repository. The cosine similarity was used to measure likeness between the VSMs, with the threshold set at 0.75. Two terms were selected to demonstrate the measures of similarity: "palliative care" and "reviews." The contrasting nature of the terms, the first being specific and the second more general, appeals to the usual content discovery requirements of information portals and related Internet technologies. The number of new resources above the threshold was 45 for the first term and 70 for the second. The second term, "reviews", has a larger number of resources because it covers a broad content area. The cosine similarities in the range of 0.75-1, in bins of 0.05, are depicted in the histograms in Figure 12 for the multi-word and single-word VSMs of the two terms. The primary observation here is the high similarity of most resources in the multi-word VSM, with 60 resources (23 for palliative care and 37 for reviews) in the range of 0.9-1.0, in comparison to single-word terms, which have only 25 in the same range. This proximity to the benchmark is indicative of the contextual information captured by multi-word terms. Multi-criteria ranking aims to satisfy 3 criteria: relevance, reliability, and timeliness. The multi-word and single-word similarity measures make up 2 relevance measures. The ranking from the content search is coupled with the upload date and time of each resource to calculate a timeliness measure. Reliability is determined by comparing the author/publisher names of new resources with those already in the repository. Unknown authors are ranked very low, so that domain experts can intervene at the actual content selection phase to determine reliability based on their knowledge. As already presented, the varying level of importance of criteria for each term prompted the use of fuzzy weights per criterion per term.
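To close the loop on the ranking step, the sketch below (ours) compares triangular fuzzy weighted sums using the standard degree-of-possibility measure; that this is the comparison behind Figure 10 is our assumption, though it closely reproduces the e_ij scores reported in the next paragraph from the Table 5 weighted sums.

```python
# Sketch: ranking triangular fuzzy numbers by degree of possibility V(a >= b).
R = {                                   # weighted-sum FTNs from Table 5
    "R1": (10.97, 14.47, 19.55),
    "R2": (9.63, 12.83, 16.62),
    "R3": (11.48, 15.60, 21.08),
}

def possibility(a, b):
    """Degree to which FTN a = (l, m, u) is greater than or equal to FTN b."""
    la, ma, ua = a
    lb, mb, ub = b
    if ma >= mb:
        return 1.0
    if ua <= lb:
        return 0.0
    return (ua - lb) / ((ua - ma) + (mb - lb))

for i in R:
    for j in R:
        if i != j:
            print(f"e_{i[1]}{j[1]} = {possibility(R[i], R[j]):.2f}")
# e_31 = e_32 = e_12 = 1.00, e_13 ~ 0.88, e_23 ~ 0.65: ranking R3 > R1 > R2.
```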
Inputs from domain experts are accumulated and aggregated to generate these FTNs. The following FTNs (Table 4) were used for the 2 terms "palliative care" and "reviews" to demonstrate the multi-criteria ranking process. Both terms have high weights for the 2 relevance measures and reliability, in contrast to timeliness. Timeliness is not crucial for the term "reviews" due to the obvious nature of a medical review. The reliability measure for "reviews" is weighted above that for "palliative care." The weighted sum value, Σ_c a_cr W_c, for three resources for the term "reviews" is presented in Table 5. The 4 measures for each resource were normalized to 1-10 and are shown in the first column of Table 5. The weighted summations of the resources are R1 (10.97, 14.47, 19.55), R2 (9.63, 12.83, 16.62), and R3 (11.48, 15.6, 21.08). Figure 13 displays the membership functions for each. Following Figure 10, the comparison scores are e_31 = e_32 = e_12 = 1, e_13 = 0.88, e_21 = 0.76, and e_23 = 0.64. Using thresholds Q of 0.9 and 0.8, respectively, the ranking of the 3 resources in descending order can be determined as R3, R1, and R2. With completion of the ranking phase, the ranked resources and the intermediary metrics are sent through to the domain expert for further scrutiny.

Discussion

Evaluation and quality of content become crucial based on the information expectations of the target audience, especially in the case of health information [1]. The increase in relevant online health information is a challenge for domain experts to peruse and evaluate on a regular basis. This paper reported the development of an intelligent content discovery technique that is able to address this challenge with automated discovery and ranking features. The technique utilizes an existing content repository as a benchmark to validate new content discovered online. It operates in 4 modules: query construction, content search, text analytics, and multi-criteria ranking. Query construction uses an existing ontology of key terms and supplements this with audience and context information, as well as synonyms extracted from WordNet. Content search retrieves a unique list of resources that are downloaded, preprocessed, and consumed by text analytics. Semantics, based on multi-word and single-word terms, are identified in text analytics and used to measure proximity to a benchmark vector derived from existing content. Acknowledging the subjective nature of qualitative factors, fuzzy weights are used in the multi-criteria ranking phase to determine a single rank encompassing relevance, reliability, and timeliness. The paper delineates the complete technique with an inclusive demonstration of its execution using an actual health information portal as a test bed. The technique can be sufficiently generalized and applied in other domains. In the next phase of the project, we will focus on validation of the technique with experiments involving domain experts, as well as user studies, to highlight its benefits and further establish its purpose in CM. Future research will also investigate the advantages of ripple-down rules [41] over fuzzy MCDM when generalizing the technique for application in other domains with incremental usage over time.
Sparse Coding with a Somato-Dendritic Rule Cortical neurons are silent most of the time. This sparse activity is energy efficient, and the resulting neural code has favourable properties for associative learning. Most neural models of sparse coding use some form of homeostasis to ensure that each neuron fires infrequently. But homeostatic plasticity acting on a fast timescale may not be biologically plausible, and could lead to catastrophic forgetting in embodied agents that learn continuously. We set out to explore whether inhibitory plasticity could play that role instead, regulating both the population sparseness and the average firing rates. We put the idea to the test in a hybrid network where rate-based dendritic compartments integrate the feedforward input, while spiking somas compete through recurrent inhibition. A somato-dendritic learning rule allows somatic inhibition to modulate nonlinear Hebbian learning in the dendrites. Trained on MNIST digits and natural images, the network discovers independent components that form a sparse encoding of the input and support linear decoding. These findings confirm that intrinsic plasticity is not strictly required for regulating sparseness: inhibitory plasticity can have the same effect, although that mechanism comes with its own stability-plasticity dilemma. Going beyond point neuron models, the network illustrates how a learning rule can make use of dendrites and compartmentalised inputs; it also suggests a functional interpretation for clustered somatic inhibition in cortical neurons. Author Summary Ever since the inception of neural networks in the 1950s, their engineering applications have relied on very simple artificial neurons that ignore many of the features of the cells in our brains. Research into the finer details, such as their extensive dendritic trees, was mostly the work of biologists. But the computational capabilities of dendrites are now attracting the attention of the machine learning community and there are attempts to make use of them in deep neural networks. Our work gives one example of the kind of computations that become possible once one steps beyond simple point neurons to include more biological details. Our topic is sparse coding, a field which studies how neural systems can discover structure in natural stimuli such as images. We show how adding dendrites to artificial neurons lets them solve the task in a different way. This may have benefits for creating robots that learn from experience, and suggests a number of electrophysiological experiments that could teach us more about how real neurons work. Introduction A sparse code is not arbitrary, reflecting instead some fundamental structure in the input, a characteristic that reminds us of the suspicious coincidences of Barlow [10]. As for competitive learning, described by Rumelhart & Zipser [11], it aims to reduce the redundancy of the code and decorrelate the output dimensions, so that each neuron responds to a different feature. This usually involves a winner-take-all system [12], or inhibitory connections between the coding neurons [13,14], an organisation which is equivalently called lateral, recurrent or mutual inhibition. Starting with Földiák [15], these two heuristics have been applied in a variety of sparse coding networks with rate-based [16][17][18] and then spiking neurons [19][20][21][22]. These networks have in common the use of Hebbian lateral inhibition to decorrelate the output, and of nonlinear Hebbian rules to perform projection pursuit on the feedforward input.
The rule in [24] is an early example, inducing depression when the output activity is below average and potentiation when it is above average. This steers gradient descent towards an activity distribution with heavy tails, which typically converges onto one of the independent components. As noted by Brito & Gerstner [23], the precise shape of that nonlinear function is not critical. The trick is to keep it aligned with the activity distribution throughout learning, so that the potentiation region stays centered on the tail. Usually, this is done by enforcing a constant norm for the weight vectors, or by using a homeostatic term that moves the potentiation threshold according to the average activity of the neuron, as in the BCM rule. That homeostatic term has the effect of regulating the lifetime sparseness of the neuron and is also called intrinsic plasticity (IP) by Triesch [25], to distinguish it from synaptic plasticity. In most models, IP needs to be faster than the Hebbian component of learning [26,27]. But in vivo, IP tends to be slower, acting over a timescale of days rather than minutes [28,29]. Besides, fast homeostasis could be particularly disruptive for animals and robots that learn continuously, and cannot assume that the feature detectors they have acquired will be stimulated at regular intervals. Here we propose an alternative scheme that does not require fast homeostatic plasticity. The idea is to put mutual inhibition itself in control of the Hebbian nonlinearity: stimuli for which many neurons compete to respond, and neurons that are often active as well, would attract more lateral inhibition and be subject to a higher potentiation threshold. In other words, instead of using intrinsic plasticity to enforce lifetime sparseness, this scheme would regulate both the population and the lifetime sparseness through synaptic plasticity. To do so, we need a mechanism through which the feedforward learning rule could measure the amount of competition on an input-by-input basis and use it as a negative feedback. But artificial neural networks usually employ point neurons, where all inputs are added together into a single activity variable. The consequence is that the learning rule cannot distinguish between stronger lateral inhibition, the signal to become more selective, and weaker feedforward activity that results from synaptic plasticity or from fluctuations in the input. The solution could be to integrate the feedforward and recurrent pathways in separate neural compartments, for instance the soma and a dendrite. The dendritic compartment could then estimate the amount of somatic inhibition by comparing its local depolarisation with the somatic activity that it perceives via backpropagating action potentials. The idea has been tried before, although not on a sparse coding task. In Körding & König [30], lateral inhibition can prevent the backpropagating action potentials from reaching the dendrites, which induces depression in dendritic synapses via spike-timing dependent plasticity. Urbanczik & Senn [31] use probabilistic spiking neurons where the dendritic compartment tries to match the somatic potential; this results in depression when unpredicted external inputs inhibit the soma, and potentiation when these unpredicted inputs are excitatory instead. Here we set out to investigate whether a variant of these somato-dendritic learning rules could discover sparse codes in natural stimuli.
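To make the homeostatic mechanism under discussion concrete before introducing the alternative, here is a minimal BCM-style point-neuron rule with a sliding threshold. The transfer function, constants and input stream are invented for the illustration, and this is not the rule proposed in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def bcm_step(w, x, theta, eta=1e-3, tau=0.99):
    """One BCM-like update: potentiation when the output exceeds the
    sliding threshold theta, depression below it. theta tracks a
    running average of the squared output -- the fast homeostatic
    (intrinsic plasticity) component discussed in the text."""
    y = max(0.0, float(w @ x))            # rectified point-neuron output
    w = w + eta * x * y * (y - theta)     # nonlinear Hebbian term
    w /= max(1.0, np.linalg.norm(w))      # keep the weight norm bounded
    theta = tau * theta + (1 - tau) * y ** 2
    return w, theta

w, theta = rng.normal(0, 0.1, 16), 0.0
for _ in range(5000):
    w, theta = bcm_step(w, rng.random(16), theta)
```

Note how theta must track the neuron's activity faster than the weights drift, which is exactly the fast-IP requirement the paper seeks to avoid.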
We found that one can adjust the somatic and dendritic transfer functions to produce a BCM-like curve where the threshold between depression and potentiation follows an instantaneous measure of somatic inhibition. This lets the network learn sparse codes without fast intrinsic plasticity. Network model Our model is a network of leaky integrate-and-fire (LIF) neurons, each with an extra dendritic compartment (fig. 1). There are two fully-connected pathways: a recurrent inhibitory pathway between the somas, and a feedforward pathway between the input and the dendrites. The network is meant to model a small patch of neural tissue where full connectivity is an acceptable approximation; hence we keep the number of neurons small (at most 1024). With respect to the dimensionality of our input stimuli, this translates to networks that range from strongly undercomplete to slightly overcomplete (a ratio of about 1.3). Figure 1: Architecture of the network. Annotations indicate the feedforward input, the leaky integrate-and-fire (LIF) somas and their firing rates, the dendritic compartments and their activities, and the weights of the feedforward and recurrent pathways. A filled circle denotes an inhibitory synapse, an open circle an excitatory one. The recurrent pathway mediates all-to-all inhibition via spikes and conductance-based somatic synapses. For simplicity we do not use separate inhibitory interneurons. Although that architecture deviates from biology and Dale's law, King et al. [21] found that replacing direct inhibition with interneurons did not substantially alter the results of Zylberberg et al. [20]. The feedforward pathway targets the dendrites and contains both excitatory and inhibitory synapses. It carries rates instead of spikes; doing so allows us to employ a classical Hebbian formalism in the learning rule and discrete-time dendritic compartments. A spike-based input and continuous-time dendrites would be more biologically plausible, but the model would also become substantially more complex; we reserve these for future work. Here we use a rectified linear activation function in the dendrites, with some modifications to account for the overall transfer properties of biological dendrites (see Methods for details). The network operates as follows. We present each input pattern to the dendrites and compute the dendritic activation d. This results in a constant current flow from the dendrite to the soma while the somas compete to respond for 100 timesteps of 0.5 ms. Then we compute firing rates y using both the number of spikes and the spike latencies. Finally, we apply the feedforward and recurrent learning rules. We repeat these steps for the next input pattern, and so on. Feedforward learning rules The weight of each feedforward, dendritic synapse is updated according to a nonlinear Hebbian rule built around the difference between the somatic firing rate y and the dendritic activation d, scaled by the input rate x and a learning rate; two further parameters control the scale of the weights (through a weight-decay term) and the potentiation/depression ratio. The rule can change the sign of the weights, switching between excitatory and inhibitory synapses. It is gated by post-synaptic activity: there is no change of weight when both d and y are zero. This ensures that the weights do not fade when the neuron is silent. The term (y - d) at the core of that somato-dendritic rule is reminiscent of the Delta rule [32,33], and it can be seen as a rate-based variant of the rules used in Körding & König [30] and Urbanczik & Senn [31].
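The operation cycle described above can be summarised in Python. Every constant below (network sizes, membrane time constant, threshold) is an assumption made for the sketch, and the latency-weighted rate is only one plausible reading of the rate computation detailed in the Methods.

```python
import numpy as np

rng = np.random.default_rng(1)
N_IN, N_OUT, T, DT = 64, 16, 100, 0.5e-3   # 100 timesteps of 0.5 ms
TAU_M, V_TH = 20e-3, 1.0                   # assumed membrane constants

W = rng.normal(0.0, 0.01, (N_OUT, N_IN))   # feedforward (dendritic) weights
V = rng.random((N_OUT, N_OUT)) * 0.1       # recurrent inhibitory weights
np.fill_diagonal(V, 0.0)

def present(x):
    """One stimulus: rate-based dendrites first, then LIF competition."""
    d = np.maximum(0.0, W @ x)             # rectified dendritic activation
    v = np.zeros(N_OUT)
    spikes = np.zeros((T, N_OUT))
    for t in range(T):
        inh = V @ spikes[t - 1] if t > 0 else np.zeros(N_OUT)
        v += DT / TAU_M * (d - v) - inh    # leaky integration minus inhibition
        fired = v >= V_TH
        spikes[t, fired] = 1.0
        v[fired] = 0.0                     # reset without refractory period
    decay = np.exp(-np.arange(T) * DT / TAU_M)
    y = spikes.T @ decay                   # early spikes weigh more
    return d, y
```

After each call, the feedforward and recurrent learning rules (sketched after the next section) would be applied to W and V.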
In Urbanczik & Senn, the purpose of learning is to correct the mismatch between the somatic activity y and its prediction by the dendrite. Thus, they use an error-correcting term y - φ(d), where φ is the dendrite's own model of the somatic transfer function and φ(d) ≈ y when the dendritic prediction is correct. In contrast, here the goal is not to achieve a perfect prediction of the somatic activity by the dendrite, but to exploit the mismatch between y and d so that it creates a BCM-like curve modulated by inhibition (fig. 2). In the absence of somatic inhibition, we set the transfer functions so that y - d is non-negative and the rule behaves like a linear Hebbian rule. In the presence of somatic inhibition, y - d is zero for subthreshold inputs, negative for excitatory inputs that fail to elicit enough somatic spikes, and positive for those that produce a strong response. This yields a nonlinear Hebbian rule where the effective threshold between potentiation and depression depends on the amount of competition received for each particular input, without averaging over the recent activity of the neuron. Figure 2: The learning rule produces a BCM-like curve controlled by somatic inhibition. Each curve plots the effective Hebbian nonlinearity y - d as a function of the net dendritic input (ignoring the effect of the weight-decay term). Injecting a constant inhibitory current into the soma (marked on the curves) shifts the potentiation threshold to the right. The bumps in the curves are a consequence of the way we compute the firing rate and mark the occurrence of an extra spike. Note: this figure was generated with a finer timestep dt = 0.01 ms to smooth the discontinuities in the curves caused by the discrete spike times. Crucially, the two LTD terms are gated by the dendritic activity d, and will therefore not be suppressed by somatic inhibition, which only affects y. This allows the learning rule to depress the synapses that are active when the soma is strongly inhibited, shifting the distribution of the net dendritic input back to the left and making the neuron more selective as a result. In contrast, Földiák [15], Zylberberg et al. [20] and King et al. [21] use a single heterosynaptic LTD term that is gated by somatic activity and cannot induce depression in response to lateral inhibition. Instead, these networks work the other way around: they first make the output selective through IP, and then transfer that selectivity to the receptive fields by pruning the synapses that are silent when the neuron is highly active. When the potentiation/depression ratio is positive, the rule is able to convert some of the recurrent inhibition into feedforward inhibition, producing receptive fields that have both ON and OFF fields even with a non-negative input. Note that homosynaptic LTP and LTD are swapped if we interpret negative weights as inhibitory synapses. Finally, we apply a separate regularisation rule that shrinks the weights, taking care not to change the sign of any weight; a single parameter determines the amount of regularisation. This does not fundamentally change the operation of the learning rule, but simplifies the receptive fields by suppressing the weights of weakly correlated input dimensions. Recurrent learning rule The somatic synapses that mediate lateral inhibition are plastic as well. The weight of each recurrent, somatic synapse between a pre- and a post-synaptic neuron follows a standard Hebbian rule with pre-synaptic gating; its parameters are a learning rate and a constant that controls the scale of the weights.
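A minimal sketch of the two rules follows, assuming a plain (y - d) Hebbian core with multiplicative decay terms. The exact equations, gains and potentiation/depression parameter of the model are not reproduced here, so treat the constants and functional forms as placeholders rather than the paper's equations 1 and 2.

```python
import numpy as np

def feedforward_update(W, x, d, y, eta=1e-3, alpha=0.1):
    """Somato-dendritic rule (a sketch with assumed constants): the
    (y - d) term creates a BCM-like curve whose threshold moves with
    somatic inhibition, and the -alpha*W term bounds the weights.
    Updates are gated by post-synaptic activity (d or y nonzero)."""
    gate = ((d > 0) | (y > 0)).astype(float)
    dW = eta * gate[:, None] * (np.outer(y - d, x) - alpha * W)
    return W + dW

def recurrent_update(V, y, eta_r=1e-2, beta=0.1):
    """Hebbian inhibitory plasticity with pre-synaptic gating: the
    weight from a silent pre-synaptic neuron does not change, so the
    inhibition from a winner onto a loser decays while the reciprocal
    connection is preserved."""
    pre_gate = (y > 0).astype(float)
    dV = eta_r * pre_gate[None, :] * (np.outer(y, y) - beta * V)
    np.fill_diagonal(dV, 0.0)
    return np.clip(V + dV, 0.0, None)   # inhibitory weights stay non-negative
```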
Gating by the pre-synaptic activity ensures that the inhibition from a winning neuron to a losing neuron decays, but the reciprocal connection does not. The asymmetry prevents a single neuron from taking over all the input features [34]. In practice, we use a much faster learning rate for the recurrent inhibition than for the feedforward synapses; otherwise the receptive fields are unstable and oscillate between selective and non-selective features. Receptive fields Our first experiment is to look at the receptive fields of the neurons after training on various types of inputs. The expectation, for a sparse coding network, is that these receptive fields should correspond to selective features (rather than complete patterns) and that the neurons should be silent most of the time. Trained on the MNIST dataset of handwritten digits [35], the network learns receptive fields that respond to fragments of digits or pen strokes, as shown in fig. 3. These receptive fields resemble the ones learned by sparse auto-encoders [8], despite the fact that we use a different algorithm, a coincidence which can be explained if these pen-stroke shapes are indeed the independent components of MNIST digits. The activity of the network is sparse throughout the training period, both in terms of lifetime and population sparseness (figs. 4, 5). Figure 3: The network learns pen-stroke shapes from MNIST digits. A: sample input stimuli. Black corresponds to zero and white to one. B: receptive fields (weights) of a network with 256 neurons after training on 120,000 digits (28 × 28 pixels) with random distortions. Middle gray corresponds to zero, lighter pixels to excitatory weights, and darker pixels to inhibitory weights. We also test a variant of MNIST called Fashion-MNIST [36], which uses the same format but consists of small images of items of clothing like shoes and shirts. Training the network on that dataset extracts the outlines of the input stimuli and also separates some of their constituent parts (fig. 6). We then train the network on natural images [38]. Images are typically not presented to the network in their raw form, but first processed either by a difference-of-Gaussians filter that models the transformations happening in the retina, or by a whitening transform that equalises the variance across spatial frequencies [39]. Both types of pre-processing give comparable results. Compared to the dataset of Olshausen & Field [1], the NASA dataset yields slightly more elongated receptive fields but gives otherwise similar results. This is probably due to the more frequent occurrence of straight edges in indoor scenes. Linear decoding The next series of experiments aims to check whether the network's output is indeed a good encoding of the input. This does not necessarily follow from an analysis of the receptive fields; for instance, a network could succeed in extracting individual independent components, but still fail to encode the mixture of components present in any given input. More specifically, we would like to check whether the sparse encoding produced by the network can be linearly decoded; we first do so with a classification task on MNIST. After that classification task, we turn to linear regression and attempt to reconstruct natural images from the output of the network. While Zylberberg et al. [20] inverted the transformation manually by reusing the network's encoding weights for decoding, here we train a linear model to predict the input patch given the sparse output of the network.
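Training such a linear decoder is a one-liner with standard tools. The sketch below stands in random data for the network's sparse codes and the target patches, so only the shape of the computation is meaningful, not the numbers.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)

# Stand-ins for the experiment: sparse codes produced by the network
# (n_samples x n_neurons) and the image patches they encode
# (n_samples x n_pixels). Real data would come from the trained network.
codes = np.maximum(0.0, rng.normal(size=(1000, 64)) - 1.0)   # mostly zeros
patches = rng.normal(size=(1000, 256))                        # 16x16 pixels

decoder = Ridge(alpha=1.0).fit(codes, patches)
reconstruction = decoder.predict(codes[:5])   # linear read-out of 5 patches
print(reconstruction.shape)                   # (5, 256)
```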
We did not attempt to quantify the reconstruction error: pixel-wise measures such as the peak signal-to-noise ratio are neither very informative of how much structure is preserved, nor easy to interpret when comparing different scenes, and better metrics based on structural similarity are non-trivial to compute [41]. Qualitatively, we find that even a small network with 64 neurons preserves the general features of the scene (fig. 9), despite reducing the dimensionality of the data by a factor of 4. Larger networks are able to encode finer details; the larger text on the sample image starts to be legible with 256 sparse coding neurons. Stability and response to perturbations In most machine learning experiments, the input data is randomised so that its distribution is mostly homogeneous over time. This is not the case for embodied agents that learn continuously: an animal samples from small regions of the input space as it moves from one place or activity to the next. Thus an important challenge in artificial neural networks is to learn online on non-homogeneous data. Sparse coding networks with a homeostatic term make an explicit assumption that the average firing rate of each neuron is constant, and the violation of that assumption could be a factor in catastrophic forgetting. The next experiment aims to explore whether the absence of a homeostatic term in our model makes it more robust to perturbations. In fig. 10, we first train the network on the full MNIST dataset with Gaussian noise (σ = 0.2) added to the digits and clipped to [0, 1]. After 150,000 stimuli, we remove the MNIST input and continue training on the background noise. We restore the input and train again on the full MNIST dataset for 150,000 stimuli. Finally, we perform one last training round on a subset of MNIST that contains only the zeroes, with all other digits removed. We find that the receptive fields retain their selectivity despite fading during the period when the network receives only background noise, and recover with minimal changes when the original input is restored (fig. 11): thanks to the lack of fast IP, input deprivation does not induce catastrophic forgetting. As long as the distribution of the independent components remains the same, there is also no drift with continued learning (compare A and C in fig. 11). In contrast, we observed a constant shifting of the receptive fields when replicating other models such as the one by Zylberberg et al. [20]. However, the receptive fields do change rapidly when we switch from the full MNIST to zeroes only: they adapt to match the new distribution of the independent components and forget the features that were specific to other digits (such as straight lines). Thus the lack of IP protects against forgetting during input deprivation but does not block continual adaptation to the input, as long as the new stimuli overlap with existing receptive fields. A small number of neurons (typically one or two) respond strongly to the noise during the period of input deprivation (bright receptive fields in fig. 11; dark lines in figs. 10, 12). Average firing rates for the other cells are low; again, this can be explained by the absence of a homeostatic term that would drive every neuron towards a target firing rate. Since the background noise does not contain any structure, these few active cells are sufficient to encode it and inhibit other neurons, protecting their receptive fields.
The transient increase in activity when the input is restored does not exceed three times the baseline: spikes remain sparse throughout, and come back to normal after 10 seconds (fig. 12). Since the neurons have fixed somatic and dendritic thresholds, that increase must come from the decay of lateral inhibition or from a shift in the excitatory/inhibitory balance of the feedforward weights. In contrast, in a network with IP, homeostatic adjustment of the thresholds to the background noise would cause a temporary saturation of the transfer function and loss of sparseness when the input is restored. Figure 12: The network is robust to input deprivation. Top: mean number of spikes per neuron per stimulus; the green line marks the average value before mark A. Bottom: raster plot of the output spikes. Following input deprivation (mark A), firing patterns return to normal within 20 seconds after the input is restored (mark B), except for the neurons that responded strongly to the noise (these take somewhat longer). Dendritic learning and compartmentalised inputs Biological neurons are more than just thresholding devices [42]: they can perform computations that are significantly more complex than the traditional artificial neurons with a single summation stage feeding into a sigmoidal or rectified linear (ReLU) transfer function. These include temporal processing at various levels from the synapses to the soma, and multi-stage integration of inputs in the dendritic tree [43,44], achieving capabilities in individual cells that would normally require a network of point neurons. With this paper, we give one example of the types of learning that become possible in neurons with separate compartments. Point neurons are capable of sparse coding, but they must approach the problem from the angle of lifetime sparseness. Our contribution is to show how the addition of a dendritic compartment gives the learning rule access to more information that lets it modulate nonlinear Hebbian learning via population sparseness as well, a possibility that Körding & König [30] could not explore, as they used simpler stimuli made of a single independent component. A single dendritic compartment is still a stark simplification over the finely branched structure of dendritic trees. Legenstein & Maass [45] use multiple dendrites to solve a nonlinear binding task where each neuron learns to respond to multiple patterns (for instance AB and CD), while ignoring other combinations of the same input dimensions (AC and BD); Hawkins & Ahmad [46] exploit a similar mechanism. As for Schiess et al. [47], they extend the somato-dendritic learning rule of Urbanczik & Senn [31] to reward-modulated learning with multiple dendrites. One area of future research is therefore to apply our sparse coding rule to neurons with more than two compartments, for instance for the purpose of learning sparse codes from a bottom-up input and predictive associations from top-down sources at the same time. The wave of interest in modelling neural networks with compartmentalised inputs now extends to the neuromorphic hardware that can simulate them efficiently, with some experimental support for dendrites on the SpiNNaker chip [48], and multiple compartments and input traces on Intel's Loihi [49].
As the idea makes its way from biology to machine learning [46,50], we expect to see a shift from a paradigm where artificial neural networks employ very simple units and rely on supervised learning to distribute a task over this generic computational substrate, to a paradigm where single neurons perform a substantial amount of computation and where the structure of these neurons already encodes a particular approach towards solving the task. Learning sparse codes without intrinsic plasticity Our findings confirm that intrinsic plasticity is not strictly required to learn sparse codes. In addition to its role in decorrelating the population responses, plastic lateral inhibition can also regulate sparseness through its effect on the nonlinear Hebbian learning rule. Freed from the need to provide that fast negative feedback, IP could instead act on timescales slower than Hebbian plasticity, and help recruit previously silent dendrites. Without fast IP, the network can be made more robust to temporary input deprivation. Because of their selective receptive fields, the neurons respond only weakly to background noise, and the post-synaptic gating in the learning rule protects the synaptic weights from rapid changes. But replacing intrinsic plasticity with synaptic plasticity is still not enough to cope with the changes that an animal or robot would encounter as it switches between tasks and environments: the network remains susceptible to rapid and extensive reorganisation when novel inputs overlap the existing receptive fields, or when the distribution of the independent components changes. On the one hand, that kind of adaptability is desirable as natural environments are not static and the quick acquisition of novel stimuli can be critical for survival. But on the other hand, it should disturb existing receptive fields as little as possible so as not to erase previous experiences and all the associations that build upon them. Although increasing sparseness and careful tuning of learning rates could help, it is likely that solving that stability-plasticity dilemma will require ad-hoc gating mechanisms. Some candidates are the conditional consolidation of synaptic changes [51], neuromodulation and attention [52,53], or a mechanism based on top-down prediction errors like the Adaptive Resonance Theory [54]. On sparse coding and associative readouts In Buzsáki's perspective [3], every pathway that links two populations of cells involves a readout or transformation of one neural code into another. But in machine learning, complex transformations often require multiple layers. Does the brain use interneurons for that purpose, or does it somehow solve the problem without them? Although there are excitatory interneurons in the cortex (layer IV stellate cells), we know that inputs from distal areas converge directly onto the dendrites of pyramidal cells [55], which favours the direct readout hypothesis. That architecture would also scale better to larger networks. Compared to dense codes (where every cell participates in coding every stimulus), there are two reasons why sparse codes should make direct readouts easier to learn. First, because fewer active units mean fewer weights to tune for any given mapping -in that sense, sparseness could act as a form of regularisation that prevents overfitting. Second, because separating the independent components of the signal should also help to disentangle the factors of variation, and make the problem more linearly tractable. 
Compared to a local code (where every stimulus has its own dedicated cells), sparse codes should allow the readouts to generalise to novel inputs while retaining the ability to encode small differences. Instead of encoding each stimulus as a whole (as happens in nearest-neighbour clustering, self-organising maps and strict winner-take-all networks), a sparse coding network encodes each stimulus as a combination of features that it shares with other stimuli. It is thus able to respond to the familiar features of an unfamiliar input. Our results indicate that the use of sparse codes can help, allowing a simple linear readout to reach the same accuracy as a multi-layer network trained on the raw input. This suggests that one could learn transformations from one sparse code to another sparse code by adding just an extra set of synapses to the target neurons. However, the decoding tasks we performed in this paper are not necessarily representative of the kind of readouts that a neural system embodied in an animal or robot needs to perform: in the case of MNIST, the target classes are few and mutually exclusive; and in the case of natural images, the output space is the same as the input space. It would therefore be of value to test the idea with more realistic types of inputs and readouts, for instance, encoding sensory and motor information and learning predictive associations between one modality and another. And the lesser improvement in linear decoding on the Fashion-MNIST variant shows that detecting a complex arrangement between the parts of a sparse code may still require more than a single linear readout: not just an extra set of synapses, but an extra set of compartments as well, as in Legenstein & Maass [45], leveraging dendritic arithmetic to solve the task [44]. Biophysical interpretation In certain aspects, the architecture of our network resembles cortical networks. Pyramidal cells are also the convergence point of distal inputs and local recurrent pathways, and their somas receive targeted inhibition from parvalbumin-containing basket cells that are activated by neighbouring neurons [56]. This suggests that the cortex might be making a similar use of compartmentalised inhibition for the purpose of learning sparse codes. However, given the diversity of cortical inhibitory pathways [57], the roles of compartmentalised inhibition in cortical neurons must be considerably more complex than our model can possibly account for. For instance, other types of inhibitory interneurons bring recurrent inhibition to the dendrites [58,59]; one could attempt to include these in a computational model, following up on the work of Spratling & Johnson [60]. Wilmes et al. [61] also modelled the mechanism postulated by Körding & König [30], where inhibition does not suppress somatic spiking, but blocks the backpropagating action potentials on their way to the dendrites, a mechanism which could also sustain a sparse coding rule. As it stands, our somato-dendritic learning rule (eqs. 1 and 2) contains a number of hypotheses about plasticity in biological neurons. Loosely speaking, the y term corresponds to backpropagating action potentials (bAPs), while the d term signals dendritic activity. This implies that somatic spikes should lead to long-term potentiation (LTP) of active excitatory synapses, while dendritic activity should cause depression (LTD). For dendritic inhibitory synapses, the situation should be reversed, with dendritic activity leading to LTP and bAPs leading to LTD.
There is some evidence in support of the first hypothesis: bAPs are a classical trigger of LTP, while dendritic spikes [62,63] and NMDA receptor activation [64,65] have both been linked to LTD or blockage of LTP. But there is also evidence to the contrary: NMDAR activation is central to LTP as well [64,66,67], and dendritic spikes can induce LTP without bAPs [68]. Nonetheless, it seems that dendritic LTP without bAPs requires a local sodium spikelet [69,70]: a fast, spike-like depolarization that sometimes, but not always, accompanies NMDA spikes [71]. This suggests rephrasing our biophysical interpretation and equating the y term with fast voltage transients, either bAPs or sodium spikelets, while the d term would correspond to slower dendritic events like elevated calcium. The question is then what sort of conditions can trigger sodium spikelets in the absence of bAPs, and how the operation of the learning rule would change if these were included in the model. As for inhibitory plasticity, Holmgren & Zilberter [72] bring some support to the notion that plasticity at dendritic inhibitory synapses could be reversed compared to excitatory synapses. They report LTD of inhibitory inputs coincident with bAPs, and LTP for those that come up to 800 ms after the train of action potentials. The latter fact also hints at a dimension of temporal processing which we ignore in this model and which we could explore in further work, for instance by adapting the learning rule to learn transitions and sequences. More generally, these questions call for further electrophysiological investigations of somato-dendritic plasticity rules. There would be much to learn from experiments that vary dendritic and somatic activity independently, controlling the number of somatic action potentials emitted during a dendritic spike, or the relative timing of somatic and dendritic events. Somatic compartments and somatic synapses The somatic compartments are standard LIF neurons. The membrane potential follows a standard leaky-integrator equation driven by two input currents, one from the dendrite and one from the somatic synapses. We use a fixed spiking threshold and an after-spike reset without a refractory period. We compute a firing rate that takes into account the number of spikes and also their latency relative to the stimulus onset. First we define a trace that increases after each spike and decays exponentially. Then we normalise the trace so that the area under its curve equals the number of spikes, and integrate it over the stimulus window. Thus a spike that occurs towards the end of the window contributes less to the total than a spike that occurs early. That reset does not seem to be critical for our findings, but we did not explore the issue further. Dendritic compartments Dendrites are rate- and current-based. The net dendritic input of each neuron is the weighted sum of the input rates, and the dendritic activation is a rectified function of that input. The initial weights are drawn from a normal distribution (std = 0.01). The current from the dendrite to the soma is a nonlinear function of the dendritic activation. Here the goal is to reproduce the active properties of biological dendrites. Above a certain input threshold, regenerative activation of the NMDA receptors causes dendritic spikes. These lead to a sharp increase in membrane potential followed by a plateau where stronger inputs cause no further increase in voltage [73,74]. We model this with a step function and an offset. However, stronger inputs do increase the duration, and reduce the rise time, of the plateau, producing more somatic spikes.
We model this with an additional linear term. In practice we adjust the offset to cancel out the somatic rheobase, so that a suprathreshold dendritic activation elicits at least one spike in the absence of somatic inhibition. We find that the slope of the linear term is not critical for our findings as long as all dendrites respond to some inputs at the start of the simulation. Here we set it to zero, although a small positive value would better reproduce the data in Milojkovic et al. [73] and Oikonomou et al. [74]. Receptive fields Throughout this paper we use the weights of the neurons as a proxy for their actual receptive fields. Showing all the weights of the network on the same image requires that we normalise each receptive field separately, because neurons that respond to narrow features have larger absolute weights than those that respond to broad ones. Nonetheless, we make sure that zero weights appear as the same middle gray for all neurons, allowing quick identification of ON (brighter) and OFF (darker) areas. Thus, when generating the figures, we normalise the receptive field of each neuron by scaling its weight vector with the inverse of its largest absolute weight, where the maximum is taken over all input dimensions. MNIST We use both the standard MNIST dataset [35] and the Fashion-MNIST variant [36], each with 60,000 training samples and 10,000 test samples. We map the full range of the data to the interval [0, 1]. When training the sparse coding network, we shuffle the patterns and distort them with random shears and translations, as done in LeCun et al. [40]. The purpose of these distortions is to increase the number of distinct training samples, and also to remove the correlations introduced by the centering of the patterns. We do this by applying an affine transformation with the origin at the center of the pattern, where each shear coefficient is a random variable drawn from a normal distribution (σ = 0.1), and each translation is a random variable drawn from a normal distribution (σ = 2.0). Distorted digits produce more localised receptive fields than the centered patterns, which in turn improves the performance of classifiers trained on the output of the network. When training and testing the classifiers themselves, we freeze the weights of the sparse coding network and we use the plain stimuli without distortions. Natural images We use two datasets of photographic images: one by Olshausen & Field, which consists of natural outdoor scenes [76], and one compiled from public-domain images by NASA [37], which can be found in the supporting information of this paper (S1 File). In both cases, each image was converted to grayscale, resized to an area of 200,000 pixels, preprocessed using the same whitening transform as Olshausen & Field [1], and then normalised to unit variance. No further normalisation was applied to the individual patches used for training; in particular, the patch mean was not subtracted from the input. Note that, in contrast to MNIST, the natural image stimuli contain both positive and negative values. We interpret these as ON and OFF channels from the retina; while it would be more realistic to split the ON and OFF values into separate, non-negative channels, we did not attempt this here. For the reconstruction experiment, the input image was tiled into overlapping patches with a width of 16 pixels and a stride of 8 pixels. Each input patch was run through a sparse coding network pre-trained on the NASA dataset. The sparse output was then fed as the input to a linear model trained with ridge regression to reconstruct the original patches.
Finally, the predicted patches were placed at their original locations and averaged to account for the stride overlap.
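The reassembly step just described (16-pixel patches, 8-pixel stride, averaging over the overlaps) is straightforward to express in code. The sketch below assumes grayscale images and row-major patch order, neither of which is stated explicitly in the text.

```python
import numpy as np

def reassemble(patches, image_shape, width=16, stride=8):
    """Place predicted patches at their original locations and average
    the overlaps, matching the tiling described above."""
    H, W = image_shape
    out = np.zeros((H, W))
    counts = np.zeros((H, W))
    k = 0
    for i in range(0, H - width + 1, stride):
        for j in range(0, W - width + 1, stride):
            out[i:i + width, j:j + width] += patches[k].reshape(width, width)
            counts[i:i + width, j:j + width] += 1
            k += 1
    return out / np.maximum(counts, 1)   # avoid division by zero at borders
```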
Cell Survival from Chemotherapy Depends on NF-κB Transcriptional Up-Regulation of Coenzyme Q Biosynthesis Background Coenzyme Q (CoQ) is a lipophilic antioxidant that is synthesized by a mitochondrial complex integrated by at least ten nuclear-encoded COQ gene products. CoQ increases cell survival under different stress conditions, including mitochondrial DNA (mtDNA) depletion and treatment with cancer drugs such as camptothecin (CPT). We have previously demonstrated that CPT induces CoQ biosynthesis in mammalian cells. Methodology/Principal Findings CPT activates NF-κB, which binds specifically to two κB binding sites present in the 5′-flanking region of the COQ7 gene. This binding is functional and induces both COQ7 expression and CoQ biosynthesis. The inhibition of NF-κB activation increases cell death and decreases both the CoQ levels and the COQ7 expression induced by CPT. In addition, using a cell line expressing very low levels of NF-κB, we demonstrate that CPT is incapable of enhancing either CoQ biosynthesis or COQ7 expression in these cells. Conclusions/Significance We demonstrate here, for the first time, that a transcriptional mechanism mediated by NF-κB regulates CoQ biosynthesis. This finding contributes new data to the understanding of the regulation of the CoQ biosynthesis pathway. Introduction Coenzyme Q (CoQ) is a small lipophilic molecule that transports electrons from mitochondrial respiratory chain complexes I and II to complex III [1]. In addition, CoQ functions as a cofactor for uncoupling proteins [2] and other mitochondrial dehydrogenases [1]. CoQ mainly acts as an antioxidant and can prevent cell death under certain stress conditions, particularly in mitochondrial-DNA-depleted cells [3,4]. CoQ also regulates the extracellularly induced ceramide-dependent apoptotic pathway [5]. CoQ is composed of a benzoquinone ring and a polyisoprenoid chain, derived from tyrosine and mevalonate, respectively. Its biosynthesis depends on a pathway that involves at least ten genes (COQ genes). Among them, COQ7 is proposed to encode a key regulatory component of a multisubunit enzyme complex [6]. However, there is no information about the precise regulation of the CoQ biosynthesis pathway except that peroxisome proliferator-activated receptor alpha (PPARα) is involved [7]. We have previously shown that camptothecin (CPT) treatment increases the CoQ biosynthesis rate and that CPT up-regulates COQ7 in mammals [8]. Thus, stimulation by CPT is a useful tool for deciphering transcriptional mechanisms of the regulation of the CoQ biosynthetic pathway in mammals through up-regulation of the COQ7 gene. Camptothecin (CPT) is a cytotoxic drug widely used in cancer therapy. It is known that the main target of camptothecin is the nuclear topoisomerase I (TOP1) [9][10][11]. Double-strand DNA breaks derived from the inhibition of nuclear TOP1 are considered the main cause of apoptosis induction by CPT [11]. CPT also induces an increase of reactive oxygen species (ROS) in different cancer cell lines, including H460 cells [12,13,14,15,16,8]. There are recent reports supporting the role of oxidative stress in the induction of apoptosis by CPT and its derivatives [15,17]. In response to topotecan, a water-soluble CPT derivative, cells activate their antioxidant defense mechanisms and a number of antioxidant enzyme activities, such as catalase, manganese-dependent superoxide dismutase (MnSOD), and glutathione peroxidase [16].
Also, the addition of catalase was able to protect cells from CPT-induced apoptosis in HL-60 leukemia cells [12]. Furthermore, catalase administration to U-937 promonocytic cells also attenuated apoptosis induction by CPT and other cytotoxic drugs [18]. NF-κB is a redox-sensitive transcription factor which regulates antioxidant enzymes such as MnSOD, encoded by the SOD2 gene. NF-κB is also activated by CPT in several cell types [19][20][21][22]. In fact, NF-κB activation is frequently abrogated by antioxidants [23,24]. NF-κB has been shown to play a key role in the regulation of cell death, either as an inducer or, more often, as a blocker of apoptosis, depending on the cellular type and the insult [25,26]. Thus, we have proposed that NF-κB could be one of the mediators of the cellular effects of CPT, through the activation of the CoQ biosynthesis pathway. CoQ biosynthesis is dependent on NF-κB Oxidative stress emerges as an important activator of NF-κB that can be abrogated by antioxidants [23,24]. We have previously demonstrated that H460 cells in which the CoQ biosynthesis pathway was blocked exhibited an increased sensitivity to CPT, with higher ROS production and cell death [8]. H460 cells treated with 10 µM CPT for 24 hours were fixed and probed with a p65 antibody to confirm that the NF-κB system is active in these cells. Immunofluorescence experiments showed that the transcription factor translocated into the nucleus in response to CPT (figure 1 A). In order to test whether NF-κB elicits a survival response in H460 cells, and whether the induction of CoQ biosynthesis in CPT-treated cells is dependent on NF-κB activation, cell viability and cellular CoQ levels were measured in the presence of Bay 11-7085, a specific inhibitor of NF-κB [27]. The inhibition of NF-κB increased cell death and reduced the CoQ levels induced by CPT (figure 1 B and C). In parallel, we observed that the increase of COQ7 mRNA measured by real-time PCR upon CPT treatment was significantly abolished by Bay 11-7085 (figure 1 D), suggesting a dependence of the COQ7 gene on NF-κB. It has previously been reported that the NF-κB system is down-regulated in DU145 RC0.1, a cell line resistant to CPT [28]. We used parental DU145 and RC0.1 cells to evaluate the protein levels of the NF-κB system components p65 and IκBα, and observed a very low content in RC0.1 compared to parental cells (figure 2 A). In RC0.1 cells, CPT treatment induced a higher production of ROS compared to parental DU145 and H460 cells (figure 2 B). Since mitochondrial ROS generation stimulated by CPT is responsible for the induction of CoQ biosynthesis, CoQ levels and COQ7 mRNA were measured in both parental DU145 and RC0.1 cell lines after treatment with 10 µM CPT for 24 h. The parental DU145 cell line showed an increase in both CoQ levels and COQ7 messenger in response to CPT. However, the RC0.1 cells showed only a trend towards increased CoQ levels and COQ7 messenger (figure 2 C and D). Since the NF-κB system is down-regulated in RC0.1 cells, this finding confirms the requirement of NF-κB for CPT-induced CoQ biosynthesis. NF-κB specifically binds to COQ7 κB sites Transcription element search software (TESS) sequence analysis [29,30] of 4 kb of the 5′-flanking region, exon 1 and the beginning of the first intron of the human COQ7 gene revealed two potential binding sites for NF-κB, three for Sp1, as well as one for RXR (figure 3).
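As a toy illustration of the kind of motif search such tools perform, the snippet below scans a sequence for an NF-κB-like consensus with a regular expression. The consensus pattern, the example sequence and the coordinate offset are all invented for the demonstration; TESS itself matches TRANSFAC position weight matrices, not a single regex.

```python
import re

# IUPAC-style NF-kB consensus GGGRNNYYCC expressed as a regex
# (R = A/G, Y = C/T, N = any base). A toy stand-in for a PWM scan.
NFKB = re.compile(r"GGG[AG][ACGT]{2}[CT]{2}CC")

def scan(seq, offset=0):
    """Report motif hits with positions relative to an arbitrary origin,
    e.g. the transcription start site."""
    return [(m.start() + offset, m.group()) for m in NFKB.finditer(seq.upper())]

promoter = "ttagGGGAATTTCCtacg"   # made-up sequence containing one site
print(scan(promoter, offset=-360))
```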
The same analysis revealed other putative binding sites, but we decided to focus on those potentially related to the oxidative stress response, particularly NF-κB, because it is a redox-sensitive factor that has been shown to be activated by CPT [19]. One of the putative NF-κB binding sites, the κB1 site, was found in the promoter region of the gene (position −360 to −350), whereas the second one, κB2, was identified at the beginning of the first intron of the gene (position +120 to +131). Electrophoretic mobility shift assays were performed to evaluate the binding of the NF-κB transcription factor to the hypothetical binding sites found in the COQ7 gene. Nuclear extracts from CPT-treated cells were incubated with digoxigenin-labeled oligonucleotides encompassing the κB1 or κB2 putative binding sites. The incubation with the κB1 probe generated a band shift in response to CPT treatment (figure 4 A). The intensity increase of the shifted band achieved significance after 1 hour of treatment. Competition assays with an excess of either unlabeled normal or nonspecific oligonucleotides confirmed the specificity of the binding (figure 4 B). In addition, competition assays using antibodies against diverse subunits of NF-κB were also performed. Anti-p50 impaired the formation of the protein-DNA complex; however, the complex failed to react with anti-p65 and anti-p52 sera (figure 4 C). We assayed the time course of NF-κB binding to the κB2 site (figure 5 A). Incubation of the κB2 probe with nuclear extracts of CPT-treated cells resulted in a band shift that reached a maximum intensity at 6 hours of treatment. In order to demonstrate that the in vitro NF-κB binding to κB2 was specific and that the composition of the heterodimer was p65/p50, we ran competition (figure 5 B) and super-shift assays (figure 5 C). Our results strongly support the in vitro specific binding of NF-κB to both sites in the COQ7 sequence in response to CPT. COQ7 κB binding sites are functional To evaluate the physiological importance of the COQ7 κB binding sites, firefly luciferase reporter constructs containing several regions of the COQ7 5′-flanking sequence (figure 6 A) were transfected into HeLa cells. We have previously demonstrated that CPT also induces cell death and stimulates CoQ biosynthesis in HeLa cells [8]. Transfected cells were treated with 10 µM CPT and their luciferase activity was measured. Results represent firefly luciferase activity normalized on the basis of the constitutive β-gal activity of a co-transfected control vector. Three hours of CPT treatment induced a significant increase in the reporter activity when the pGL3-2 construct was assayed. Reporter assays performed with the pGL3-1 construct, which carries a 1150 bp deletion at the 5′ end, showed no response to CPT (figure 6 B). These data suggest that a large region of at least 2770 bp is required for COQ7 to respond to CPT. The presence of the NF-κB inhibitor Bay 11-7085 during the CPT treatment of the transfected cells completely abolished the induction of the reporter gene expression (figure 6 B). To further understand the system, deletions of seven bp inside the κB binding sites were made to generate the pGL3-2ΔκB1, pGL3-2ΔκB2 and pGL3-2ΔκB1+κB2 plasmids. None of these constructs responded to CPT (figure 6 C), suggesting that both binding sites are necessary for the transcriptional induction of COQ7 in response to CPT. To determine which IκB isoform was involved in the NF-κB activation by CPT, cytosolic levels of both isoforms were determined by western blot.
Both IκBα and IκBβ were degraded following the CPT treatment, and this degradation was prevented by Bay 11-7085. Bay 11-7085 also inhibited the nuclear translocation of the NF-κB subunit p50 (figure 6 D). Discussion The lipophilic antioxidant CoQ is considered a central component of the antioxidant defense, protecting cells from membrane peroxidation and regenerating the reduced forms of exogenous antioxidants [31]. CoQ has also been proposed to prevent apoptosis derived from oxidative stress induced by different stimuli [32,33]. The details of the CoQ biosynthesis pathway are still unfolding. There are at least ten genes (COQ) involved in a complex biosynthetic pathway [34,6]. However, there is still little knowledge about its regulation [1], except that CoQ biosynthesis is activated in mouse liver by the nuclear regulator PPARα [35]. As the number of identifiable human pathologies associated with a primary CoQ deficiency increases, there is a need for a full understanding of its biosynthetic pathway, from the proteins participating in the enzymatic process to the regulatory mechanisms [36]. We have previously shown that CPT induces the up-regulation of COQ7 [8], which is considered to have both a regulatory and a kinetic role in CoQ biosynthesis [37,6]. Thus, we have focused on the regulatory mechanisms of COQ7 expression as a marker of the transcriptional regulation of the CoQ biosynthetic pathway. Searching for putative binding sequences for known transcription factors, we identified two hypothetical binding sites for NF-κB through the analysis of the COQ7 5′ sequence using the TESS program based on the TRANSFAC database v6.0 [29,30]. NF-κB activation has been observed in diverse physiological processes including immune response regulation, inflammation and development [25]. It is also active in some tumors, during chemotherapy response and under oxidative stress [38][39][40]. In fact, NF-κB activation by many agents can be abrogated by antioxidants [23,24]. Moreover, it is known that the antioxidant enzyme MnSOD is transcriptionally activated through NF-κB in response to oxidative stress [41,42,21,22]. NF-κB is generally accepted to be activated by the nuclear DNA damage exerted by CPT [19]. However, DNA damage might not be the only process by which CPT triggers NF-κB, as the antioxidant pyrrolidine dithiocarbamate (PDTC) is very effective in abolishing NF-κB activation by the drug [23]. There is no information about NF-κB involvement in the regulation of CoQ biosynthesis, which positively responds to oxidative stress conditions such as vitamin E withdrawal and aging [43][44][45]. Our results demonstrate that NF-κB is able to specifically bind to both COQ7 κB sites in response to CPT. Super-shift assays indicated that the dimer that recognizes the κB2 site is composed of both p65 and p50 subunits, and that the homodimer p50/p50 probably binds to κB1. It is important to note that NF-κB activation by CPT has been extensively studied. However, assays have generally been done using consensus sequences corresponding to either the immunoglobulin heavy chain gene [19] or the HIV enhancer [23], whereas we have used here a consensus sequence specific to the COQ7 gene. Thus, although p65/p50 has been defined as the only complex involved in CPT-mediated NF-κB activation [23], we cannot rule out the possibility that the COQ7 κB2 site is recognized by a p50/p50 homodimer.
The specific binding of NF-κB to a DNA probe does not demonstrate its functionality in vivo, but luciferase assays demonstrated that COQ7 was transcriptionally activated by CPT. The different constructs assayed showed that at least 2150 bp of the 5′-flanking region were necessary for COQ7 to respond to the drug. A construct containing 1000 bp of the 5′-flanking region (pGL3-1) was not able to respond to CPT. TESS analysis of the sequence of this region shows the presence of putative binding sites for the Sp1 and RXR transcription factors. Interestingly, NF-κB and Sp1 have been found to cooperatively bind to DNA in different promoter systems [46][47][48]. Moreover, an interaction between NF-κB and Sp1 has also been found in the promoter of the oxidative-stress-responsive gene SOD2 [22]. (Figure 6 legend: Firefly luciferase activities expressed in HeLa cells transfected with two different reporter constructs (pGL3-1 and pGL3-2) under different treatments. Luciferase activity was normalized on the basis of the β-gal activity constitutively expressed by the co-transfected plasmid pCH110. Results are the mean ± SD of at least three independent transfection experiments. BAY indicates the presence of the NF-κB inhibitor Bay 11-7085. * P < 0.05, ** P < 0.001, a: P = 0.05 between the treatment and its control. C. Deletion analysis of the putative COQ7 κB sites. The κB sites were partially deleted as indicated in blue. CPT responsiveness of these constructs was assayed as described before. Results are expressed as the relative CPT/control response in transfected cells, mean ± SD of three independent transfection experiments. D. Western blot analyzing the presence of different NF-κB system elements in the cytosolic or nuclear compartments. Ponceau staining of the membrane used for the western blot is presented as a loading control. doi:10.1371/journal.pone.0005301.g006) On the other hand, RXR has been shown to be required for CoQ biosynthesis and for its induction by cold exposure in mice [35]. Both the basal binding and the CPT-activated transcription of COQ7 mRNA are inhibited by the NF-κB inhibitor Bay 11-7085, which targets the NF-κB-activating IκB kinase complex (IKK) [27], supporting the participation of NF-κB in the transcriptional regulation of the gene. Inhibition of NF-κB by Bay 11-7085 prevented not only the CPT-dependent increase of COQ7 mRNA, but also the increase in CoQ levels. Additionally, viability assays showed that the inhibition of the transcription factor sensitizes cells to death by CPT. All these results further support the hypothesis that CPT triggers an NF-κB survival response involving the antioxidant CoQ. The treatment of cells with chemotherapeutic compounds such as CPT induces cell death, but some cell types can survive via the activation of antioxidant pathways [49], including an increase of CoQ biosynthesis [8]. Here we have shown that CPT increases ROS production; this situation activates NF-κB which, among other factors, induces the expression of the COQ7 gene by transcriptional regulation, which, in turn, increases CoQ biosynthesis. These findings contribute to the understanding of the regulation of the proteins that participate in the enzymatic process of CoQ biosynthesis, which will help further our understanding of the CoQ10 biosynthetic pathway and its involvement in CoQ deficiency syndrome in humans. Cell viability measurement Dead cells become permeable to propidium iodide (PI) as a consequence of the loss of plasma membrane integrity.
Thus, cell viability was determined by PI (10 µg/ml) staining and flow cytometry analysis. Measurement of CoQ levels Lipid extraction from cell samples was performed as described previously [8]. CoQ6 or CoQ9 was used as an internal standard. Cell samples were lysed with 1% SDS and vortexed for 1 min. A mixture of ethanol:isopropanol (95:5) was added and the samples were vortexed for 1 min. To recover CoQ, 5 ml of hexane was added and the samples were centrifuged at 1000 × g for 5 min at 4°C. The upper phases from three extractions were recovered and dried in a rotary evaporator. Lipid extracts were resuspended in 1 ml of ethanol, dried in a speed-vac and kept at −20°C until analysis. Samples were resuspended in a suitable volume of ethanol prior to HPLC injection. Lipid components were separated by a Beckmann 166-126 HPLC system equipped with a 15-cm Kromasil C-18 column in a column oven set to 40°C, with a flow rate of 1 ml/min and a mobile phase containing 65:35 methanol/n-propanol and 1.42 mM lithium perchlorate. CoQ levels were analyzed with an ultraviolet detector (System Gold 168), an electrochemical detector (Coulochem III, ESA) and, when necessary, a radioactivity detector (Radioflow Detector LB 509, Berthold Technologies). ROS measurement by flow cytometry Free radical measurement was achieved with H2DCF-DA or CM-H2DCF-DA (Molecular Probes). These compounds are cell-permeant indicators of reactive oxygen species that are non-fluorescent until removal of the acetate groups by intracellular esterases and oxidation occur within the cell. H460 cells were seeded in 35-mm dishes and treated when confluence was reached. Cells were incubated with 10 µM CM-H2DCFDA or H2DCF-DA for the last 30 min of treatment, washed and detached with trypsin. Flow cytometric analysis was performed with an Epics XL cytometer (Coulter) after trypsin removal. The population was selected using FS and SS, and the fluorescence was measured by the FL1 detector (525 ± 20 nm). H2O2 was used as a positive control for the detection of cellular free radicals. Immunoblotting Mitochondrial, nuclear or cytosolic fractions were resolved by SDS-polyacrylamide gel electrophoresis (SDS-PAGE) (with a variable acrylamide percentage according to the molecular weight of the protein of interest) and transferred to nitrocellulose membranes in a semi-dry transfer system (Trans-Blot, Biorad). After verification of equal loading using Ponceau S, membranes were blocked with 5% non-fat milk in TBS buffer (1 h at room temperature or overnight at 4°C) and stained with the appropriate primary antibodies (2 h at room temperature or overnight at 4°C). Anti-IκBα, anti-IκBβ, anti-p50 and anti-actin antibodies were diluted 1:1000. Incubation with anti-rabbit (1:10000) or anti-mouse (1:5000) HRP-conjugated secondary antibodies was performed for 2 hours at room temperature. Immunolabeled proteins were detected by exposure of the membrane to X-ray film after incubation in an enhanced chemiluminescence reagent (Immun-Star HRP Substrate Kit, Biorad). Real-time PCR Relative expression levels were determined by real-time PCR. Cells were seeded in 35-mm plates and treated appropriately. Floating dead cells were discarded and the monolayer was washed with cold PBS. Total RNA from cell cultures was extracted with the TriPure Isolation Reagent (Roche). The RNA preparation was treated for DNA removal with deoxyribonuclease I (Sigma) and cDNA was obtained from 1 µg of RNA by using the iScript cDNA Synthesis Kit (Biorad).
ROS measurement by flow cytometry

Free radical measurement was achieved with H2DCF-DA or CM-H2DCF-DA (Molecular Probes). These compounds are cell-permeant indicators for reactive oxygen species that are non-fluorescent until removal of the acetate groups by intracellular esterases and oxidation occur within the cell. H460 cells were seeded in 35-mm dishes and treated when confluence was reached. Cells were incubated with 10 µM CM-H2DCFDA or H2DCF-DA for the last 30 min of treatment, washed and detached with trypsin. Flow cytometric analysis was performed with an Epics XL cytometer (Coulter) after trypsin removal. The population was selected using FS and SS, and fluorescence was measured by the FL1 detector (525 ± 20 nm). H2O2 was used as a positive control for detection of cellular free radicals.

Immunoblotting

Mitochondrial, nuclear or cytosolic fractions were resolved by SDS-polyacrylamide gel electrophoresis (SDS-PAGE) (with variable acrylamide percentage according to the molecular weight of the protein of interest) and transferred to nitrocellulose membranes in a semi-dry transfer system (Trans-Blot, Biorad). After verification of equal loading using Ponceau S, membranes were blocked with 5% non-fat milk in TBS buffer (1 h at room temperature or overnight at 4 °C) and stained with the appropriate primary antibodies (2 h at room temperature or overnight at 4 °C). Anti-IκBα, anti-IκBβ, anti-p50 and anti-actin antibodies were diluted 1:1000. Incubation with anti-rabbit (1:10000) or anti-mouse (1:5000) HRP-conjugated secondary antibodies was performed for 2 hours at room temperature. Immunolabeled proteins were detected by membrane exposure to X-ray film after incubation in enhanced chemiluminescence reagent (Immun-Star HRP Substrate Kit, Biorad).

Real Time PCR

Relative expression levels were determined by real-time PCR. Cells were seeded in 35-mm plates and treated appropriately. Floating dead cells were discarded and the monolayer was washed with cold PBS. Total RNA from cell cultures was extracted with the TriPure Isolation Reagent (Roche). The RNA preparation was treated with deoxyribonuclease I (Sigma) for DNA removal, and cDNA was obtained from 1 µg of RNA using the iScript cDNA Synthesis Kit (Biorad). Real Time PCR was performed in a MyiQ Single Color Real Time PCR Detection System (Biorad) coupled to a Biorad conventional thermocycler. Primers for COQ7 and the housekeeping gene were designed with the Beacon Designer 4 software. Amplification was carried out with iQ SYBR Green supermix (Biorad) under the following thermal conditions: 30 s at 95 °C, then 35 cycles of 30 s at 94 °C, 30 s at 60 °C and 30 s at 72 °C. All results were normalized to the levels of actin mRNA. At least three independent experiments were performed and the results averaged.
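The passage above states only that results were normalized to actin mRNA levels; a minimal sketch, assuming the common 2^-ΔΔCt (Livak) quantification for SYBR Green data (all Ct values hypothetical, and approximately 100% amplification efficiency assumed for both primer pairs):

```python
def relative_expression(ct_target, ct_actin, ct_target_ctrl, ct_actin_ctrl):
    """2^-ddCt relative quantification (Livak method).

    Normalizes the target gene (e.g. COQ7) to actin, then expresses
    the treated sample relative to the untreated control.
    """
    d_ct_treated = ct_target - ct_actin            # normalize treated sample
    d_ct_control = ct_target_ctrl - ct_actin_ctrl  # normalize control sample
    return 2 ** -(d_ct_treated - d_ct_control)

# Hypothetical Ct values: COQ7 induced ~2-fold by CPT
fold = relative_expression(ct_target=24.0, ct_actin=16.0,
                           ct_target_ctrl=25.0, ct_actin_ctrl=16.0)
print(f"COQ7 fold change vs control: {fold:.2f}")  # -> 2.00
```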
Electrophoretic mobility shift assay (EMSA)

Synthetic oligonucleotides encompassing the putative κB sites of the promoter region of the human COQ7 gene (κB1F, TAAAGCAGGAAATACCGTGCCT; κB1R, AGGCACGGTATTTCCTGCTTTA; κB2F, AGCTAGGGAATTTTCGCTTGA; κB2R, TCAAGCGAAAATTCCCTAGCTC) were purchased from MWG Biotech AG (Germany) as single-stranded oligonucleotides. Digoxigenin labelling of the oligonucleotides was performed following the DIG Gel Shift 2nd Generation Kit protocol (Roche). Briefly, complementary forward and reverse oligonucleotides were denatured for 2 min at 95 °C and placed at room temperature for 30 min to generate double-stranded oligonucleotides. A labelling reaction was set up in a final volume of 20 µl of 1× reaction buffer (0.2 M potassium cacodylate, 25 mM Tris-HCl, 0.25 mg/ml BSA, pH 6.6 at 25 °C) containing 4 pmol of double-stranded oligonucleotide, 5 mM CoCl2, 0.05 mM digoxigenin-ddUTP and 20 U of terminal transferase, and incubated for 15 min at 37 °C. The reaction was stopped with 2 µl of 0.2 M EDTA (pH 8.0). The resulting labelled probe was diluted to a final concentration of 16 fmol/µl with TNE (10 mM Tris-HCl; 100 mM NaCl; 1 mM EDTA).

Preparation of nuclear extracts was performed by an adaptation of the protocol described by Schreiber and co-workers [50]. Briefly, the day before treatment H460 cells were seeded at a density of 8 × 10⁴ cells/cm². After treatment, cells were lysed directly on the flask by the addition of a hypoosmotic buffer (10 mM Hepes, 10 mM KCl, 0.1 mM EDTA, 0.1 mM EGTA, 0.625% NP-40, 1 mM DTT, 0.5 mM PMSF and protease inhibitor cocktail (Sigma)). The cell lysate was incubated for 5 min on ice, vigorously vortexed for 10 s and centrifuged at maximum speed for 30 s. The supernatant was reserved as the cytoplasmic extract and stored at −80 °C. The pellet was resuspended in a hyperosmotic buffer (20 mM Hepes, 0.4 M NaCl, 1 mM EDTA, 1 mM EGTA, 1 mM DTT, 1 mM PMSF and protease inhibitor cocktail (Sigma)) and incubated on ice for 15 min with agitation. Extracts were centrifuged at 16,000 × g for 30 min at 4 °C and the supernatants were recovered as nuclear protein extracts free of DNA.

Electrophoretic mobility shift assays (EMSAs) were performed following the DIG Gel Shift 2nd Generation Kit (Roche) indications: 20 µg of nuclear extract were incubated in 1× Binding Buffer (20 mM Hepes pH 7.6, 1 mM EDTA, 10 mM (NH4)2SO4, 1 mM DTT, 0.2% Tween-20, 30 mM KCl) with 1 µg poly[d(I-C)], 0.1 µg poly-L-lysine and 64 fmol of DIG-labelled probe. Competition assays were performed in the presence of a 10× excess of cold specific or non-specific (Oct2A: forward strand GTACGGAGTATCCAGCTCCGTAGCATGCAAATCCTCTGG; reverse strand CCTCATAGGTCGAGGCATCGTACGTTTAGGAGACCAGCT) probe. For supershift assays, 0.4 µg of rabbit polyclonal antibody (p65, p50 or p52) was pre-incubated with the nuclear extracts for 30 min on ice before the binding incubation. Samples were loaded on a 4-8% native acrylamide:bisacrylamide (30:1) gel (0.5× TBE buffer) and electrophoresed at 150 V in 0.5× TBE buffer. Electrotransfer to a nylon membrane (Hybond-N+, Amersham) was performed with a Trans-Blot Semi-Dry system (Biorad) following the manufacturer's instructions. Samples were fixed to the membrane by UV-crosslinking (70 mJ/cm²). DIG-labelled probes were detected with anti-digoxigenin antibody and visualized with the chemiluminescent substrate CSPD from the DIG Gel Shift 2nd Generation Kit (Roche) and exposure of the membrane to X-ray film.

Cloning of COQ7 promoter region and generation of luciferase reporter constructs

Inserts in the pGL3-1 and pGL3-2 vectors were generated by PCR (special thermal conditions: 6 cycles with an annealing temperature of 42 °C, followed by a touch-down PCR starting at an annealing temperature of 62 °C and finishing at 42 °C) from the BAC clone RP11-626G11 (AC099518; BACPAC Resource Centre, Children's Hospital Oakland Research Institute). The COQ7 promoter region was first cloned into the pGEM-T Easy Vector (Promega) using primers containing the restriction enzyme sites for SacI (5′) and HindIII (3′): hCQ7pSacIF (GTAGAGCTCTCCAAGGGTGTAA) and CQ7pHindIIIR (GTTAAGCTTGTCCTGTTCACAG) for pGL3-1, and hCQ7pSacI3 (GTAGAGCTCACAGAGGGAGG) and CQ7pHindIIIR for pGL3-2. Primers were designed to amplify the 5′ flanking region of the COQ7 gene (1000 bp for pGL3-1 and 2150 bp for pGL3-2), the 5′ UTR region (60 bp), the complete first exon (73 bp) and, partially, the first intron of the gene (487 bp). Once verified by sequencing, fragments were subcloned into the pGL3-basic vector (a generous gift from Dr. Blesa, Centro de Investigación Príncipe Felipe, Valencia, Spain). Single and double κB site deletions for the pGL3-2ΔκB1, pGL3-2ΔκB2 and pGL3-2ΔκB1+2 vectors were carried out using the QuikChange II XL Site-Directed Mutagenesis Kit (Stratagene) with the following primer pairs: for the κB1 site, CCTTCCACGTAGGTAAAGCAGCGTGCCTTGTTAATAAGTAAT and ATTACTTATTAACAAGGCACGCTGCTTTACCTACGTGGAAGG; for the κB2 site, CGGTCTAGCGAGCTAGGCGCTTGAGGTTTGGGTC and ACCCAAACCTCAAGCGCCTAGCTCGCTAGACCG.

Reporter gene assays

Endotoxin-free plasmids were obtained with the Qiagen EndoFree Plasmid Maxi Kit to avoid unspecific activation of NF-κB. HeLa cells were transfected with FuGENE 6 (Roche) following the manufacturer's instructions. Briefly, the day before transfection, 2.9 × 10⁵ cells were plated in 12-well dishes (DMEM, 10% serum, without antibiotics). One hour before transfection, the medium was changed. For each single transfection, 2.2 µl of FuGENE reagent was diluted in 50 µl of non-supplemented OPTIMEM medium (Gibco) and incubated for 10 minutes at room temperature. 0.74 µg of total DNA was added to the diluted FuGENE transfection mixture and further incubated for 30 minutes at room temperature. Then, the transfection reagent/DNA complex was added to the cells. Cells were co-transfected with pGL3-1 or pGL3-2 (0.37 µg) and pCH110 (0.37 µg), a plasmid containing the β-galactosidase gene regulated by a constitutive promoter (a generous gift from Dr. Carrión, Universidad Pablo de Olavide, Sevilla, Spain). The transfection proceeded for 48 hours before cells were appropriately treated. After that, dead cells were discarded and the monolayer was washed with cold PBS and lysed in 100 µl of Luciferase Lysis Buffer (Promega). Lysates were vigorously vortexed for 10 s and centrifuged at 12,000 × g for 15 min. Aliquots were stored at −80 °C until used.
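For scaling the per-well amounts given above (2.2 µl FuGENE 6, 0.74 µg DNA, 50 µl OPTIMEM; roughly a 3:1 µl-reagent:µg-DNA ratio) to a master mix, a small helper; the 10% pipetting overage is an assumption, not part of the protocol:

```python
def transfection_mix(n_wells, fugene_ul_per_well=2.2, dna_ug_per_well=0.74,
                     optimem_ul_per_well=50.0, overage=1.1):
    """Scale the per-well FuGENE 6 transfection mix for n wells.

    overage adds extra volume for pipetting losses (assumed, not from
    the text). Per the protocol, FuGENE is diluted in OPTIMEM first
    and DNA is added afterwards.
    """
    n = n_wells * overage
    return {
        "FuGENE 6 (ul)": round(n * fugene_ul_per_well, 2),
        "total DNA (ug)": round(n * dna_ug_per_well, 2),
        "OPTIMEM (ul)": round(n * optimem_ul_per_well, 1),
    }

# Example: master mix for one 12-well plate
print(transfection_mix(12))
# {'FuGENE 6 (ul)': 29.04, 'total DNA (ug)': 9.77, 'OPTIMEM (ul)': 660.0}
```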
β-galactosidase activity was quantified in the same extracts used for luciferase activity measurement. 30 µl of lysate sample was mixed with 20 µl of H2O and 50 µl of reaction buffer (200 mM Na2HPO4 pH 7.3, 2 mM MgCl2, 100 mM β-mercaptoethanol, 1.33 mg/ml ONPG [o-nitrophenyl-β-D-galactopyranoside]). Samples were incubated for 45 min to 1 h at 37 °C, and β-galactosidase activity was measured spectrophotometrically at 405 nm in a Sunrise plate reader (Tecan Austria GmbH) coupled to the Magellan software. Luciferase activity was measured with a manual Luminoskan TL Plus luminometer (Thermo LabSystems). 2 µl of cell extract was added to 50 µl of reaction buffer (Luciferase Assay Buffer from the Promega Luciferase Assay System kit) in a luminometer cuvette. Light intensity was measured inside the linear range of detection. Results are expressed as relative intensity units (LRU). Definitive results are expressed as relative LRU/β-gal activities.
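A minimal sketch of the two normalization steps described above and in the figure legend: raw luciferase output is first divided by β-gal activity from the co-transfected pCH110 (a transfection-efficiency control), and CPT responsiveness is then expressed as the treated/control ratio averaged over independent transfections. All values below are hypothetical:

```python
import statistics

def normalized_luciferase(lru, bgal_a405):
    """Luciferase output per unit beta-gal activity (transfection control)."""
    return lru / bgal_a405

def cpt_response(treated_pairs, control_pairs):
    """Relative CPT/control response over independent transfections.

    Each pair is (luciferase LRU, beta-gal A405) for one experiment.
    Returns (mean ratio, standard deviation).
    """
    treated = [normalized_luciferase(l, b) for l, b in treated_pairs]
    control = [normalized_luciferase(l, b) for l, b in control_pairs]
    ratios = [t / c for t, c in zip(treated, control)]
    return statistics.mean(ratios), statistics.stdev(ratios)

# Hypothetical triplicate data for one reporter construct
mean_ratio, sd = cpt_response(
    treated_pairs=[(5200, 0.42), (4800, 0.40), (5500, 0.45)],
    control_pairs=[(2100, 0.41), (2000, 0.39), (2300, 0.44)],
)
print(f"CPT/control response: {mean_ratio:.2f} +/- {sd:.2f}")
```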
Be Careful What You Wish For: Pending Privatization of Australian Higher Education

engagement as per the institutional missions. Such a system will also make it possible for both the administrators and university staff to identify organizational goals that are worthy of financial reward, thereby reinforcing institutional values. In addition, merit pay moderates institutional budgetary constraints by limiting the amount of funds dedicated toward across-the-board salary increases. The same policy of differentiated pay, based on institutional context, should apply to university executives. During the recent industrial fracas, vice-chancellors were reported to have illegally awarded themselves a 100 percent salary hike. Why should vice-chancellors at nascent institutions like Karatina, Kisii, and Chuka, with student populations barely crossing the 2,000 mark, command the same pay as leaders of complex urban universities like Kenyatta and Nairobi, with student populations of 60,000 and 54,000 respectively? The dexterity and mental energy required to run the latter far outweigh what the former demand. Policy guidance from the Commission on University Education and the state education office on vice-chancellor compensation will be invaluable in this regard.

In all, a permanent ceasefire will not be possible without a democratization of budget making in the state universities. Union allegations of high-level corruption at the universities, coupled with student strikes over fee increments, show how opaque the university budgets have become. If universities can publicize mundane activities like cultural shows, high-profile visits, and gate openings, they can at least share budget information with their constituents as national and county governments do. They could do well to borrow from American institutions, where budgets are posted online and university presidents give an annual state-of-the-university address. Further, proposals for fee increases need to be exhaustively discussed with students before implementation.

Be Careful What You Wish For: Pending Privatization of Australian Higher Education

Anthony Welch

Anthony Welch is professor of education at the University of Sydney, Australia. E-mail: <EMAIL_ADDRESS>

The Australian government's recent national spending audit, commissioned by the incoming federal government in advance of the mid-May Budget, opened a Pandora's box of proposals, not least in higher education. Now that the federal budget has been proclaimed, it is clear how well these ideas accord with the relevant minister's own views. While not all ideas were taken up, at least three repay closer attention: public funding of higher education, privatization, and regulation.
Minister Pyne's recent speech in London professed shock that more Australian universities were not in the top 50 worldwide, as one reason supporting a shake-up in higher education. This is the kind of statement we expect from ministers of education anywhere; the Malaysian minister, among many others, has made similar noises in recent years. But in Pyne's case, the reference to the Times Higher Education World Reputation Rankings can only be explained either as the expression of a minister not familiar with the details of his portfolio, or as a way of making a political point. The Times Higher Education rankings, of course, give substantial weight to reputation, rather than actual performance. The much more robust and reliable Shanghai Jiao Tong Academic Ranking of World Universities (ARWU) shows that, while Australia has no entry in the top 50 for 2013, five universities (Melbourne, Australian National University, Queensland, University of Western Australia, and Sydney) are all listed in the top 100. Considering the relatively small size of the system, that is a respectable result: Canada, in many ways comparable but substantially larger, has only four universities in the ARWU top 100.

An Australian Harvard?

But both the minister and treasurer want even better rankings. So what would it take to get even one of Australia's universities into the upper echelons of this illustrious list? Harvard University, for example, always first in global rankings, luxuriates in an endowment fund that peaked at US$36 billion before the recent recession and is well on the way to reattaining it. So, it would take the combined total assets of two of Australia's wealthiest mining magnates (Gina Rinehart, around $18 billion) or six of its wealthiest casino moguls (James Packer, $6 billion) for even one Australian university to compete in that league. But Australia should not hold its breath. Harvard of course is exceptionally wealthy, but other leading US institutions are not that far behind: Yale's endowment fund is valued at US$22 billion and Princeton's at US$17 billion. In Australia, the University of Sydney's 2013 campaign, which set a target of AU$600 million, was Australia's largest, but it compares with University of Pennsylvania's US$4.3 billion, Columbia's US$5 billion, and Northwestern's US$3.75 billion targets. So, if Minister Pyne's claim that he wants several Australian universities to be in the world's top 50 is to be believed, he should have recommended a vast increase in federal funding to higher education in the recent budget.
Other Funding Sources

Sadly, just the opposite was true: the budget proposed to shift the cost burden even further onto students. The government's share of funding is scheduled to fall by 20 percent, while students will pay substantially more in fees. This is despite the fact that Organization for Economic Cooperation and Development (OECD) data show that Australian higher education already rates poorly, relative to other member countries, in terms of public support for higher education. Australian students already bear a higher proportion of the costs of their university education than students in most OECD countries, and the current proposals to remove the cap on fees would exacerbate the situation. Worse, funding per student has been declining for some time, most notably during the Howard years (1996-2006), when funding actually declined by 4 percent, in contrast with the OECD average rise of 49 percent. Students currently contribute 41 percent of the costs of their studies; the Audit Commission proposed raising this proportion to 55 percent. In addition, the proposed reduced threshold for student loan repayment would mean that students would have to commence repayments much earlier, substantially reducing their lifetime earnings, since repayments would be pegged to the full cost of the loan, rather than to the current consumer price index.

The proposal to uncap fees has proved divisive in at least two senses. Vice-chancellors of the top-tier Australian Group of Eight (Go8) research universities, who have most to gain, have tended to support a lift of the current fee cap, even though they, too, will lose government funding: one estimated that its Faculty of Arts and Social Sciences would lose $10 million per year, while public funds to Engineering, Environmental Sciences, Communications, and Science would be cut by AU$5,000 per student. Other vice-chancellors, with less to gain and a greater concern with equity, have been more critical, arguing that, if fees rise, poorer students will be deterred from studying, particularly in the more expensive programs. Greg Craven, for example, vice-chancellor of the Australian Catholic University, warned of the divisive potential: "you don't want to have one Rolls Royce, and twelve clapped out Commodores." The proposal also pits students, who are understandably resistant to even higher costs for their university education, against (at least the Go8) universities.

Funding the Private Sector

A second key reform plank would see government funding opened to the private sector, a major change in a system that has been very largely public. At a time when, as part of an overall austerity drive, the current national government is proposing to rid itself of thousands of federal public servants, this would seem to be at odds with current rhetoric about preserving quality. In particular, a major expansion of providers would likely outstrip the capacity of the current national agency charged with regulating the sector, the Tertiary Education Quality and Standards Agency (TEQSA). Here, Australia's recent history of opening the vocational education and training sector to private providers is instructive. In that instance, state government regulators were overwhelmed by a dramatic increase in the number of providers,
some of which were genuine and some much more concerned with generating income than with providing quality educational programs, facilities, or staff. As a result, regulators in many states could not maintain quality across the sector, with calamitous results. Headlines appeared of fly-by-night providers and of international students, particularly from India, who were being misled by the institutions themselves, or duped by unscrupulous agents. When the press in India got wind of such incidents, sensational stories of Indian students being abandoned, duped, or attacked spread rapidly across newspapers and other media. Vocational student numbers from the subcontinent plummeted, and the reputation of the entire education sector suffered. The promised cuts of 50 percent to TEQSA funding clearly fly in the face of such precedent and raise the prospect of a similar outcome in higher education.

If not all the implications of how far and how fast the new federal government wishes to deregulate and privatize higher education are yet clear, there are worrying signs that ideology has trumped sober policy analysis. If so, there are real risks for the higher education sector, including reputational risks that could imperil international higher education enrollments. Be careful what you wish for.

Juan Ugarte

Juan Ugarte, a Luksic Visiting Scholar at Harvard University, is professor at Catholic University of Chile, and former head of Higher Education at the Secretariat of Education in Chile's government (2010-2013). E-mail: <EMAIL_ADDRESS>

Chile became the first South American nation to achieve membership in the Organization for Economic Cooperation and Development. Across a broad spectrum of socioeconomic and political measurements, including higher education performance, Chile tops the rankings across the Latin American region. That is because Chile's enrollment rates approach 60 percent, and almost 30 percent of Chile's population of 25-34 year-olds has attained tertiary education, well above the average for the region. Scientific productivity and impact, in proportion to the size of the population, also position Chile at the front of the Latin American region. A review of 2013 rankings like the QS Latin American University Rankings and the Shanghai Academic Ranking of World Universities permits us to conclude that Chile has the highest density of "high-quality institutions" in the region.

Two factors help explain Chile's exceptional performance in Latin America. The first is the nature of its system: state and nonstate universities compete in the same academic arena, and both enjoy public financial support. The second is the contribution that US universities have made to the development and modernization of Chilean universities.
State and Nonstate Universities

Since its birth as an independent republic, Chile has established a constitutional right to "freedom in education." In essence, this is the state's obligation to ensure universal access and the right of citizens to choose their preferred institution. In higher education, this principle first materialized through the creation of the state university, the University of Chile, in 1842, and then of a nonstate university, the Catholic University, in 1888. With this base, Chile's higher education system expanded its capacities through the efforts of state and private foundations. Later, in 1923, Parliament approved public financing support for all of these institutions. Other national organizations, like the President's Council of Chilean Universities and the National Commission for Sciences and Technology, were created to support general university activities. Parents and students now enjoyed the option of selecting the best university to realize their academic ambitions, knowing they would receive the same benefits (such as scholarships) in any of them. Playing on the same field, both state and nonstate institutions competed, with strong incentives to attract students, faculty, and resources. Developing under these conditions, it is clear that the mixed nature of Chile's higher education system, the only one in Latin America using this model, helped explain its success, at least in part.

The Contributions of US Universities

Even though earlier contributions exist, the middle of the 20th century saw Chile and the United States sign two agreements that marked a turning point in modernizing the Chilean higher education system. In 1955, under the auspices of the United States Agency for International Development, the University of Chicago signed an agreement with the School of Economy of the Catholic University of Chile, permitting a generation of economists to do their graduate studies in Chicago and creating the very influential group called the "Chicago Boys." Professors Arnold C. Harberger and Milton Friedman played crucial roles in this effort. Friedman authored the expression "the miracle of Chile" to denote the impact of this new generation of scholars on national economic and institutional policy. Under the military government and influence
J.S. Mill's Puzzling Position on Prostitution and his Harm Principle

Abstract

J.S. Mill argues against licensing or forced medical examinations of prostitutes even if these would reduce harm, for two reasons: the state should not legitimize immoral conduct; and coercing prostitutes would violate Mill's harm principle, as they do not risk causing non-consensual harm to others; their clients do. There is nothing puzzling about Mill opposing coercive restrictions on self-regarding immoral conduct while also opposing state support of that conduct. But why does Mill oppose restrictions on prostitutes' liberty if those restrictions could prevent harm to third parties? Mill's position is not puzzling once we recognize that his harm principle is not a harm-prevention principle that warrants restrictions on liberty to prevent harm no matter who caused it (as David Lyons famously argued) but instead warrants restrictions on liberty only of individuals who are the morally relevant cause of that harm. Mill's discussion of prostitution shows he prioritizes both individuality and moral progress over harm reduction.

Philosophy 99 (2024). doi: 10.1017/S003181912300027X. First published online 25 October 2023.

Introduction

John Stuart Mill thinks prostitution is immoral. In a letter to Lord Amberley of Feb. 2, 1870, Mill writes that prostitution is 'second only to rape' in its 'evil propensity' to satisfy sexual desires; it offers not even a 'temporary gleam of affection and tenderness' and completely uses a woman as a mere means for a purpose she must find disgusting (CW 17:1693).¹ Because prostitution is immoral, Mill does not think the state should legitimize it by regulating or licensing prostitutes. In 1871 Mill testified against the Contagious Diseases Acts, hereafter referred to as 'the Acts'. The Acts required suspected prostitutes to be examined and forcibly detained for treatment if found to have a sexually transmitted disease (STD), a primary aim being to protect soldiers who frequented prostitutes (Jose and McLoughlin, 2016, pp. 254-56; Waldron, 2007).
2In his testimony against the Acts Mill relies on his moral objection to prostitution also in claiming that police may prevent solicitation in the streets (CW 21:369); presumably solicitation by prostitutes, when done publicly, is the sort of 'offence against decency' that in On Liberty he says the state may prohibit (OL,.In addition to these two claimsthat the state should not license prostitutes, and that it should prevent public solicitation, both of which are motivated by Mill's commitment to moral progress -Mill makes a third claim about prostitution that may seem at odds with these first two.Mill objects to the Acts also because they impose 'a penalty for being a common prostitute' (CW 21:352) and he does not think prostitution should be illegal. It isn't puzzling for Mill to think that prostitution should be legal while also thinking that the state should not morally condone or legitimize it. 3What does seem puzzling is that in his seminal work of political philosophy, On Liberty, Mill supports state restrictions on liberty in order to prevent harm, yet is unwilling to restrict prostitutes to prevent them from harmfully spreading disease.As we'll see, Mill reasons that any harm they cause to their client was consented to, and if their client proceeds to spread an STD to a third party, they and not the prostitute cause that harm.Jeremy Waldron finds Mill's opposition to the Contagious Diseases Acts 'bewildering' given Mill's defence of the harm principle.In On Liberty Mill defends individuality (OL ch.4), which his harm principle promotes by ensuring that individuals are free to engage in self-regarding conduct even if it flouts customs or social norms, so long as their conduct doesn't harm others.According to the harm principle, the only end for which the state may legitimately exercise coercive power is to prevent harm to others.On Waldron's view, given that the Acts aim to curb the spread of STDs and thereby reduce harm, shouldn't Mill support the Acts? 2 As I note in section 5, Mill's concern with moral progress is connected to his defense of utilitarianism. 3 Cf. Skorupski, 1999, pp. 223-24: Mill endorses 'permissive neutrality' (the state may not impose legal obstacles to pursuing one's conception of the good so long as in doing so one doesn't harm others) but rejects 'persuasive neutrality' (the state must refrain from encouraging a particular conception of the good). 2 Mark Tunick (Waldron, 2007, p. 16) I have two objectives: a subsidiary goal is to clear up the puzzlement about Mill's views on prostitution; my larger goal is to show how Mill's position on prostitution casts doubt on a prevalent interpretation of the harm principle and motivates us to seek an alternative understanding of this central feature of Mill's political philosophy.I argue that Mill's position is not puzzling after all, for the goal of Mill's harm principle is not harm reduction. In section 2 I establish that Mill believes prostitution should be legal, even though he opposes the licensing of prostitutes and believes police should prevent public solicitation.In section 3 I argue that Mill's position rests on a particular interpretation of the harm principle that has been in recent disfavour: that the harm principle warrants the use of coercion only on those who themselves proximately cause non-consensual harm, and not to prevent harm no matter who caused it.This interpretationthat the harm principle is a 'harm-causation' principlewas originally laid out by D.G. 
In section 4 I resurrect and extend Brown's position, defending it against David Lyons' opposing view that Mill's harm principle is a 'harm-prevention' principle, and that Mill would permit the state to coerce an individual to prevent harm to others even if that individual did not proximately cause the harm (Lyons, 1997). In section 5 I show how Mill's claim that the state should not license or legitimize prostitution further supports the position that Mill's primary concern is not harm reduction. Licensing would make prostitution safer, yet Mill opposes licensing, because he thinks the state should promote moral progress. Section 6 then addresses an apparent inconsistency between Mill's view that prostitution should be legal and his view that the police should prohibit public solicitation. If the harm principle requires that prostitution must be permitted because the prostitute is not the morally relevant cause of non-consensual harm to others, why would Mill restrict public solicitation? While my main purpose is to interpret rather than evaluate Mill, in section 7 I conclude with some evaluative comments about the implications of Mill's defence of a harm-causation as opposed to a harm-reduction principle.

Mill's Position on Prostitution

While Mill thinks prostitution is immoral, and for that reason opposes state licensing of prostitutes, he defends a principle of liberty, the harm principle, according to which the prostitute should be free to engage in immoral, self-regarding activity in private. Mill opposes 'legal moralism', or the view that the state may legally punish conduct that is regarded as immoral even if that conduct doesn't harm others.⁴ In his testimony on the Acts Mill opposes seduction and bastardy laws, explaining: 'at present my feeling is against any attempt however much it may be agreeable to one's moral feelings, to restrain illicit intercourse in that way'.⁵ Laws should keep us from harming others, but not force us to be moral.

While consensual sex between a man and a prostitute may not be entirely self-regarding, as it can put the man's wife or other intimate partners at risk of receiving an STD, nevertheless Mill doesn't think that prostitutes should be punished for selling their sexual services.⁶ Mill doesn't explicitly say this in On Liberty but there is compelling textual evidence that this is his position. First, Mill says in the 'Application' chapter that '[f]ornication, for example, must be tolerated' (CW 18:296). That alone is no proof that Mill thinks prostitution should be legal, since one could think that fornication with a prostitute should be treated differently. But he then immediately takes up a puzzle: while fornication must be tolerated, 'should a person be free to be a pimp?' (CW 18:296). Pimps are 'accessories' to prostitution by facilitating the transaction between prostitute and client, and Mill wonders why we should punish 'the accessory when the principal [the prostitute] is (and must be) allowed to go free': why fine and imprison 'the procurer, but not the fornicator?' (CW 18:297). Mill was torn by a similar question 12 years later when in his testimony on the Acts he is unable to conclude on the 'very difficult' question of whether brothels should be permitted (cf. CW 21:369).⁷
My point is that the question of why we should punish the pimp but not the prostitute is puzzling for Mill only because he assumes that we should not punish the prostitute. This is as close as we get to direct evidence in On Liberty that Mill does not think prostitution should be a crime.

In his testimony on the Acts, Mill supports the criminalization of prostitution for girls under 17, but only because they aren't yet adults and so their liberty can properly be interfered with (CW 21:368), the implication again being that adult prostitution should not be illegal.⁸

4 Feinberg (1984, p. 12) (defining legal moralism).
5 CW 21:370, my emphasis. See also CW 26:664. Mill's hesitancy ('at present my feeling') may reflect a tension between his commitments to individuality and to moral progress.
6 The harm Mill is concerned with regarding prostitution is the spreading of STDs and not anything else. In section 3 (n. 16), after discussing Mill's conception of harm, I explain why he could dismiss other possible 'harms'.
7 These questions raise complexities, including free speech concerns, that I address in Tunick (2022).
8 Cf. McGlynn (2012, p. 16): Mill opposed licensing or 'legalizing' prostitution but did not advocate its criminalization.

The Harm Principle and Mill's Position on Prostitution

Mill defends our liberty to engage in self-regarding activity that could not harm others, but as prostitutes risk spreading harmful STDs, wouldn't Mill have good reason to think prostitution should be illegal? In this section I argue that Mill's harm principle is not a harm-reduction principle. It does not permit the state to coerce me merely if doing so would reduce the amount of harm in the world. It may coerce me only if I am the morally relevant cause of that harm; and Mill does not regard the prostitute as the morally relevant cause of harm when their client spreads an STD to an innocent third party.

In On Liberty Mill introduces the harm principle as holding 'that the sole end for which mankind are warranted, individually or collectively, in interfering with the liberty of action of any of their members, is self-protection. That the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others' (CW 18:223). There are several ambiguities as to what the principle means, and I now address two that are important in understanding Mill's position on prostitution. In later sections I address a third ambiguity.

First, what constitutes 'harm to others'? Some commentators interpret this broadly: if you do something that upsets, offends, or merely displeases me, such as having sex in public in plain view of me or my children, and I don't consent to your activity, you have harmed me, and your activity can be regarded as 'other-regarding' and subject to possible interference.⁹ On this view, society could have jurisdiction over you if you prevail over me in a competition in business or athletics, or in the courting of someone we both love, because that would harm me under this wide conception of harm. But Mill clearly rejects that account of harm. In On Liberty he says the state should not intervene when someone loses out in a competition, because 'society admits no rights, either legal or moral, in the disappointed competitors' (CW 18:293).
This passage indicates that for Mill, harming others involves more than producing a bad consequence such as offending or displeasing them. It sets back interests they have that are regarded as rights, through an act to which they do not consent.¹⁰ If there is no violation of a right, there is no harm.¹¹ Mill adds that there should be a 'definite damage, or a definite risk of damage' or 'perceptible hurt to [an] assignable individual except himself' for an action to be placed in the 'province […] of morality or law' (CW 18:282). These clarifications help limit the ambiguity of Mill's principle, but Mill still leaves open the question of what rights there are. Does my failing to rescue a drowning person, when I easily could, constitute 'harming them' and therefore legitimately expose me to punishment? That depends on whether they had a right to be saved. It isn't always clear on Mill's view what rights society ought to declare. I return to this ambiguity in section 4.

10 I rely on OL, CW 18:276 ('not injuring the interests of one another; or rather certain interests, which […] ought to be considered as rights'), and CW 18:225 ('if it [a]ffects others only with their […] consent'); and follow Rees (1960), Brink (1992, p. 85), Donner (2009, p. 161), and Thomas (1983). This conception of harm is developed by Feinberg, who distinguishes harm from 'hurt' (Feinberg, 1984, pp. 45-57), although Mill himself sometimes uses these terms interchangeably.
11 Riley refers to the loss suffered by losers in a competitive market as a 'non-consensual harm' (Riley, 2015a, p. 795). Really, it is no harm because there is no right to succeed in a competition.

I've already introduced a second ambiguity of the harm principle. It might mean what the words in Mill's introductory statement of it literally say: the state may coerce individuals if doing so will 'prevent harm', regardless of whether the person being coerced caused the harm; this is the harm-prevention principle. But there is a competing interpretation: the state may use coercion upon individuals only if those individuals are the morally relevant, 'proximate' cause of non-consensual harm to others. According to this 'harm-causation' principle, there are two conditions that must be met for the state legitimately to coerce me. First, I must be a proximate cause of harm to others. It is not enough that 'but for' my conduct harm to others would not have resulted; my conduct must have a direct connection to the resulting harm, with no intervening voluntary cause of that harm that would nullify my responsibility for it.¹² For example, if I fire a gun at you and miss, but cause you to flee in panic, and as a result you are injured after running in front of a truck that can't avoid hitting you, I am the morally relevant cause of your injury; but if the truck driver could have easily avoided you but purposely struck you, they are an intervening voluntary cause that replaces me as the most proximate, morally relevant cause of your injury. Second, even if I do directly injure another party, if they consented to my doing so I cannot be said to have harmed them. If I freely consent to fight you in a duel with pistols at close range, knowing your skill as a marksman, and I am injured by your shot, I consented to that risk and so you have not caused non-consensual harm.

12 '[…] result would not have occurred' (Commonwealth v. Rosado, 434 Mass. 197, 202 (2001)); we can interpret 'continuous' to mean there was no intervening voluntary cause. For discussion of proximate causation see Feinberg (1965).
Resolving the ambiguity of whether the harm principle is the 'harm-prevention' or 'harm-causation' principle is essential in addressing prostitution. Waldron, in finding Mill's opposition to the Contagious Diseases Acts 'bewildering', assumes that Mill defends the harm-prevention principle.¹³ His Mill is willing to restrict liberty to prevent harm to others, and since the Acts prevent the spread of disease, shouldn't Mill support the Acts?

13 Waldron points to Lyons' work for a 'good account' of Mill's harm principle as authorizing intervention to prevent harm regardless of who is to blame (Waldron, 2007, p. 18 n. 36).

Mill recognizes that ensuring the health of the community is within the province of government (Acts, CW 21:357). In On Liberty Mill distinguishes the 'preventive function of government' from the 'punitory function' and says both may be employed to fight crime (CW 18:294). But we should coerce or punish the right party: not the prostitute, but the one who foreseeably and proximately causes harm to a non-consenting, innocent third party. Mill argues this explicitly in his testimony against the Acts. The Royal Commission conducting hearings on the Acts asks Mill: can't the state get involved if the object is to prevent harm to third parties such as wives or other innocent parties who might get the disease from the man? Mill replies that the woman doesn't transmit the disease to these third parties, the man does, and so the man is more properly targeted if that is the Acts' aim (CW 21:354). Later he makes a similar point: '[I]t is only a man who having been infected himself can communicate infection to an innocent person' (CW 21:362). Mill suggests that the state impose 'very severe damages' […].

Waldron is not convinced. Referring to the 'consented to' provision of the harm principle, which says that the protected sphere of liberty includes not only conduct 'which affects only [my]self' but also conduct which 'affects others […] with their [c]onsent' (CW 18:225), Waldron writes, 'Certainly, Mill would have had little patience with the objection that the transmission of infection did not count as harm inflicted by the prostitute because the transaction was consented to' (Waldron, 2007, p. 18). Waldron acknowledges Mill's testimony that the man 'knowingly places himself in the way of' the disease and the women have nothing to do with its direct spread to others (Waldron, 2007, p. 28, citing CW 21:354). But according to the harm-prevention principle that Waldron takes Mill to defend, that doesn't matter: the threshold 'necessary condition' for coercing me is met merely if I do something that somehow contributes, even indirectly, to the injury of others, including 'unknowing (and therefore non-consenting) [third] parties' (Waldron, 2007, p. 18). This is why Waldron must instead turn to other grounds to account for Mill's opposition to the Acts: by applying only to female prostitutes and not their male customers, they impose an unequal burden based on one's sex.¹⁴ Other scholars who explain Mill's opposition to the Acts similarly turn to Mill's commitment to equal treatment of the sexes. For Jim Jose and Kcasey McLoughlin, Mill opposes the Acts because they reflect 'sexist thinking': the real aim of the Acts is to enshrine male privilege; and for Clare McGlynn, Mill opposes the Acts because they wrongly target women instead of the male clients who create the demand for prostitution.¹⁵ Mill also raises due process objections. Under the Acts, police could 'apprehend' women on suspicion of being a prostitute and bring them to a magistrate, who could confine them for up to 6 months if they refused to be examined (CW 21:351). Mill objects that police discretion can be abused, and that the hearings did not provide for a jury.

14 Waldron (2007, p. 28, drawing on CW 21:368, 356; cf. pp. 25-26, 35).
15 Jose and McLoughlin (2016, pp. 261-62), McGlynn (2012).

While Mill clearly had equal protection and due process objections to the Acts, his opposition is based more essentially on his assessment that the Acts violate the harm-causation principle. It provides a threshold test for when state coercion is permissible, and the prostitute's activities don't reach its bar. The prostitute can be contrasted with the individual who incites a frenzied mob to commit an imminent act of violence and who Mill says 'may justly incur punishment' (OL, CW 18:260).
14Other scholars who explain Mill's opposition to the Acts similarly turn to Mill's commitment to equal treatment of the sexes.For Jim Jose and Kcasey McLoughlin, Mill opposes the Acts because they reflect 'sexist thinking': the real aim of the Acts is to enshrine male privilege; and for Clare McGlynn, Mill opposes the Acts because they wrongly target women instead of the male clients who create the demand for prostitution. 15Mill also raises due process objections.Under the Acts, police could 'apprehend' women on suspicion of being a prostitute and bring them to a magistrate, who could confine them for up to 6 months if they refused to be examined (CW 21:351).Mill objects that police discretion can be abused, and that the hearings did not provide for a jury . While Mill clearly had equal protection and due process objections to the Acts, his opposition is based more essentially on his assessment that the Acts violate the harm-causation principle.It provides a threshold test for when state coercion is permissible, and the prostitute's activities don't reach its bar.The prostitute can be contrasted 14 Waldron (2007, p. 28, drawing on CW 21:368, 356;cf. pp. 25-26, 35).15 Jose andMcLoughlin (2016, pp. 261-62), McGlynn (2012).8 Mark Tunick with the individual who incites a frenzied mob to commit an imminent act of violence and who Mill says 'may justly incur punishment' (OL, CW 18:260).The inciter manipulates the mob and might be seen as a proximate cause of the harm the mob, in its 'frenzy', proceeds to nonvoluntarily inflict; but prostitutes who spread STDs presumably do not manipulate their clients, and if their client proceeds to give an STD to a third party the client would be an intervening voluntary cause of that harm.In On Liberty Mill does not explicitly defend one interpretation of the harm principle over the other; but his insistence that 'the prostitute is (and must be) allowed to go free' (CW 18:297), and his reason, which he provides in his testimony on the Actsthat the prostitute doesn't transmit STDs to third parties, her client does (CW 21:354, 362)makes sense only according to the harm-causation principle. 16 Re-Interpreting the Harm Principle as the Harm-Causation Principle Does this interpretation of the harm principle stand up in light of other positions Mill takes, or is Mill's position on prostitution an anomaly?Over 50 years ago D.G. Brown noted that if we take Mill's introductory formulation literally as permitting the state to limit one's liberty if doing so would prevent harm, the state could punish me to deter you from causing harm even though I did nothing that risked harming others.So Brown reformulates the principle to say that 'the liberty of action of the individual ought prima facie to be interfered with if and only if his conduct is harmful to others' (Brown, 1972, p. 135). 
David Lyons, responding to Brown, defends the harm-prevention interpretation instead. He argues that Brown's harm-causation principle fails to account for some positions Mill takes. Lyons focuses on the following passage from On Liberty:

[There are m]any positive acts for the benefit of others, which [one] may rightfully be compelled to perform; such as, to give evidence in a court of justice; to bear his fair share in the common defence, or in any other joint work necessary to the interest of the society of which he enjoys the protection; and to perform certain acts of individual beneficence, such as saving a fellow-creature's life, or interposing to protect the defenceless against ill-usage; things which whenever it is obviously a man's duty to do, he may rightfully be made responsible to society for not doing. A person may cause evil to others not only by his actions but by his inaction, and in either case he is justly accountable to them for the injury. (CW 18:224-25, my emphasis)

Lyons argues that failing to testify at trial, save a fellow creature's life, or pay taxes to contribute to joint undertakings such as the common defence does not cause non-consensual harm, and so if Mill meant to defend Brown's harm-causation principle, coercive interference would not be warranted for these omissions; yet Mill says one may be 'compelled to perform' these acts (Lyons, 1997, pp. 116-17). To avoid that inconsistency, Lyons argues that Mill defends not Brown's version of the harm principle but a 'harm-prevention' principle that can justify coercive interference in these cases. According to that principle, '[h]arm to others can be prevented not just by interfering with acts that can be said to cause, or that threaten to cause, harm to other persons'; merely preventing harm to other persons suffices as a reason for restricting behaviour (Lyons, 1997, pp. 124, […]). Giving testimony in court can be required because testimony is needed for the criminal justice system to effectively prevent future harm (121); aiding someone who is injured can be required to prevent further harm to others (119), even if the bad Samaritan, who fails to aid, wouldn't be the proximate cause of harm; and we may coerce individuals to pay taxes because cooperation requirements 'may well provide the only means of preventing or eliminating some significant harms, such as malnutrition and starvation' (122). Lyons suggests that on Mill's view one might, as a means of harm prevention, even be forced to contribute to foreign aid efforts for the purpose of preventing war (123).
Lyons' account fails to explain Mill's position that prostitution must remain legal, and Mill's opposition to the Acts. Though Mill knows prostitutes can spread STDs, and that the Acts could help reduce that risk, he still insists that prostitutes cannot be coerced, because they do not proximately cause non-consensual harm to others.

But that is not the only textual evidence favouring Brown's view that the harm principle is a harm-causation principle and not Lyons' harm-prevention principle. In On Liberty, Mill says that I should be free to publish potentially dangerous opinions, such as that tyrannicide is lawful, or that corn-dealers starve the poor, even though doing so might inspire one of my readers to harm others. But, he continues, if I directly incite a crime, by delivering my opinion to an excited mob assembled before the house of a corn-dealer, or if my encouraging tyrannicide has a 'probable connexion' to a wrongful act, my freedom of speech can be restricted.¹⁷

In theory this distinction could be justified using the harm-prevention principle. One could argue that the harm principle permits restrictions in either case (publishing one's opinions, inciting a particular person to act) to prevent harm; but that whether the state should take measures that the harm principle would permit must be decided by the principle of utility, and that principle would support punishment only of the direct inciter, given the tremendous disutility of chilling speech addressed to a general audience.¹⁸ But that is not what Mill argues. Instead, he argues that publications for the general public, even if they could lead to substantial harms, such as tyrannicide or attacks on merchants, must be permitted because they are not proximate causes of harm. For Mill, we decide whether restrictions on liberty to prevent harm are warranted based not merely on the utility the restrictions would have, but on whether there is a 'probable connexion' between the exercise of liberty and the harm: in the case of speech that could lead to harm, between speaker and perpetrator of the harmful act (OL, CW 18:228n).¹⁹ When there is not, as when I publish a tract for a general audience that happens to instigate a reader to commit a crime, the perpetrator's intervening voluntary act absolves me of responsibility for the harm that results.

17 CW 18:228n (tyrannicide); CW 18:260 (corn-dealers). For discussion see Tunick (2022, pp. 401-2).
18 This line of argument follows the approach laid out in Turner (2014).
19 For discussion see Cohen-Almagor (2017, pp. 582-86); and Tunick (2022).

Mill indicates that I can be coerced only to prevent harm of which I am the morally relevant cause, and not, as Lyons holds, to prevent harm regardless of who caused it, also when, in laying out his harm principle, he says that we cannot restrict an individual's liberty unless they had a malicious intent: 'the conduct from which it is desired to deter him, must be calculated to produce evil to someone else' (CW 18:224, my emphasis).
Malicious intent to cause harm is necessary but not sufficient to subject one to coercion. Even if the publisher of opinions supporting tyrannicide hoped their publication would incite some reader to commit murder, and that is why they published their views, Mill still would not restrict their liberty to publish without a 'probable connexion' between speaker and actor. But Mill says intent is a requirement, and that supports the interpretation of the harm principle as the harm-causation principle. The prostitute is not subject to coercion not only because there is an intervening voluntary cause of any harm to a third party that results from her act, but also because she lacks the intention to injure innocent third parties.

Two other objections to Lyons' interpretation challenge the evidence he musters to support it. First, the three omissions to which Lyons points as evidence that Mill endorses state coercion for conduct that does not itself cause harm (failing to testify at trial, save a fellow creature's life, or pay taxes) might be construed as proximately causing harm. Second, even if we disagree, Mill's harm principle could still be the harm-causation principle: owing to a further ambiguity in his principle, the interference Mill might support in these cases may fall short of the coercive exercise of power that the principle rules out. I lay out these objections in turn.

Lyons assumes that Mill's support of the use of 'compulsion and control' in the three cases can't be accounted for by the harm-causation principle. While there is no direct textual evidence either way regarding whether Mill regards any of these three failures to act as proximately causing harm, a plausible case can be made that they do. My failure to testify in a criminal trial could proximately cause non-consensual harm by letting a dangerous person go free. Lyons may assume that person would be an intervening voluntary cause of any future harm they inflict, just like the prostitute's client who, after receiving an STD from the prostitute, then spreads the disease: their intervening voluntary act eliminates me as the proximate cause of the resulting harm. But when my failure to testify results in the release of a dangerous suspect, their very release could cause definite damage to assignable individuals who sought justice, or who would suffer anxiety over a looming threat the defendant on trial would pose to them if released. I am the proximate cause of these harms. My failure to pay taxes that help fund the common defence might also be said to proximately cause foreseeable harm to assignable individuals whose interests are set back by now having to shoulder an unfair share of the overall tax burden. While the harm is diffuse (a single individual's evasion of taxes may cause no perceptible damage to anyone in particular), the aggregate effects of noncompliance do constitute perceptible damage; and there is no intervening voluntary cause of that harm to others to nullify the role the tax evader plays as proximate cause. Lyons may assume the harms being prevented when the state coerces me to provide financial support for the common defence are the assaults we would suffer if we had inadequate defences, and he would be right that I am not the proximate cause of those harms, the assaulters are; but the harm Mill could have in mind is the increased tax burden everyone else faces, of which I am a proximate cause.
The hardest case to reconcile with the harm-causation principle may be that of the bad Samaritan, such as the person who doesn't attempt to rescue a drowning swimmer when they easily could. For Mill to think it justified to punish the bad Samaritan using the harm-causation principle, he would first have to think that failing to prevent the drowning itself causes the drowning, even though the drowning would have occurred if the bad Samaritan were nowhere in the vicinity. One might think, in general, that omissions or inaction cannot be the cause of harm. Mill, however, disagrees: 'a person may cause evil to others not only by his actions but by his inaction' (OL, CW 18:225).

But Mill would also have to think the omission is a cause of harm. Recall that for Mill, a harm is a setback to interests that are regarded as rights. A swimmer who drowns because of a strong rip current suffers misfortune. But for Mill to justify punishment of the bad Samaritan using the harm-causation principle, he would have to think that in failing to act, they set back interests that are regarded as rights and therefore harmed the swimmer; he would have to think that the drowning person had a moral or legal right not to suffer that misfortune.²⁰ Because of the ambiguity in Mill's harm principle that I discussed in section 3 regarding what constitutes a right the violation of which could be considered a harm, Mill could regard failing to rescue as violating a right, though we can't be sure if he would. In his essay 'Comte and Positivism' Mill says that someone who disappoints our expectations of what a moral person would do can properly be blamed: 'inasmuch as everyone, who avails himself of the advantages of society, leads others to expect from him all such positive good offices and disinterested services as the moral improvement attained by mankind has rendered customary, he deserves moral blame if, without just cause, he disappoints that expectation. Through this principle the domain of moral duty, in an improving society, is always widening'.²¹ Disappointing such expectations might be seen as a breach of promise that sets back interests of others that are regarded as rights, thereby harming them (Berger, 1997, pp. 49-50). In On Liberty Mill says that a breach of contract can be made a 'subject of legal punishment' (CW 18:295). Mill could think that a legislature might 'raise' a promise or contract by creating a right to be rescued, just as it might create a right that others testify in court cases impacting me, or that I pay only my fair share of taxes and not more.

21 CW 10:337-38, quoted in Brown (1972, p. 153).
Yet Mill might be wary of adopting this position. Doing so could set a precedent for legislators to expand the state's authority to restrict individual liberty simply by declaring rights. The state could declare a right not to be offended or displeased. Mill, in defending individuality, forcefully objects to the 'monstrous principle' that would establish an expansive social right that others not act to 'weaken and demoralize society' (OL, CW 18:288). Mill does say that what rights there are is settled by the principle of utility (Utilitarianism, CW 10:250; cf. OL, CW 18:224), and one might think Mill would trust legislators to reject, using that principle, expansions of rights that threaten individuality. Yet presumably legislators enacted the Contagious Diseases Acts to promote social utility. To do so, they implicitly asserted a right of innocent third parties not to face a risk of disease, the protection of which would justify coercing prostitutes. Mill, who opposed the Acts, could doubt that utilitarian legislators can be trusted to adequately respect individual liberty. He could think we need the harm-causation principle's requirement that to restrict liberty not only must a legislatively declared right be violated, but the targeted activity (or omission) must proximately cause setbacks to the interests of others that result in 'definite damage'. Only then would individual liberty be protected against a state that enforces an unduly expansive list of rights, and not be 'swallowed up' by utilitarianism. If that is how Mill would resolve the ambiguity in his harm principle of whether there is a right-violation, he may well see the failure to rescue as triggering the harm-causation principle. He explicitly says one can cause evil by their inaction; and he could see my failure to rescue you from drowning as a setback to your interests that causes definite damage, a requirement not met, at least to the same degree, where I merely displease or offend you, or beat you in a competition.
In addition to the two ambiguities of Mill's harm principle I discussed in section 3, there is a third: what constitutes an 'interference with liberty' or rightful 'exercise of power' against one's will that is not warranted unless it prevents harm to others? Legal punishment is the most obvious example, and Mill explicitly refers to it (OL, CW 18:292). But what about fines? Time, manner, or place regulations that merely limit the circumstances under which one may act but do not outright prohibit the activity? Refusing to subsidize or license the activity? What about forms of interference that are undertaken not authoritatively by the state, but by private individuals, such as exhortations, group interventions, or boycotts? Mill isn't entirely clear. When Mill says those who set back the interests of others may be 'subjected either to social or to legal punishment' (OL, CW 18:292) he has in mind punishment inflicted not only by state actors but by private individuals, acting either in isolation or in coordination with others. He gives as examples of interference, or an 'exercise of power', 'compelling' someone to do their duty (CW 18:224) and 'compulsory labor' (CW 18:295), so he has in mind coercive exercises of power: interferences that force one to act in a certain way. That he means to single out 'coercive' exercises of power is evident also from passages where he refers to other sorts of interference which he says must be permitted: non-coercive means of persuasion such as exhortations or expressions of contempt. Mill thinks such 'natural penalties' are permissible means to morally improve those whose conduct we find distasteful or contemptible (CW 18:282). Not only may they be inflicted in response to self-regarding activity (activity that does not harm others); they may even be a more appropriate form of interference than an exercise of coercive power against someone who does harm others. The harm principle permits or warrants but does not require the use of coercion.

Immediately after giving his examples of omissions for which one can be 'made responsible', Mill adds that to be made justly accountable to society for one's inaction 'requires a much more cautious exercise of compulsion' than is required to respond to one's actions. 'To make any one answerable for doing evil to others, is the rule; to make him answerable for not preventing evil, is, comparatively speaking, the exception'. Mill then says that in deciding whether the person failing to act can be held 'justly accountable', we need to consider 'the special expediencies of the case: either because it is a kind of case in which he is on the whole likely to act better, when left to his own discretion […]; or because the attempt to exercise control would produce other evils, greater than those which it would prevent […]' (CW 18:225). Here Mill echoes Bentham's argument in Introduction to the Principles of Morals and Legislation that there are 'cases unmeet for punishment' where punishment is warranted but for utilitarian reasons is not implemented (Bentham, 1789, ch. 13).
But that may not be Mill's main point. In recognizing degrees of responsibility depending on whether one acted or failed to act, Mill may implicitly acknowledge that there are varying degrees to which someone might be said to proximately cause resulting harm. Mill doesn't think a prostitute proximately causes harm to non-consenting third parties, and so the prostitute can't be punished or subject to other coercive interference for trading in sex; but there are other cases where there is a less attenuated connection between act or inaction and result. Mill, in referring to a 'more cautious exercise of compulsion', may also have in mind how coercion is a scalar property, and that there may be ways of 'compelling performance' falling short of punishment. The ambiguity in the terms 'interference with liberty' and 'exercise of power' in his harm principle provides Mill some leeway so that even if he thought that, like prostitutes, bad Samaritans, tax evaders, or those failing to testify in court did not proximately cause harm, in saying they could be 'compelled to perform' he could be referring to means of compelling that fell short of punishment, such as fines, or the exhortations and other natural penalties he allows even for self-regarding conduct that does not proximately cause harm to others. More likely, given that Mill says that a person may 'cause' evil even by inaction (CW 18:225), he could think that they proximately cause harm at least to some degree, which could support ways of 'compelling to perform' that may even include punishment. In either case, we needn't follow Lyons in rejecting the harm-causation interpretation of Mill's harm principle to explain Mill's willingness to hold these individuals accountable for their omissions.

Mill's Objection to Licensing Prostitution

Lyons' view that Mill is willing to exercise coercion upon an individual to prevent or reduce harm even if that individual is not the morally relevant cause of harm is contradicted by Mill's claim that prostitution must be legal because the prostitute does not proximately cause harm to non-consenting third parties. In this section I present a further objection to Lyons' reading of the harm principle. As noted in section 1, one of Mill's major objections to the Acts is that by in effect licensing prostitutes, the state legitimizes their conduct. (Mill recognizes that the Acts don't issue licenses, but he says 'there is hardly any distinction' between what the Acts require and a licensing system (CW 21:357), though he acknowledges that licenses 'have still more the character of toleration of that kind of vicious indulgence, than exists under the Acts at present' (CW 21:356).) Mill's objection is puzzling to those who see his political philosophy as centrally concerned with harm reduction (Waldron, 2007). By turning to his reasons for opposing licensing, we see that Mill is more concerned with promoting moral progress (subject to the constraints imposed by the principle of liberty) than he is with reducing harm.
Mill does not think the state should prohibit prostitution, because prostitutes don't proximately cause harm. But to license is not to outright prohibit. Mill thinks individuals should be free to engage in risky self-regarding behaviour without state meddling, but not necessarily at liberty to engage in commerce with each other free from state regulations that could ensure the transactions are safe. In On Liberty Mill says that 'trade is a social act' that 'affects the interest of other persons' and therefore comes under the jurisdiction of society (CW 18:293). Mill opposes regulations restricting a buyer's ability to purchase goods and services for their self-regarding aims (CW 18:288); but he allows for regulations of sellers. The state can't restrict my liberty to buy poisons for self-regarding purposes, for example, but it can regulate sellers of poison: To require [of a buyer of poisons] in all cases the certificate of a medical practitioner, would make it sometimes impossible, always expensive, to obtain the article for legitimate uses. The only mode apparent to me, in which difficulties may be thrown in the way of crime committed through this means, without any infringement, worth taking into account, upon the liberty of those who desire the poisonous substance for other purposes, consists in providing what, in the apt language of Bentham, is called 'pre-appointed evidence'. (CW 18:294) Mill then explains: sellers can be required to document purchases to deter crimes involving poisons, or to help catch a criminal after the fact (CW 18:295). Because the exchange of sexual services for money is a 'social act', on this reasoning the state should be permitted to regulate its sale as well, without putting up a barrier for the buyer. So why does Mill oppose licensing of prostitutes, which would not create an outright barrier to purchasing sex?

Mill's main objection is that licensing prostitutes will legitimize prostitution. He says this repeatedly (cf. Collini, p. xxxviii: '[Mill] makes the Acts' official endorsement of vice the chief ground of his objection to them', citing CW 21:353, 356, 360, and 371). Mill distinguishes 'attacking evils [such as STD transmission] when they occur, in order to remedy them', from 'making arrangements beforehand which will enable the objectionable practices to be carried on without incurring the danger of the evil' (CW 21:358). He opposes the latter because he does not think the state should 'enable' or condone the morally objectionable practice: 'I do not think that prostitution should be classed and recognized as such by the State' (CW 21:359); he opposes 'toleration of that kind of vicious indulgence' (CW 21:356). By having hospitals devoted to prostitutes, the State would be going out of its way to facilitate prostitution, which would legitimize the practice (CW 21:354).

To be sure, Mill gives apparently prudential reasons for not wanting to legitimize prostitution. If prostitution is made safer it will be encouraged (CW 21:355), increasing the demand for prostitutes and in turn the supply (CW 21:364). If we refuse to condone prostitution, we'd impress on people that it is immoral, and there may be fewer prostitutes on the streets (CW 21:368).
But in wanting to reduce even safe prostitution, Mill shows that his overriding concern is not harm-reduction: it is to discourage immorality, or 'moral injury' (CW 21:371). Nor is his main concern, as some have suggested, a feminist opposition to male exploitation of women. Mill objects even to safe prostitution, but not because he thinks women are forced into prostitution; in his testimony before the Commission Mill says that women 'voluntarily' choose to be prostitutes. He objects, rather, because the life of prostitution that they choose is 'degrading' (CW 21:368). Members of the Commission who favoured the Acts because they would reduce sexual disease were clearly irked by Mill's opposition: they asked Mill if he is really fine letting women come out and spread disease right and left, or leaving them to 'rot and die' rather than save them with the Acts (CW 21:365, 366). First Mill replies that the question is unfair; anyone suffering a wretched disease can be laid hold of and given proper medical treatment. He then sticks to his objection, concluding his testimony by reiterating that we should not make 'safer than it would naturally be' a 'course which is generally considered worthy of disapprobation', for if we did, it would not be 'considered very bad by the law, and possibly may be considered as either not bad at all, or at any rate a necessary evil' (CW 21:371). Mill opposes the Acts because they would undermine a commitment to moral progress he thinks the state should pursue, a commitment that is grounded in his distinct theory of utilitarianism. While one might think that a utilitarian should support the Acts because they surely would reduce harm, Mill's utilitarianism seeks not harm reduction, but moral improvement. Mill thinks we should seek the 'higher pleasures' enjoyed by the 'cultivated mind' (Utilitarianism, CW 10:213, 218, 249; for further discussion see Tunick, 2022, pp. 399-400). It is not the quantity but the overall quality of lives lived that is to be promoted.

One might object to Mill's position: it is more important to prevent harm than discourage immorality. Mill's position is at odds, for example, with government programs that provide drug addicts with sterile needles and tools to check that the illegal drugs they take are not laced with lethal substances, on the ground that it is more important to reduce harm than morally condemn drug use (Abby Goodnough, 'Helping Drug Users Survive, Not Abstain: "Harm Reduction" Gains Federal Support', New York Times, June 27, 2021). One might also challenge Mill's assumption that by licensing an activity the government necessarily expresses approval of it. But regardless of whether we agree with Mill, his position on licensing indicates that Mill prioritizes not only individuality but moral progress over harm reduction.

Mill on Public Solicitation

Mill opposes criminalizing prostitution based on his defence of individuality: individuals should be free to make and pursue their own choices of how to live so long as they don't proximately cause nonconsensual harm to others. While the state must respect individuality by adhering to the harm principle, Mill also thinks the state should promote moral progress. This is why he opposes the licensing of prostitutes even though licensing would reduce harm.
One puzzle remains concerning Mill's position on prostitution. As I noted in section 1, Mill apparently supports public solicitation laws: laws that prohibit prostitutes from advertising their services in public places. In his testimony on the Acts, Mill says the police have a duty to 'prevent solicitation in the streets', 'in order to preserve the order of the streets' (CW 21:369). (I say 'apparently supports' because Mill had just been discussing under-age prostitutes, and the question abruptly shifted to solicitation in streets: it's possible, though unlikely, that Mill was referring here only to street solicitation by under-age girls.) This might seem to contradict On Liberty's defence of the principle of liberty: if prostitution in private does not proximately cause harm and warrant coercive state interference, why would its solicitation in public? But in a notoriously cryptic passage in On Liberty Mill opens the door to restrictions of normally self-regarding acts when done in public, if they are 'indecent'. Mill writes: Again, there are many acts which, being directly injurious only to the agents themselves, ought not to be legally interdicted, but which, if done publicly, are a violation of good manners, and coming thus within the category of offences against others, may rightfully be prohibited. Of this kind are offences against decency […]. In addition to solicitation Mill could have in mind acts such as sex in a public place, or offensive displays akin to the displays of swastikas in a neo-Nazi march. (Both Wolff and Riley discuss the public sex example and offer others, including masturbation, self-mutilation (Wolff, 1998, p. 4), swearing insultingly at one's wife in a public place, a parade by the KKK, and flatulating in public (Riley, 2015b, pp. 272, 275-77, 280-81).) Targeting offenses against decency sounds like the very legal moralism Mill explicitly disavows in saying that power cannot be exercised against someone's will except to prevent them from non-consensually harming others. If prostitutes don't non-consensually harm others, isn't it inconsistent for Mill to think that public solicitation may be prohibited? (Wolff, 1998, p. 4).

We could just dismiss the passage. But that would be a mistake: we shouldn't ignore Mill's commitment to moral progress; it is a central component of his political philosophy. But how can we reconcile it with Mill's defence of the harm principle? Riley finds no inconsistency, arguing that public indecencies cause harm, and while I agree that we can reconcile Mill's position on public solicitation and other indecencies with his harm principle, I would take a different route to do so. Some of Riley's examples of indecencies involve threats to public health: public urination, defecation, vomiting, sneezing (Riley, 2015b, p. 275), and I agree these present no conflict with the harm principle, as these actions could foreseeably cause harm.
In the case of nuisances most of us would regard as non-harmful, such as public sex, Riley presents what seems to me an unconvincing argument: they cause perceptible damage by crowding out higher-priority uses of public places (276); unconvincing because a failure to maximize the efficient use of public resources violates no right of an assignable individual not to suffer definite damage or perceptible hurt. Riley suggests that such public indecencies can disappoint 'legitimate expectations' that emanate from laws and customs, and thereby deserve 'moral blame', and that 'deliberative majorities' may reasonably declare them as wrongful (274-75). In that case, leaving aside the harm principle's requirement that there be 'perceptible hurt' and 'definite damage', the public indecency would set back interests that are regarded as rights, meeting a key criterion for causing harm. But as I noted in section 4, that approach risks swallowing up the harm principle into utilitarianism, as legislators could simply declare rights not to be displeased or offended.

There is another way to resolve the apparent inconsistency. When in his testimony on the Acts Mill agrees that the police have a duty to prevent public solicitation, or in 'On Liberty' he says public indecencies can be rightfully prohibited, he doesn't clarify what measures the police may take to preserve the public order. This calls to mind the ambiguity in his harm principle that I introduced in section 4: what constitutes an 'interference with liberty' or rightful 'exercise of power' that is warranted only to prevent harm to others? Mill's testimony would conflict with his harm-causation principle if he means the state may forcibly detain and punish prostitutes, because prostitutes, on his view, don't proximately cause harm to others. But if the 'rightful prohibition' of indecent public activity that Mill allows for in 'On Liberty' is a time-manner-place regulation, it could be distinguished from the coercive restriction of liberty that the harm principle rules out, inasmuch as the activity is still permitted in private. When in his testimony Mill agrees that police may prevent solicitation 'in the streets', he may have in mind a 'place' regulation similar to zoning laws that restrict the location of bars and adult entertainment clubs.

Conclusion

Mill believes the state may not punish prostitutes because prostitutes do not proximately cause non-consensual harm, and we shouldn't coerce people merely because we think they are acting immorally; yet he opposes state licensing of prostitutes, which could reduce harm, because he does not want to legitimize an immoral practice. While Mill defends his harm principle because it protects individuality, his defence of non-neutral state policies that promote moral progress, such as refusing to license prostitution, can potentially be more of a threat to individuality than Mill allows for. Consider laws that recognize marriages between a man and woman but not between same-sex couples, and that deny important benefits to non-married partners. One might argue that these laws do not restrict liberty in the way a law prohibiting homosexual sex would, because being denied tax benefits, hospital visitation rights, or countless other benefits is not the same as having one's liberty curtailed: liberty is freedom from hindrance and physical restraint, not entitlement to government support.
But this argument fails to recognize that when the state refuses to recognize a marriage it inflicts dignitary wounds upon, stigmatizes, and demeans same-sex couples, and can injure or harm their children. Mill's commitment to moral progress not only can risk increasing the amount of harm in the world; it can also sometimes threaten the very individuality Mill wants to protect.

The threat the pursuit of moral progress poses to individuality is apparent when we turn to passages in which Mill supports non-coercive interference both by the state and individuals to promote moral progress, interference that is permitted by his harm principle because it falls short of 'exerting power' or 'compulsion'. Such natural penalties, where I shun or voice displeasure or contempt to you because I disapprove of your self-regarding activities, can promote individuality by being a means of exercising our freedom of expression and association. But they can also stifle individuality. Mill is well aware of this and sets limits on the exertion of social pressures: we may avoid the offending person but we may not parade our avoidance (OL, CW 18:278); we may privately warn our mutual friends about him, or deny him the 'perks of affection'; but perhaps not organize boycotts. But by leaving his harm principle ambiguous as to where in the range of the scalar property of coercion an exercise of power becomes illegitimate, Mill risks justifying forms of interference that may compromise his commitment to individuality and liberty.

Mill's discussion of prostitution and the Acts may not leave us with an entirely satisfactory position, but it is significant. It strikingly illustrates how both his concern for individuality and his utilitarian-grounded concern for moral progress prevail over the goal of harm reduction. We miss this significance if we construe the harm principle as a harm-prevention principle.
Melt Blending with Thermoplastic Starch
Introduction Starch is a natural polymer synthesized by green plants as an energy source. In comparison with low-cost synthetic polymers, starch is an inexpensive, abundant and renewable raw material for the development of sustainable polymeric materials. It has been used in its native granular form as a rigid filler or transformed into a thermoplastic material for melt blending with synthetic or natural polymers. Polymers filled with dry starch granules behave as typical composite materials, where modulus increases and ductility decreases due to the stiffening effect of the starch granules (Willett, 1994, Kim et al, 1995, Chandra & Rustgi, 1997). An important disadvantage shown by polymeric composites filled with granular starch is the low starch content that can be added, especially for applications where high ductility is required (Griffith, 1977). In contrast to the ordered structure of starch molecules in granular starch, thermoplastic starch (TPS) is an amorphous material that can flow and be deformed like any synthetic polymer (St.-Pierre et al., 1997). The crystallinity of starch granules is destroyed by the application of heat and shear in the presence of moisture during the gelatinization process. The addition of a good plasticizer, such as glycerol, allows TPS to be extruded at the processing temperatures of most commodity polymers (St.-Pierre et al., 1997). The mechanical performance of TPS blended with synthetic polymers depends on a series of parameters including blend morphology (particle size and shape, and particle dispersion and distribution), interfacial adhesion and the intrinsic characteristics of the TPS (Rodriguez-Gonzalez et al., 2003b). It has been reported that melt blending of TPS with synthetic polymers is an excellent alternative for the development of sustainable and more environmentally friendly products (Rodriguez-Gonzalez et al., 2003b).

Rheological and thermal properties of water-free TPS The rheological and thermal properties of water-free TPS materials having high glycerol contents (29, 33, 36 and 40%) were evaluated by DSC analysis and rheological measurements in shear and oscillatory modes (Rodriguez-Gonzalez et al., 2004). TPS materials were labeled according to their glycerol content; hence, TPS29, TPS33, TPS36 and TPS40 have 29, 33, 36 and 40% glycerol, respectively. As previously mentioned, the TPS materials prepared in this work are almost water-free starch-glycerol systems. Compared with previous work, the TPS materials prepared here are binary systems, which allow a more straightforward evaluation of the effect of glycerol on the thermal transitions of starch. DSC analysis of TPS shows a thermal transition below ambient temperature that decreases as glycerol content increases (Figure 2). On the other hand, no thermal transitions are observed between 25 and 200°C (not shown). The Tg of TPS decreases from -45 to -56°C as glycerol content increases from 29% to 40%. Van Soest et al. have reported a Tg of +59°C for extruded TPS materials containing a starch/water/glycerol ratio of 100:27:5 (Van Soest et al., 1996). Forssell et al.
(1997) studied the thermal transitions of TPS materials prepared in a melt mixer as a function of glycerol and water content. Depending upon the composition, the TPS materials presented one or two thermal transitions. In that work, at the lowest water content (ca. 1%) the upper transition of TPS decreases from 145 to 70°C as the glycerol content is increased from 14 to 29%, while only TPS compounded with 29 and 39% glycerol showed lower transitions, both at about -50°C. The upper transition was attributed to a starch-rich phase while the lower transition was related to a starch-poor phase. Lourdin and coworkers prepared TPS cast films by mixing starch with different amounts of water and glycerol (Lourdin et al., 1997a; Lourdin et al., 1997b). Films having around 13% water content showed a reduction of Tg from 90 to 0°C when glycerol content increased from 0 to 24% (Lourdin et al., 1997a). In that case they observed a glassy-to-rubbery transition of TPS at around 15% glycerol. In a further paper, they compared the Tg of TPS films having around 11% water with respect to glycerol content and found that Tg decreased from 126 to 28°C when glycerol content was increased from 0 to 40% (Lourdin et al., 1997b). Discrepancies in Tg values as a function of glycerol content can be related, as mentioned by Kalichevsky, to the mixing history during TPS preparation (Kalichevsky et al., 1993).

During on-line measurements, the TPS extrudates did not present bubbles owing to the near absence of water. The pressure readings of TPS36 and TPS40 at 150°C were quite regular while those of TPS29 were mostly irregular; for this reason only TPS36 and TPS40 were evaluated. As observed by other authors (Aichholzer and Fritz, 1998; Della Valle et al., 1992; Lai and Kokini, 1990; Senouci and Smith, 1988; Willett et al., 1995; Willett et al., 1998), the viscosity (η) of both the TPS and PE1 melts displays power-law (shear-thinning) behavior over the shear-rate (γ̇) interval developed under the die extrusion conditions (Figure 3). The η of the TPS materials depends on the plasticizer content: increasing the glycerol content from 36% to 40% reduces the η of TPS36 by 20% (at γ̇ ≈ 130 s⁻¹). TPS exhibits the rheological behavior of a typical gel, characterized by a storage modulus (G', Figure 4a) larger than the loss modulus (G", Figure 4b), with both moduli largely independent of frequency over the experimental window (Ross-Murphy, 1995). This behavior is produced by the presence of an elastic network embedded in a softer matrix. The rigidity in those regions can be produced by chemical or physical crosslinking. The structure of the elastic network has been related to the crystallinity derived from the complexation reaction between amylose and lipids (Conde-Petit & Escher, 1995; Della Valle et al., 1998) and the physical entanglement of the high-molecular-weight polysaccharides (Della Valle et al., 1998; Ruch and Fritz, 2000).
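Since the melts are described as power-law (shear-thinning) fluids, a short worked example may make the model concrete. The sketch below, a minimal Python illustration, recovers the flow-behaviour index n and consistency K of η = K·γ̇^(n−1) by linear regression in log-log space; the viscosity/shear-rate pairs are synthetic stand-ins, not the measured TPS or PE1 data.

```python
# Minimal sketch: fit eta = K * gamma_dot**(n - 1) in log-log space.
# The data points are hypothetical, chosen only to illustrate n < 1
# (shear thinning); they are not the values plotted in Figure 3.
import numpy as np

gamma_dot = np.array([50.0, 80.0, 130.0, 200.0, 300.0])  # shear rate, 1/s
eta = np.array([900.0, 650.0, 480.0, 360.0, 270.0])      # viscosity, Pa*s

# log(eta) = log(K) + (n - 1) * log(gamma_dot)
slope, intercept = np.polyfit(np.log(gamma_dot), np.log(eta), 1)
n = slope + 1.0        # flow-behaviour (power-law) index
K = np.exp(intercept)  # consistency coefficient

print(f"n = {n:.2f} (n < 1 means shear thinning), K = {K:.0f} Pa*s^n")
```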
As expected, increasing the glycerol content in TPS results in a reduction of both G' and G". However, the trend in the modulus curves was nearly the same regardless of the glycerol content. From the study of low-concentration starch dispersions, Conde-Petit and Escher (1995) showed that the formation of amylose-emulsifier complexes modifies the viscoelastic response of potato starch dispersions. Crystalline regions produced during the amylose-emulsifier complexation form an elastic network, which is responsible for the liquid-like to solid-like viscoelastic modification. From the similarity of the trends of the G' curves shown in Figure 4a, it can be inferred in this work that varying the glycerol does not affect the nature of the hypothetical crystalline elastic network; it just plasticizes the amorphous fraction of the starch.

The study of the viscoelasticity of starch-based materials has mainly focused on concentrated gels and dispersions (≥ 5% starch). In this work, the viscoelastic behavior of water-free TPS at high glycerol contents has been evaluated at 150°C. G' decreases as glycerol content increases and the changes are similar at both low and high frequencies. Della Valle and co-workers also studied the behavior of a water-free TPS at 150°C and found that the decrease of G' with glycerol content was dependent on frequency (Della Valle et al., 1998). However, that material was obtained by subjecting the TPS to a separate drying step, a process which can induce structural changes in the starch. The proportional reduction of G' as a function of glycerol content observed in this work is similar to that observed in starch gel systems (Kulicke et al., 1996). Figure 6a shows that the reduction of the glycerol content from 40% to 33% results in a quasi-linear increment of G', while the reduction from 33% to 29% glycerol produces a larger variation in G'. In the case of the elastic modulus of polymer composites, percolation theory explains the non-linearity produced by the phase-inversion effect at high filler content (Willett, 1994). The limit of glycerol plasticization that produces the non-linearity observed in the G' of TPS at a concentration around 30% glycerol can be explained in a similar way. TPS can be considered as a system composed of a hard elastic network and soft amorphous regions. Amylose complex crystallites, highly entangled starch molecules, poorly plasticized starch-rich sites, or a combination of these could compose the hard elastic network. The soft amorphous regions could be composed of well-plasticized, glycerol-rich starch. Even though the elastic network is present at 33% glycerol, the soft amorphous regions dominate the viscoelastic response. Increasing the glycerol content beyond this concentration produces a relatively small reduction in the rheological parameters. On the other hand, below 30% glycerol the phase inversion from a soft to a hard matrix occurs, resulting in the domination of the viscoelastic response by the hard elastic network, in good agreement with percolation theory. That suggests a glycerol plasticization threshold at a concentration around 30%.
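To make the threshold argument concrete, the following sketch locates the glycerol content at which the slope of G' versus glycerol content changes most abruptly. The modulus values are hypothetical placeholders; only the procedure, not the data, is suggested by the discussion above.

```python
# Hedged sketch: flag the largest change of slope in G' vs. glycerol.
# The G' values below are invented for illustration only.
import numpy as np

glycerol = np.array([29.0, 33.0, 36.0, 40.0])     # wt% glycerol
g_prime = np.array([2.0e5, 6.0e4, 4.5e4, 3.5e4])  # G' in Pa (illustrative)

slopes = np.diff(g_prime) / np.diff(glycerol)     # slope of each segment
jump = np.abs(np.diff(slopes))                    # change between segments
k = int(np.argmax(jump))                          # index of the sharpest break
print(f"sharpest slope change near {glycerol[k + 1]:.0f}% glycerol")
```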
Blending with polyethylene Blends of TPS with synthetic polymers have shown the typical characteristics of immiscible polymer blends (St-Pierre et al, 1997). The melt blending of TPS with synthetic polymers has given rise to a series of scientific and technological developments. Such works differed in the mixing protocol and the type of additives used. Some authors proposed the use of two steps for the preparation of TPS-based blends (Aburto et al., 1997, Bikiaris et al., 1997a, 1997b, 1998, Prinos et al., 1998, Averous et al., 2000a, 2000b, 2001a, 2001b, Martin & Averous, 2001) while others preferred one-step processes (Dehennau & Depireux, 1993, St-Pierre et al., 1997). Starch-based blends prepared in two steps are generally characterized by the preparation of TPS in a separate extrusion step. St-Pierre and coworkers presented a one-step blending process for TPS-based polymer blends (St-Pierre et al., 1997). They developed an extrusion system combining a twin-screw extruder (TSE) with a single-screw extruder (SSE). TPS was prepared in the SSE, and then blended with LDPE in the last sections of the TSE. Using such an extrusion system, they demonstrated experimentally that a certain morphological control of PE/TPS blends could be achieved by varying the TPS concentration from 0 to 22 wt%. Those blends showed an unusually high level of ductility. An improved approach for LDPE/TPS blends in a one-step process was developed by Rodriguez-Gonzalez and coworkers (Rodriguez-Gonzalez et al., 2003b). It consisted of an extrusion system equipped with a single-screw extruder, from which molten LDPE is fed to the middle section of a twin-screw extruder. Suspensions of starch, glycerol and water were fed to the hopper of the twin-screw extruder and, as described in section 3, water-free TPS having 29, 36 and 40% glycerol (TPS29, TPS36 and TPS40, respectively) was prepared and melt blended with the LDPE as depicted in Figure 5. In order to evaluate the effect of the PE and TPS viscosities on the morphology of LDPE/TPS blends, two commercial LDPE resins, LDPE2040 (PE1, MFI = 12 g/10 min) and LDPE2049 (PE2, MFI = 20 g/10 min), and the three TPS materials were used.
Effect of glycerol content on morphology PE/TPS blends display a discrete morphology where LDPE is the matrix, especially at low TPS content. The combined effect of glycerol content and the elongational flow exerted on PE/TPS blends (TPS concentration ≈ 30 wt%) during quenching can be observed in Figure 6. PE1 blends prepared with TPS40 and TPS36 (Figures 6a and 6b) show a high level of deformation in the machine direction. Conversely, blends compounded with TPS29 show very little deformation (Figure 6c), and even less when prepared with PE2 (Figure 6d). The singular morphologies displayed by the PE/TPS blends are closely related to the differences in viscosity of both the TPS and the PE. As mentioned in section 3, it was found that 30% glycerol is required to effectively plasticize starch (Rodriguez-Gonzalez et al., 2004). From Figure 6, it can be seen that below that limit, the viscosity and elasticity of TPS are too high to allow the LDPE matrix to greatly deform the TPS dispersed phase. When the low-viscosity PE2 is used, it can be seen (Figure 6d) that the dispersed TPS particles are spherical and that the particle size has increased compared to that of the PE2/TPS29 blends (Figure 6c). These results clearly demonstrate that a high degree of morphological control is possible for this system and that the full range from a spherical dispersed phase to a highly deformed fibrillar phase can be obtained at a given TPS concentration level. In fact, it is apparent that control of the glycerol concentration allows one to modify the state of the starch from that of a solid particle, to that of a quasi-crosslinked dispersed phase, to that of a highly deformable material.

Effect of TPS concentration on morphology The axial-direction morphology of the PE1/TPS36 blends was a combination of large fiber-like structures with small spherical particles (Figure 7). Increasing the TPS concentration reduces the number of small spherical particles due to particle-particle coalescence. The larger particle size of the TPS domains plus particle coalescence leads to the lengthening of TPS fibers in the machine direction. At high TPS loadings (above 45 wt%), it was difficult to distinguish whether LDPE or TPS constituted the matrix: both components appear to be fully continuous in the axial draw direction. The orientation imposed by the elongational flow field at the die exit plays an important role in the continuity development of starch in these PE/TPS blends.

The starch domain size increases in PE1/TPS29 as the TPS29 content increases (Figure 8). In contrast to the high continuity observed for the low-viscosity, low-elasticity TPS36, the TPS29 particles remain dispersed in a PE1 matrix, even at high loadings (TPS concentration ≈ 49 wt%). It can be observed from Figure 8 that increasing the concentration of the TPS at low glycerol contents has little effect on the particle shape.

Elongation at break (εb) The relative elongation at break (εb/εb0) in the machine direction of PE1/TPS blends is shown in Figure 9a. The results are excellent and demonstrate that at high glycerol contents (36% and 40%), the blends have an εb comparable to that of the virgin polyethylene (εb0) even at 53 wt% TPS. The εb values of PE1 blends drop with the addition of TPS29. If these data are compared with the morphology results from the previous section, it is clear that the high εb for blends with TPS36 and TPS40 is closely related to the ability to deform the TPS phase. In St-Pierre's work
(St-Pierre et al., 1997), PE/TPS blends presented a maximum in εb at around 10 wt% TPS followed by a dramatic drop at 22 wt%. In this work, the improved extrusion process and the controlled deformation of the TPS phase yield an important improvement in the εb of PE/TPS blends as a function of composition, as observed in Figure 9a. Such an improvement in εb is also due, in part, to a highly effective removal of water by venting before blending with polyethylene. In St-Pierre's process, TPS was blended with LDPE and then passed through the venting section. At low concentration, TPS was probably encapsulated in the LDPE matrix, which impeded proper water removal. The presence of water at the blending temperature (150°C) can lead to the formation of bubbles in the extrudate, which weakens the final product (Verhoogt et al., 1995). In the present system, water was completely devolatilized from the TPS before mixing with polyethylene (Favis et al., 2003).

Young's modulus The relative Young's modulus (E/E0) is shown in Figure 9b. Once again the results are excellent. E can be maintained at high levels even at high loadings of TPS36 and TPS40. At lower levels of glycerol (TPS29) the E of the blend can be seen to even exceed that of the neat polyethylene. These are unusual results considering the high level of immiscibility between PE and TPS. The results also indicate the potential of tailoring the mechanical properties of the blend through an appropriate glycerol content. This unexpected result can be explained by good interfacial contact. Leclair and Favis found that the compression exerted by a crystalline matrix (HDPE), during crystallization, on an amorphous dispersed phase (PC) can result in good interfacial contact and a higher elastic modulus (Leclair and Favis, 1996). They also observed that this effect had a positive influence on the modulus only when the contraction took place on a smooth, non-deformable surface.

Hydrolytic degradation of LDPE/TPS blends It is well known that the acid hydrolysis of starch involves the random cleavage of glycoside bonds, producing fragments ranging from oligosaccharide fractions to glucose units (Leach, 1984). In order to quantitatively determine the extent of continuity of the TPS phase in the blends, samples were exposed to hydrolytic extraction. Figure 10 shows the percent continuity of starch as a function of TPS content for PE1/TPS40 and PE2/TPS40 blends. In both cases there is a monotonic increase in continuity as the concentration of TPS increases. At concentrations of 43% or lower, blend morphology plays an important role in the percent continuity of LDPE/TPS40 blends: blends exhibiting elongated particles show higher percent continuity at comparable concentrations than those displaying a spherical morphology. For instance, PE1/TPS40 blends containing 32% TPS40 have 66% continuity while PE2/TPS40 blends comprising 31% TPS40 have only 38% continuity. Above 50% TPS40, at almost 95% continuity, blend morphology does not make any significant difference. At 62 wt% TPS40 the percent continuity of the starch domains reaches 100% and the starch phase could be completely extracted. This is indicative of the full connectivity of the starch particles throughout the entire sample (Figure 10). The use of hydrolytic degradation as a screening technique prior to biodegradation studies could be an important tool to predict enzymatic and bacterial biodegradation.
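The continuity figures above are consistent with the usual definition of percent continuity: the mass of starch removed by extraction divided by the mass of starch initially present in the specimen. The text does not spell out the formula, so the helper below, with hypothetical sample masses, is an assumption-laden sketch rather than the authors' exact calculation.

```python
# Hedged sketch of a percent-continuity calculation (assumed formula).
def percent_continuity(mass_before_g, mass_after_g, tps_weight_fraction):
    """Fraction of the TPS phase accessible to the acid bath, in percent."""
    starch_initial = mass_before_g * tps_weight_fraction
    starch_removed = mass_before_g - mass_after_g
    return 100.0 * starch_removed / starch_initial

# Hypothetical example: a 10 g specimen of a 68:32 PE1/TPS40 blend that
# loses 2.11 g of starch would show ~66% continuity, matching the text.
print(f"{percent_continuity(10.0, 7.89, 0.32):.0f}% continuity")
```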
Enzymatic degradation of LDPE/TPS40 blends Numerous studies have investigated the enzymatic hydrolysis of starch-based materials. These works involve blend systems with synthetic polymers such as LDPE (Danjaji, 2002), ethylene vinyl acetate (EVA) (Simons & Thomas, 1995; Araujo et al., 2004) and polycaprolactone (PCL) (Seretoudi et al., 2002). The kinetics of enzymatic degradation of TPS40 and LDPE/TPS40 blends are shown in Figure 11. Amylase from the enzymatic cocktail triggers the cleavage of the 1,4-acetal links while glucoamylase attacks the 1,6-links of amylopectin (Chaplin & Kenedy, 1986), which results in starch solubilization and, consequently, weight loss. The extent of enzymatic degradation of starch depends on the TPS40 concentration. As expected, raw TPS40 is completely degraded during the first 36 hours. Blends of PE1/TPS40 having 62% and 32% TPS40 and PE2/TPS40 (69:31) result in weight losses of TPS40 of 97%, 65% and 32%, respectively, at 72 hours. Therefore, the weight-loss percentage is related to the total amount of TPS40 in the blends. Percolation theory is concerned with the connectivity of one component (in our case, TPS40) randomly dispersed in another (Peanaski et al., 1991). Peanaski showed that below an apparent percolation threshold of 30% by volume (40 wt%) of granular starch, only small amounts were accessible for removal. Granular starches are compact particles, such as those observed in the PE2/TPS40 blends. The fiber-like particles observed in PE1/TPS40 blends could be responsible for a lower apparent percolation threshold in this system and, consequently, higher enzymatic degradation values (Li et al., 2005). The extent of enzymatic degradation of LDPE/TPS40 blends is very similar to that obtained by acid hydrolysis. On the other hand, the TPS40 enzymatic degradation rate depends on the starch concentration and the accessibility of the starch domains, as in the case of the LDPE/TPS40 blends. TPS40 is almost insoluble in cold water. When TPS40 is exposed to cold water, it swells and glycerol and low-molecular-weight fractions become soluble, but the specimen shape remains intact. The enzymatic hydrolysis of insoluble polymers is known to be affected by the mode of interaction between the enzymes and the polymeric chains and typically involves four steps: (i) enzyme diffusion from the bulk solution to the solid surface, (ii) enzyme adsorption on the substrate, resulting in the formation of an enzyme-substrate complex, (iii) catalysis of the hydrolysis reaction, and (iv) diffusion of the hydrolyzed fraction from the solid substrate to the solution (Azevedo et al., 2003). Blends with high loadings of TPS40 show an enzymatic degradation rate as fast as that of the raw TPS40 during the first 3 hours of exposure. This is probably due to the large amount of TPS40 observed on the surface of the LDPE/TPS40 blends.
Similarly, blends containing about 30% TPS40 have less starch available on the surface and, consequently, the initial enzymatic degradation rate is slower than for the others. As the soluble degradation products of TPS40 diffuse out of the sample, the number of active enzyme units available for starch degradation decreases, resulting in a reduction of the degradation rate. TPS40 is completely degraded in 36 hours, whereas PE1/TPS40 having 62% and 32% TPS40 and PE2/TPS40 compounded with 31% TPS40 reach their maximum degradation in 72 hours. Conversely, the 69:31 PE2/TPS40 stabilizes after a short period of about 20 hours, whereas the 68:32 PE1/TPS40 blend reaches its plateau at 48 hours. This is likely due to the connectivity of starch from the surface in the PE2/TPS40 (69:31) blend; the path of the enzyme is therefore less obstructed.

Microbial biodegradation Weight loss as a function of time is the most useful method employed to monitor biodegradation (Swanson et al., 2003; Bikiaris et al., 1997b). Figure 12 shows the weight loss of LDPE/TPS40 blends exposed to activated sludge as a function of degradation time. As expected, raw PE1 remains unchanged after 45 days. On the contrary, raw TPS40 is completely consumed within 21 days of exposure. For the LDPE/TPS40 blends, the maximum extent of biodegradation is observed at times longer than for the raw TPS40. If TPS40 particles were present only on the surface, and not interconnected with particles inside the LDPE/TPS40 blends, then it could be expected that the starch domains would be completely biodegraded like the raw TPS40. The percent continuity observed in Figure 10 shows that the TPS40 particles are interconnected. At a TPS40 concentration of about 30%, interconnection increases when the morphology of the starch domains changes from spherical (PE2/TPS40 blend) to fiber-like particles (PE1/TPS40 blend). The extent of biodegradation of TPS40 at 45 days of extraction for the PE1/TPS40 blends with 62% and 32% TPS40 and the PE2/TPS40 (69:31) blend was 92%, 39% and 22%, respectively. However, when the maximum biological extraction is compared with the maximum enzymatic degradation, an important difference is noticeable, especially in blends with ca. 30 wt% TPS40.

The kinetics of biodegradation of TPS40 and LDPE/TPS40 blends show two stages (Table 1). In all cases, there is a fast weight loss during the first stage. From a comparison of the three degradation techniques, it can be inferred that some phenomenon is taking place during the bacterial degradation of the LDPE/TPS40 blends. Weight losses for acid hydrolysis and biodegradation were 100 and 92%, 66 and 39%, and 38 and 22%, respectively, for PE1/TPS40 (38:62), PE1/TPS40 (68:32), and PE2/TPS40 (69:31). In the case of PE1/TPS40 (38:62), the difference can be neglected owing to the possibility of bacterial waste accumulating inside polyethylene cavities. At around 30% TPS40, however, the differences are more prominent. This could be related to other phenomena. Micrographs of the surface of PE1/TPS40 and PE2/TPS40 blends (reported elsewhere) show that the pores in the PE1 matrix left after TPS40 extraction are below 1 μm, while those observed in PE2 ranged between 3 and 10 μm (Tena-Salcido et al., 2008). On the other hand, various microorganisms have lengths between 0.4 and 14 μm and widths of 0.2 to 12 μm (Gibbon, 1997). In the case of blends having about 30% TPS40, it is possible that microorganisms or their colonies restrict starch diffusion by obstructing the polyethylene pores, resulting in a significant reduction of the final extent of biodegradation.
Conclusions The analysis of the thermal properties of water-free TPS materials prepared in a TSE showed that the granular starch was completely disrupted and that TPS shows a thermal transition below room temperature corresponding to the glass transition temperature; this Tg is dependent on glycerol content. As was observed for the thermal properties, the rheological properties were also highly dependent on glycerol content. The η of TPS36 at a shear rate of ≈ 130 s⁻¹ decreases by 20% when the glycerol content is increased from 36 to 40%. In the same way, G' and G" also decrease as glycerol content increases. However, a particularly dramatic variation is observed when the glycerol content is varied from 29 to 33%. These latter results suggest a phase inversion from a hard elastic network matrix to a soft amorphous one. The glycerol plasticization threshold thus occurs at a content of approximately 30%. This result concerning a critical plasticization threshold is very important for morphology control strategies.

The PE/TPS blends prepared using the one-step process demonstrated levels of ductility and modulus similar to the virgin polyethylene even at very high loadings of TPS, without the addition of any interfacial modifier. The excellent properties are a combination of both the melt blending process and sophisticated morphology control. Through control of the glycerol content and the thermoplastic starch volume fraction, the above process can produce morphological structures that run the full range of those observed in classical blends of synthetic thermoplastics: spherical, fiber-like and co-continuous morphologies are observed. Control of the glycerol content of the starch allows one to tune the properties of the starch from those of a solid filler through to those of a highly deformable thermoplastic material. A wide range of potential properties can be exploited for this type of material.

This material has the added benefit of containing large quantities of a renewable resource and hence represents a more sustainable alternative to pure synthetic polymers. Since the starch can be fully interconnected through morphology control, it is also completely accessible for biodegradation, as opposed to the case of starch particles dispersed in a synthetic polymer matrix.

In this work, a relationship between the morphology and the biodegradation of LDPE/TPS blends was discussed. The percent continuity of the blends was monitored by means of hydrolytic degradation; the results show that at TPS concentrations below 50% it depends on the LDPE viscosity, and above that value it is independent of it. Enzymatic degradation is a technique that is closer to actual biodegradation than acid hydrolysis, but we have demonstrated that the two correlate excellently. However, correlating these two techniques with bacterial biodegradation is difficult because of the accumulated deposits of bacteria in the empty pores left by the loss of TPS. This difference is more pronounced for the two blends we investigated which contain ca. 30% TPS. In these two blends, the extent of bacterial biodegradation was 39% and 22%, respectively, which is less than 60% of the available TPS, as demonstrated by hydrolytic degradation.

Fig. 2. DSC thermograms of TPS samples conditioned for 24 h at 0% R.H. The glycerol content in TPS is 40, 36 and 29% from top to bottom.
Fig. 3. Comparison of the viscosity of TPS40, TPS36 and PE1 measured on-line in the TSE at 150°C.
Fig. 5. Schematic representation of the one-step extrusion system designed for the melt blending of LDPE with water-free TPS.
Table 1. Biodegradation rate for TPS40 and LDPE/TPS40 blends as a function of exposure time in activated sludge.
The comparison of automated clustering algorithms for resampling representative conformer ensembles with RMSD matrix

Background The accuracy of any 3D-QSAR, pharmacophore or 3D-similarity-based chemometric target-fishing model is highly dependent on a reasonable sample of active conformations. A number of diverse conformational sampling algorithms exist that exhaustively generate enough conformers; model-building methods, however, rely on an explicit number of common conformers. Results In this work, we have attempted to build clustering algorithms that can automatically find a reasonable number of representative conformer ensembles from the asymmetric dissimilarity matrix generated with the OpenEye toolkit. RMSD was the key descriptor (variable): each column of the N × N matrix was treated as one of N variables describing the relationship (network) between the conformer (in a row) and the other N conformers. This approach was used to evaluate the performance of well-known clustering algorithms by comparing them in terms of generating representative conformer ensembles, and to test them over different matrix transformation functions with respect to stability. In the network, the representative conformer group could be resampled by four kinds of algorithms with implicit parameters. The directed dissimilarity matrix becomes the only input to the clustering algorithms. Conclusions The Dunn index, Davies-Bouldin index, eta-squared values and omega-squared values were used to evaluate the clustering algorithms with respect to compactness and explanatory power. The evaluation includes the reduction (abstraction) rate of the data, the correlation between the sizes of the population and the samples, the computational complexity and the memory usage as well. Every algorithm could find representative conformers automatically without any user intervention, and they reduced the data to 14-19% of the original values within 1.13 s per sample at most. The clustering methods are simple and practical, as they are fast and do not ask for any explicit parameters. RCDTC presented the maximum Dunn and omega-squared values of the four algorithms, in addition to a consistent reduction rate between the population size and the sample size. The performance of the clustering algorithms was consistent over the different transformation functions. Moreover, the clustering method can also be applied to molecular dynamics sampling simulation results. Electronic supplementary material The online version of this article (doi:10.1186/s13321-017-0208-0) contains supplementary material, which is available to authorized users.

Background Clustering algorithms are used in a variety of situations, such as understanding virtual screening results [1], partitioning data sets into structurally homogeneous subsets for modeling [2,3], and picking representative chemical structures from individual clusters [4-6]. The use of clustering algorithms to group similar conformations is the most appropriate data-mining technique to distill the structural information from the properties of an MD trajectory [7-10]. Therefore, the selection of representative conformers is valuable and very important in 3D-QSAR models, pharmacophore models, protein-ligand docking [11], and Bayesian classification models from 3D fingerprints. Various conformation-generating algorithms are commonly used in commercially available programs and open-source software.
The performance of such conformation generators has been evaluated by assessing the reproducibility of the X-ray bioactive conformer [12]. When a bioactive conformer exists, it supports the evaluation of whether the automatically selected conformers are correct. However, if X-ray bioactive conformer information does not exist, then local-minimum conformers or conformer ensembles of reasonable size are chosen to build the 3D models with a statistically desirable result [13,14]. Currently, the development of omics, network pharmacology and systems biology has motivated the field of chemoinformatics to predict the targets, off-targets, and polypharmacology of interesting compounds using in silico methods. Among these in silico target inference methods, the chemocentric approach (ligand-based target fishing) relies on the simple assumption that structurally similar molecules have similar biological activity [15]. In general, this approach has used 2D structures for the similarity calculation rather than 3D structures, owing to the computational burden. However, while 2D-similar compounds can lead highly experienced medicinal chemists to suggest similar targets, they are less likely to reveal novel pharmacological effects than 3D-similar compounds [16,17]. Hence, computationally intensive 3D-similarity-based target fishing is required. However, 3D similarity depends on the 3D conformation and the 3D alignment. In contrast to 3D models of a specific target using bioactive conformers from X-ray structures, as in our previous studies [18-20], recent studies used a single low-energy conformer or conformer ensembles generated under a specific algorithm [17,21,22] to acquire the 3D structure of a query molecule for target fishing. A conformer ensemble generated under such a program with a default size (e.g., 1 or 100) determined the similarity scores, which could change the first-ranked target in target fishing. In this study, we have investigated clustering methods to acquire reasonably small conformer ensembles that represent the conformational space of a drug, in order to build 3D models with high coverage. When PDBs of targets are unavailable, this approach is one plausible solution for obtaining robust 3D-QSAR models. In particular, we tried to propose the best clustering method for acquiring reasonable ensembles by comparing four different types of conventional algorithms: (1) a representative-conformer k-means algorithm, (2) a hierarchical clustering with dynamic tree cut algorithm, (3) a linear-kernel principal component analysis, and (4) a non-linear-kernel principal component analysis. All four algorithms work on relative distances, so they can easily be extended to multi-dimensional dissimilarities. We note that the relative distances are directed dissimilarities between conformers. Since different matrix transformation functions can detect different patterns, the algorithms need to be tested over different metrics (transformation methods), including admissible methods, with respect to stability [23]. All algorithms could be implemented in a process consisting of (1) conformer ensemble generation by Omega [24-26], (2) shape-based alignment by the Shape toolkit [27-30], (3) asymmetric RMSD (root mean square deviation) calculation (N × N) by the OEChem toolkit [31], and (4) an RMSD-based selection of representative conformers. The main contributions of this work are twofold.
This work makes two main contributions. The first is to make clustering algorithms easy to adopt for finding representative conformers from RMSD by automating k and the resolution, which must be supplied explicitly in the original clustering methods. The second is to demonstrate, as reference information, the performance of different clustering algorithms in finding representative conformers from initial sets, so that researchers can choose the algorithm most appropriate for their own purposes.

RMSD matrix
Before describing the four automated resampling methods, we illustrate the procedure for generating a conformer ensemble. Shape-based alignments of the data sets in each conformer ensemble were conducted using OEChem [31] and the OEShape toolkit (OpenEye Scientific Software). All conformers were aligned under the conditions of (1) brute-force N(reference) × N(fit) cases and (2) the class "OEBestOverlay". The RMSD values between every pair of aligned conformers were calculated and stored in an N × N matrix, as shown in Fig. 1. In this N × N matrix, each row is a conformer and each column is a variable, giving N variables in total, even though each individual RMSD value describes the relationship between a single pair of conformers. The toolkit used for conformer generation, alignment, and RMSD calculation produces a non-symmetric (but approximately symmetric) matrix, resulting from (1) the selection algorithm for the starting position of the alignment (the inertial frame alignment algorithm), (2) the rigidity of the reference conformer while finding centers of mass, and (3) the single selection from multiple OEBestOverlay results. Some dissimilarity values in the RMSD matrix were modified to make it symmetric. The RMSD values generated by the toolkit are all positive, satisfying d(x, y) ≥ 0. Some diagonal values do not satisfy the property d(x, y) = 0 if x = y, so the non-zero diagonal values were changed to zero. We assume the non-zero diagonal values arise for the same reasons as the non-symmetry of the matrix: the starting position, the rigidity of the reference conformer, and so on. Further, to make the non-symmetric matrix symmetric, we applied matrix transformations before clustering. For clusters built from directed networks, a stability issue arises: it must be confirmed that, for a given hierarchical clustering algorithm, networks that are close to each other yield dendrograms that are also close to each other. Carlsson et al. [23] proved that reciprocal clustering and non-reciprocal clustering satisfy stability. Reciprocal clustering defines the cost of an edge as the maximum of the two directed dissimilarities; the corresponding matrix transformation can be formulated as Ā_X := max(A_X, A_X^T), where the max is applied element-wise. A transformation for non-reciprocal clustering can be defined as Ā_X := min(A_X, A_X^T). Other transformations are the lower triangle, the upper triangle, and the average, which do not satisfy stability. It is nevertheless worth building clusters with the different transformations, since we also needed to test whether one clustering algorithm outperforms the others over similar variations of a dataset. When clustering from the RMSD matrix in this study, the lower-triangle part of the matrix was used: the upper-triangle part was removed and replaced by the lower-triangle part to obtain a symmetric matrix. This manipulation means that the true relation RMSD(A,B) ≠ RMSD(B,A) is approximated by RMSD(A,B) = RMSD(B,A).
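The five transformations are one-liners on the matrix. Below is a minimal sketch, assuming A is the raw, approximately symmetric N × N RMSD matrix with the diagonal zeroed as described above.

```python
# A sketch of the five matrix transformations discussed above. Reciprocal and
# non-reciprocal are the two transforms proved stable by Carlsson et al.;
# lower, upper, and average are included for the stability comparison.
import numpy as np

def symmetrize(A, method="reciprocal"):
    A = np.array(A, dtype=float)
    np.fill_diagonal(A, 0.0)                 # enforce d(x, x) = 0
    if method == "reciprocal":               # element-wise max(A, A^T)
        return np.maximum(A, A.T)
    if method == "nonreciprocal":            # element-wise min(A, A^T)
        return np.minimum(A, A.T)
    if method == "lower":                    # mirror the lower triangle (used here)
        L = np.tril(A, -1)
        return L + L.T
    if method == "upper":                    # mirror the upper triangle
        U = np.triu(A, 1)
        return U + U.T
    if method == "average":                  # arithmetic mean of A and A^T
        return 0.5 * (A + A.T)
    raise ValueError(f"unknown method: {method}")
```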
Representative conformers and clusters
We define a representative conformer ensemble as a subset that best describes the total set. Each conformer in the subset is expected to be dispersed and, if sub-groups exist, to belong to a different sub-group of the total set. The similarity and distance between conformers can only be calculated as relative distances (not absolute distances). The error would be greater if a medoid were used instead of a mean, because of the difficulty of calculating an absolute distance [32]. One way to calculate mean center points from relative distances is to convert the relative distances from each point into absolute distances from some virtual local points (support vectors) [33]. Here, all conformers were used as support vectors because we did not want to lose information. When using a clustering algorithm, we need to define what a good cluster is. Although no single definition of a good cluster applies to every application domain [34], we follow a general one: a cluster is a set of data objects that are similar to each other, while data objects in different clusters are different from one another [35]. We note, however, that a good cluster in our setting should explain the diverse characteristics of a dataset. Among recent reports on clustering for representative conformers, Kim et al. attempted to find representative conformers using divisive clustering methods on a large PubChem3D [36] conformer set [37]. Kothiwale et al. [38] used knowledge such as rotamer libraries. Feher and Schmidt used fuzzy c-means clustering to find representative conformers using quantities and features inherent to the dataset [39].

Automated resampling methods
Heuristic and approximation methods were applied to our clustering problem, because the clustering problem is considered an NP-hard (NP: nondeterministic polynomial time) problem [40]. The four clustering methods are (1) k-means clustering of multidimensionally scaled RMSD values based on a linear kernel, without supplying k explicitly; (2) hierarchical clustering with dynamic tree cut based on a linear kernel, without an explicit threshold; (3) PCA (principal component analysis) with a linear kernel; and (4) PCA with an RBF (radial basis function) kernel. It is a limitation of this research that deterministic initialization methods were not applied, such as initializing the k centroids far apart from each other [41][42][43] or adopting deterministic initialization [44][45][46]. Instead, the initial centroids were set randomly and the best result was chosen after multiple runs. A further limitation is that the k-means algorithm returns different representative conformers on each run, with respect to the deterministic representativeness of the representative conformers. We propose the application of deterministic initial centroids to a k-means algorithm for the detection of representative conformers as future work. In this work, we attempted to increase the adaptability of k-means for representative conformer sets by automating the choice of k. We also included hierarchical clustering and PCA-based clustering for comparison. When the shape of the clusters in a conformer dataset cannot be estimated in advance, hierarchical clustering is a proper choice [47].
The resulting clusters differ depending on the resolution applied to the hierarchical tree. Since the appropriate resolution varies for each conformer dataset, it should be automated. To find linear characteristics of a conformer dataset, PCA is used for clustering.

k-Means clustering
The first approach clusters the conformers and then selects representative conformers within the clusters. k-Means clustering was performed on n variables obtained by multidimensional scaling of the N-dimensional variables in the matrix. k-Means is one of the most popular clustering methods; it tries to minimize the sum of the squared distances within the clusters [48]. However, k-means has a few disadvantages: it cannot guarantee the global optimum, and the user must specify the number of clusters, k. Our algorithm finds k automatically by maximizing the descriptive power of the representative conformers based on MSQb. We expect descriptive representative conformers to minimize the mean of the squared distance of the clique within clusters (MSQw) and to maximize the mean of the squared distance of the clique between clusters (MSQb). The conformers in a cluster should be similar to each other (like a clique), given that the relative distances are based on the similarity among conformers. A clique is a group of conformers that are on average more similar to each other than to any others. The representative conformers based on the clique can be formulated as maximizing MSQb (Eq. 1) subject to the MSQw constraint (Eq. 2). The number of clusters is k; the representative conformers for each cluster are c_i and c_j; the number of conformers in each cluster is c_k; C_ij is an index matrix denoting whether each conformer belongs to a cluster (entries 0 or 1); and C(c_k, 2) is the number of possible combinatorial cases. In k-means clustering, the sum of the squared distance of a clique within a cluster (SSQw) declines as the number of clusters increases. The sum of the squared distance of a clique between clusters (SSQb) tends to increase as the number of clusters increases, although with some variation in this trend (Fig. 2a). MSQb, however, shows a different pattern: it stops increasing after a certain point (Fig. 2b). A simple moving average (SMA) was applied to smooth the MSQb curve; the example below uses a window size (W) of 10. We used the highest point of MSQb as the number of clusters, k (Fig. 2c). The algorithm using k-means to find the representative conformers is called RCKmeans (representative-conformer k-means) and is described in Scheme 1. Initially, we run k-means 100 times with different initial points to find the lowest MSQw; since each k-means run finds only a local optimum, the results must be reinforced with different initial points. The algorithm then repeats this step with increasing k. As k increases, the algorithm calculates the SMA with a window size of 10. When the SMA starts to decrease, RCKmeans finds the highest value of MSQb and returns the corresponding k. Once the k clusters are detected, the conformer at the center of each cluster is selected as a representative conformer; the ClusterCenter function does this.

Fig. 2 The trend of the squared distance of the clique between clusters for entry 10: (a) SSQb as a function of k; (b) MSQb as a function of k; (c) SMA of MSQb as a function of k.
Scheme 1 The k-means algorithm for representative conformers (RCKmeans).
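The procedure of Scheme 1 can be condensed as follows. This is a simplified sketch, not the published implementation: the MSQb proxy (mean squared distance between cluster centers) and the 10-dimensional MDS embedding are assumptions made for brevity, whereas the 100 restarts, the window-10 SMA, and the stop-at-the-peak rule follow the text.

```python
# A condensed sketch of RCKmeans: embed the symmetric RMSD matrix with
# multidimensional scaling, restart k-means 100 times per k, grow k until the
# smoothed MSQb curve turns down, and keep the k with the highest MSQb.
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import KMeans
from scipy.spatial.distance import pdist

def rc_kmeans(rmsd, max_k=30, window=10, n_restarts=100, seed=0):
    n = len(rmsd)
    X = MDS(n_components=min(10, n - 1), dissimilarity="precomputed",
            random_state=seed).fit_transform(rmsd)
    ks = list(range(2, min(max_k, n - 1) + 1))
    models = [KMeans(n_clusters=k, n_init=n_restarts, random_state=seed).fit(X)
              for k in ks]
    msqb = np.array([np.mean(pdist(m.cluster_centers_) ** 2) for m in models])
    sma = np.convolve(msqb, np.ones(window) / window, mode="valid")
    falling = np.flatnonzero(np.diff(sma) < 0)       # where the SMA turns down
    cutoff = (falling[0] + window) if falling.size else len(msqb)
    best = int(np.argmax(msqb[:cutoff]))             # highest MSQb before the turn
    km = models[best]
    # representatives: the conformer closest to each cluster center
    reps = {int(np.argmin(((X - c) ** 2).sum(axis=1))) for c in km.cluster_centers_}
    return sorted(reps)
```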
Hierarchical clustering with dynamic tree cut
Hierarchical clustering is a bottom-up (agglomerative) method, whereas k-means is a divisive method; hierarchical clustering techniques are also popular for clustering. Hierarchical clustering requires a branch-pruning procedure to make the clusters more meaningful with respect to the cluster sizes and the number of clusters. Langfelder et al. [47] tested different pruning methods and suggested the dynamic tree cut method for complex trees in which all of the clusters cannot be found with one cut height (the static method). The dynamic tree cut method merges branches from the bottom to the top, and the merging of two branches is evaluated by shape criteria; as in [47], we used the minimum number of objects, the core scatter of the tree, and the gap between the branches as the shape criteria. We therefore adapted the dynamic tree cut method for clustering the conformers of an entry. To remove the user's explicit intervention in specifying the depth of the tree cut and the separation, our pruning method tested four different depths and chose the depth where MSQb was highest and the cluster sizes smaller, as described in Scheme 2 and Fig. 3. The tree was constructed based on Ward's minimum-variance distance (MSw: mean squared distance within); Ward's method builds trees so as to minimize the variance [51,52]. The DynamicTreeCut algorithm for representative conformers (RCDTC) is implemented within R [47]. Conformers that do not belong to any cluster can remain after the tree cut; these outsiders were assigned to the nearest clusters by PAM (partitioning around medoids) stages. Once the clusters were identified, the conformer at the center of each cluster was selected as a representative conformer by the ClusterCenter function.
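The published RCDTC relies on the dynamicTreeCut package in R. The sketch below is a simplified Python analogue only: it builds a Ward tree from the condensed RMSD matrix, tries four static cut depths, and keeps the depth whose medoid-based MSQb proxy is highest. The depth grid and the proxy are assumptions, the real dynamic tree cut merges branches adaptively, and Ward linkage on a precomputed dissimilarity is itself an approximation.

```python
# Simplified Python analogue of the RCDTC depth selection (not dynamicTreeCut).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def rc_dtc_like(rmsd, depth_fracs=(0.4, 0.55, 0.7, 0.85)):
    Z = linkage(squareform(rmsd, checks=False), method="ward")
    hmax = Z[:, 2].max()
    best_score, reps = -np.inf, [int(np.argmin(rmsd.sum(axis=1)))]  # global medoid fallback
    for f in depth_fracs:                                  # four candidate tree depths
        labels = fcluster(Z, t=f * hmax, criterion="distance")
        clusters = [np.flatnonzero(labels == c) for c in np.unique(labels)]
        if len(clusters) < 2:
            continue
        # the medoid of each cluster stands in for its center conformer
        meds = [idx[np.argmin(rmsd[np.ix_(idx, idx)].sum(axis=1))] for idx in clusters]
        msqb = np.mean([rmsd[a, b] ** 2
                        for i, a in enumerate(meds) for b in meds[i + 1:]])
        if msqb > best_score:
            best_score, reps = msqb, meds
    return sorted(int(m) for m in reps)
```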
Kernel PCA
PCA is used in many applications (e.g., data compression, visualization). PCA differs from k-means in how it finds representative conformers and provides different results: k-means finds representative conformers from the shape of the distances between a center and its closer elements, whereas PCA first determines the orthogonal linear patterns and then finds representative conformers based on those patterns. In factor analysis, PCA identifies the variables with the strongest factor loadings [53]; here, PCA detects linear patterns and then treats the conformers with the strongest factor loadings as representative conformers. Kernel PCA [54] admits linear or nonlinear forms of PCA and is an applicable method for finding various types of relations among conformers. Assuming that each conformer is mapped into a feature space, φ(x_1), ..., φ(x_m), the covariance matrix for PCA is C = (1/m) Σ_j φ(x_j)φ(x_j)^T, from which eigenvalues and eigenvectors are calculated. In this way, the distance between two conformers can be calculated without knowing their absolute coordinates in 3D. To calculate the principal components of a test point x, we computed its projections onto the eigenvectors V_n; a detailed proof of the underlying formula can be found in Ref. [55]. The linear kernel is defined as the inner product k(x, y) = ⟨x, y⟩. The values generated by the kernel function were analyzed using PCA, which can reasonably reduce the number of variables, producing components with minimal distortion of the data. At 80% explanatory power (in other words, an information loss of less than 0.2), the major component contributions were extracted among the N variables. For example, the eigenvector tables consist of components (columns) and conformers (rows), as shown in Table 1, with the second row giving the cumulative explanatory power. From these components, the most representative conformers were chosen from the eigenvector tables: we kept the highest absolute value in each row and then chose the highest of these absolute values in each column, so that the most effective conformer was chosen for each component. After limiting the explanation coverage to 80%, four dimensions (V1–V4) were chosen out of the 41 possible dimensions in the example; the values in italic font in the eigenvector table became the representative conformers. This process is known as RCPCA (PCA for representative conformers).

Nonlinear kernel PCA
Nonlinear patterns may describe the conformer set more suitably. For nonlinear PCA, the RBF kernel can be used [54]. The selection of representative conformers by kernel PCA was again conducted so as to minimize distortion of the raw data (the RMSD matrix). Converting the RMSD values by the RBF kernel, k(x, y) = exp(−‖x − y‖²/(2σ²)), requires σ², as in Eq. 8, and σ² must be calculated separately for each entry. The standard deviation of an entry is calculated from the relative distances; the total number of distances between pairs of conformers is C(m, 2), where m is the size of the entry. We take the mean of the standard deviations among the conformers as the standard deviation of the entry. The parameter B was introduced for generalization purposes: when B is less than 1, kernel PCA tends to find patterns using the conformers closer to the support vectors, and vice versa. B was set to 1 by default. The PCA method with the RBF kernel is named RCPCA_RBF. The Wilson–Hilferty transformation was used to alleviate the skew caused in the higher-dimensional space [56]: the average (E) of the sum of squared distances is raised to the power 1/3, and σ is calculated accordingly.
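A compact sketch of the two kernel-PCA variants is given below. Treating each conformer's row of the symmetric RMSD matrix D as its feature vector, the simplified per-entry sigma estimate (the standard deviation of all pairwise RMSDs, without the Wilson–Hilferty correction), and the centering step are assumptions made for brevity; the 80% cutoff and the largest-absolute-loading selection rule follow the text.

```python
# Sketch of RCPCA (linear kernel) and RCPCA_RBF (RBF kernel) selection.
import numpy as np

def rc_pca(D, kernel="linear", coverage=0.80, B=1.0):
    n = len(D)
    if kernel == "linear":
        K = D @ D.T                                   # inner products of RMSD rows
    else:                                             # RBF kernel on the dissimilarities
        sigma = B * D[np.triu_indices(n, 1)].std()    # simplified per-entry sigma
        K = np.exp(-(D ** 2) / (2.0 * sigma ** 2))
    J = np.eye(n) - np.ones((n, n)) / n               # double-center the kernel
    K = J @ K @ J
    vals, vecs = np.linalg.eigh(K)                    # eigh returns ascending order
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order].clip(min=0.0), vecs[:, order]
    cum = np.cumsum(vals) / vals.sum()
    m = int(np.searchsorted(cum, coverage)) + 1       # components covering 80%
    # one representative per retained component: the conformer with the
    # largest absolute loading on that component
    reps = {int(np.argmax(np.abs(vecs[:, j]))) for j in range(m)}
    return sorted(reps)
```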
Data set
Conformer set
The 3D conformations of the chemicals chosen from a public database were generated by OMEGA after removal of molecules containing hypervalent metal complexes, owing to the charge assignment under the Merck molecular force field (MMFF) [14,37]. The energy window for conformer generation was selected based on previous publications [4]. In selecting the dataset for our study, the ideal criteria were (1) the number of conformers (N) within a fixed energy window and (2) the difficulty of clear groupings in the N × N RMSD matrix. Our method should work well on all compounds; nevertheless, results from examples with different numbers of rotatable bonds could confirm the algorithm's performance. To approach an ideal data set, the structural diversity of our data set was obtained through MACCS (structural key)-based k-means clustering. In addition, four properties were considered in selecting the data set, among them (2) NA (the number of heavy atoms), (3) NRB (the number of rotatable bonds), and (4) NRE [nreffect = abs(NRB + (SR − SA)/5), where SA is the number of apparent single bonds in aromatic rings]. As shown in Table 2, 47 compounds with more than five rotatable bonds were selected using Knime [57].

Evaluation criteria
To evaluate the identified conformers, a statistical analysis of the results of each sampling method was performed on the resulting ensembles. In statistics, a sample that is representative of a population is called a complete sample; a complete sample can be used for inferences or extrapolations to the population. The statistical parameters (mean, standard deviation) of the samples from the four clustering methods were calculated because they describe the distribution of each sample under parametric statistics. In this study, eta-squared and omega-squared values were used to evaluate the explanatory power of the algorithms, and the conventional evaluation indices, the Dunn index and the Davies–Bouldin index, were also applied [58]. A clustering algorithm for representative conformer sets may be considered better than another if it surpasses the other's performance across various validity indices [59]. The Dunn index [60] assigns greater values to sets of clusters that are compact and well separated, with a small variance between the members of a cluster. Since the Dunn index considers both the distance between clusters and the size of clusters, the highest value indicates the optimal number of clusters; here d′(k) stands for the distance within cluster k. The Davies–Bouldin index yields lower values for higher-quality clusters, so the lowest value over k indicates the optimal number of clusters [61]; here σ_x is the average distance between any data point in cluster x and its center c_x, and d(c_i, c_j) is the distance between two centers. The Davies–Bouldin index has evolved through different versions; we report the "complete" intra-cluster distance and the "single" inter-cluster distance. When tested with the "average" intra-cluster distance, the results showed similar patterns in our experiments, so that illustration is omitted. Eta-squared (η²), a nonparametric statistical measure, quantifies how well the representative conformers explain the distribution [62]; a larger eta-squared value indicates a better representation of the distribution. However, eta-squared has limitations in bias and accuracy [63,64]. To overcome these limitations, we also calculated omega-squared (ω²); a greater omega-squared value likewise indicates a better representation of the distribution [62].
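Minimal sketches of the four measures follow. The Dunn index is computed from the RMSD matrix and cluster labels, Davies–Bouldin via scikit-learn on an embedding (e.g., the MDS coordinates used for RCKmeans), and the two effect sizes use the standard one-way ANOVA formulas on a per-conformer response y; the generic form of y is an assumption here, since the paper's exact mapping is summarized above in words.

```python
# Sketches of the Dunn index, Davies-Bouldin index, eta-squared and
# omega-squared.
import numpy as np
from sklearn.metrics import davies_bouldin_score

def dunn_index(D, labels):
    labels = np.asarray(labels)
    ids = np.unique(labels)
    # largest intra-cluster diameter and smallest inter-cluster distance
    intra = max(D[np.ix_(labels == c, labels == c)].max() for c in ids)
    inter = min(D[np.ix_(labels == a, labels == b)].min()
                for i, a in enumerate(ids) for b in ids[i + 1:])
    return inter / intra                     # higher is better

def davies_bouldin(X, labels):
    return davies_bouldin_score(X, labels)   # lower is better

def anova_effect_sizes(y, labels):
    y, labels = np.asarray(y, float), np.asarray(labels)
    grand = y.mean()
    groups = [y[labels == c] for c in np.unique(labels)]
    ss_b = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_t = ((y - grand) ** 2).sum()
    k, n = len(groups), len(y)
    ms_w = (ss_t - ss_b) / (n - k)           # assumes n > k
    eta2 = ss_b / ss_t
    omega2 = (ss_b - (k - 1) * ms_w) / (ss_t + ms_w)
    return eta2, omega2
```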
Implications of the conformational space
Our main contribution is an investigation of clustering algorithms with respect to the reduction (abstraction) rate of the data, the correlation between population and sample, the explanatory power, the computational complexity, and the memory usage. For this purpose, we applied the four clustering methods. Table 3 presents the number of representative conformers from each sampling method. The general pattern of the sampling numbers was RCPCA ≫ RCPCA_RBF > RCDTC > RCKmeans. Some outliers from this pattern can be observed in entries 9, 20, 21, 29 and 41: entries 9 and 20 showed an excessive number of samples in RCPCA; entry 21 yielded only one representative conformer in RCKmeans; and in entries 29 and 41 the number of samples extracted by RCKmeans was the largest. Four entries are displayed in 3D chemical space (Fig. 4); every conformation of the 47 entries in 3D chemical space is available in the supplementary information (Additional file 1: Fig. S1). When the representative conformers (ball and stick) and the other conformers (gray wires) are examined carefully, Fig. 4 helps to judge the coverage of the representative conformers within an entry. RCPCA presented the best coverage of all methods, owing to its excessive sample number. Only the two conformers chosen by the dynamic tree cut could cover the variation of the 3,4-dimethoxyphenyl acetamide group in entry 41 (Fig. 4a). To visualize the conformers in a 2D scatter plot, the dimensions of the RMSD matrix were reduced using PCA: for example, the first and second principal components (PC1, PC2) of the 41 dimensions of entry 29 were used for the visualization in Fig. 5, in which the conformers are shown with different colors and shapes according to their cluster and the representative conformers are marked with red triangles. The MSQb was highest at k = 5, as shown in Fig. 5a, so RCKmeans found five representative conformers; RCDTC, RCPCA, and RCPCA_RBF found four, four, and three representative conformers, respectively. The five representative conformers of RCKmeans were conformers 4, 5, 8, 10, and 16; the four of the dynamic tree cut were 3, 4, 5, and 10, giving three conformers in common with RCKmeans. The four representative conformers of RCPCA were 16, 23, 34, and 38, and the three of RCPCA_RBF were 1, 16, and 36, with conformer 16 as a common result. Conformers 4, 5, 10, and 16 were chosen by more than one method, and such overlapping choices should be more reliable. In entry 29, the conformational variations arise from (1) the N-benzyl group, (2) the N-methoxyethyl group, and (3) the 3-(4-methylthio)phenyl acryloyl group (Fig. 4b); among the three, the variation of the aryl acryloyl group occupies the largest space. Conformers 4, 5, 10, and 16 perfectly covered the space of the N-benzyl group without overlapping each other, along with a significant portion of the (4-methylthio)phenyl acryloyl group. In Figure 6, the x-axis is the conformer number (36 conformers in total) and the y-axis is the RMSD; each line and color represents one representative conformer ensemble. The farther apart two lines are, the more conformational space the pair covers.

Structural characteristics of the representative conformer ensembles
In the structural characteristic evaluation, four representative conformer ensembles were found for each of the 47 entries. The distributions of the conformers and the relations between the representative conformer ensembles and the whole conformer sets were analyzed to understand the characteristics of the algorithms. First, we examined the distribution of the number of representative conformer ensembles for the 47 data sets consisting of 107 conformers; the result showed a large standard deviation (Table 4). The representative conformer ensembles reduced the data to 14–19% of the initial size. RCKmeans chose the smallest number of representative conformers on average (3.58), with the lowest standard deviation (1.93). The number of representative conformer ensembles from RCDTC was similar to that from RCPCA_RBF. These results indicate that, to reduce the standard deviation in the number of representative conformer ensembles, RCDTC would be more appropriate than RCPCA_RBF. We note, however, that a greater number of representative conformer ensembles tends to bring greater explanatory power, and vice versa. Next, we analyzed the relation between the number of representative conformer ensembles and the number of elements in an entry. The entry sizes varied from 12 to 500. RCDTC had the greatest correlation (0.87) between the two numbers.
Table 3 The number of representative conformer ensembles from the four algorithms using the lower-triangle matrix; bold values are outliers (entries 9, 20, 21, 29, 41) in the sampling pattern.

This indicates that RCDTC found a greater number of representative conformer ensembles as the size of an entry increased. RCKmeans had a correlation value of 0.11, indicating a weak relation between the representative conformer ensembles and the elements in an entry. Another characteristic to consider when choosing a clustering method is reproducibility. RCKmeans uses random initial points for clustering, so on repetition there is no guarantee of finding the same representative conformers as before; RCKmeans is therefore not reproducible, whereas the other clustering algorithms are. During this study we noted that, instead of interpreting the strength of correlation as an evaluation indicator, it is better to regard it as a characteristic whose desirability depends on the application. If one wants an equal number of representative conformer ensembles independent of the size of an entry, a clustering method with a low correlation and a low standard deviation is the proper choice. Each of the four algorithms showed characteristics different from one another, providing the opportunity to choose a proper algorithm for the application domain at hand. Different matrix transformation methods build different dissimilarity matrices, and the number of representative conformers differs accordingly. Although there were small variances in the number of conformers, RCPCA consistently generated more representative conformers than RCPCA_RBF, RCDTC and RCKmeans (Table 4). For the correlation between the size of an entry and the number of representative conformers, RCDTC was the best over all transformation methods.

Explanatory power of representative conformers
We compared the clustering performance and the explanatory power of the four algorithms on the conformer dataset. In Table 5, the first and second columns show the transformation methods and clustering algorithms, and the third through sixth columns show the mean and standard deviation of the Dunn index, Davies–Bouldin index, eta-squared and omega-squared values for the 47 entries; the correlation between the mean squared distances of the representative conformers and of the whole conformer set is shown in the seventh column. The Dunn index was greatest with RCDTC over all five transformation methods, and the Davies–Bouldin index was likewise lowest with RCDTC compared with the other clustering methods; RCDTC thus showed the highest performance on these two conventional indices. The eta-squared value, representing explanatory power, was lowest with RCPCA, whereas RCDTC provided the greatest omega-squared value, 0.35, after the overestimates were removed. We conducted a paired t test to assess the statistical significance of the differences in means across the 47 entries, each of which had values from all four algorithms. RCDTC had the greatest omega-squared value compared with the other three algorithms: the omega-squared values from RCDTC were significantly higher than those from RCKmeans (p = 0.0003) and RCPCA_RBF (p = 0.000), with the exception of RCPCA (p = 0.337).
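The paired comparison above is a single SciPy call. In the sketch below the arrays are synthetic placeholders standing in for the 47 per-entry omega-squared values, included purely to show the call; they are not the paper's data.

```python
# Paired t test over per-entry effect sizes (placeholder data).
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
omega_rcdtc = rng.uniform(0.2, 0.5, size=47)            # placeholder values
omega_other = omega_rcdtc - rng.uniform(0.0, 0.1, 47)   # placeholder competitor
t_stat, p_value = ttest_rel(omega_rcdtc, omega_other)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```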
MSt (the mean of the squared distance of the total conformers) indicates how the conformers in an entry are dispersed, and MSb does the same for the representative conformer ensembles. If the correlation between MSt and MSb is high, we can infer that the whole conformer set and the representative conformer ensembles have strongly related dispersions. RCDTC had the greatest correlation, 0.9, followed by RCPCA (0.83), RCKmeans (0.82), and RCPCA_RBF (0.78), and RCDTC's correlation was the greatest consistently over the different transformation methods. The consistency of this performance order indicates that the deviations d(x, y) ≠ d(y, x) in the RMSD matrix were not significant enough to affect the performance order of the algorithms.

Computational complexity
The complexity of clustering algorithms is strongly related to the number n of data objects and the number k of clusters. For all experiments, the running times of the four algorithms, averaged over 30 trials, were compared. The run times shown in Table 6 are the sums of the running times (in seconds) over the 47 entries. RCPCA finished in 1.81 s; RCDTC, which had the greatest explanatory power, took 9.35 s. Cor(data size, run time) gives the relation between the size of an entry and the running time; RCDTC had the strongest correlation (0.99). The minimum running times were close to 0 for RCPCA and RCPCA_RBF, owing to the small size of the smallest entry (12 conformers). The maximum running time was under 3 s for RCDTC, RCPCA, and RCPCA_RBF; the maximum per-entry running time of RCDTC was 1.13 s (standard deviation 0.06), which suggests it could even be used in an online search. The computational complexity of k-means is O(kn) [58]; the complexity of RCKmeans becomes O(tkn), as it repeats t times with increasing k until it finds the peak point (n is the number of conformers in an entry and k is the number of clusters). RCDTC uses a general agglomerative hierarchical clustering algorithm to build the tree; the complexity of agglomerative hierarchical clustering depends on the distance function [65], and the complexity of RCDTC with Ward's method is O(n²). PCA uses a singular value decomposition, which takes O(kn) time [66]; the time complexity of RCPCA_RBF is similar to that of RCPCA. Several works have explored the relative accuracy of various clustering algorithms in extracting the right number of clusters from generated data. Our algorithms keep only the representative conformer ensembles as results, and the memory usage follows that of the underlying clustering algorithms, increasing as k-means < hierarchical clustering < PCA (Table 6) [40,66,67]. To compare actual running times, the four algorithms were implemented in R 3.2.2 [69] and run under Windows 10 with 16 GB RAM and an Intel Core i5-5200 CPU (2.2 GHz). In the future, these algorithms could be implemented as a service system: a user would install Python [68] and R [69], submit a run command with an input structure file (e.g., sdf, mol2, oeb), and receive the structure files of the selected representative conformers.
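As a minimal sketch of the timing protocol (30 trials per algorithm and entry, wall-clock mean), assuming fn is any of the sketches defined earlier:

```python
# Mean wall-clock time of a clustering function over repeated trials.
import time
import numpy as np

def mean_runtime(fn, rmsd, trials=30):
    times = []
    for _ in range(trials):
        start = time.perf_counter()
        fn(rmsd)
        times.append(time.perf_counter() - start)
    return float(np.mean(times))
```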
Conclusions
The work presented here analyzes and compares clustering partitions using the four representative conformer ensembles found for each of 47 entries as examples. This study set out to propose representative conformers (of reasonable size) from conformational space, because the automated conventional clustering methods require no learning process for determining parameters or coefficients (unlike conventional linear regression models). RCKmeans calculates MSQb with increasing values of k and stops after finding the maximum of MSQb. The second clustering method, RCDTC, performs bottom-up hierarchical clustering at four different depths and selects the depth showing the highest MSQb value. RCPCA and RCPCA_RBF extract representative conformers at an explanatory power of 80%. All of the clustering methods are simple because they require no explicit parameters from the user: each algorithm calculates all parameters automatically while aiming to maximize the explanatory power of the representative conformers. RCDTC was the most desirable clustering method, presenting a consistent reduction of the data, a small sample size, and high coverage of the conformational space. In particular, when a drug has a long acyclic substituent (with high flexibility), the coverage of RCDTC (with less than half the number of representative conformers of RCPCA) was superior to that of RCPCA. When a drug has fewer than 80 conformers due to limited flexibility, RCDTC showed the fewest failures in acquiring a representative set one-tenth the size of the original conformer set. Although RCDTC did not present the best mean eta-squared, it provided the best mean omega-squared values after removal of the overestimates. This result is supported by the paired t tests between the omega-squared values of RCDTC and those of the other clustering methods: the tests showed significant differences between RCDTC and RCPCA_RBF and between RCDTC and RCKmeans. The paired t test with RCPCA showed no significance, but the average number of samples in RCPCA was 2.5 times greater than in RCDTC. In addition, this tendency of RCDTC is supported by the 3D pictures of the representative conformers and by the histograms of the RMSD between the representative conformers and the whole conformers of an entry. Although this study used OMEGA to generate the conformers, the performance of the clustering method is also retained when sampling conformers from molecular dynamics simulations. The locally optimal sets of clusters that RCKmeans finds by multiple retrials differ between runs, so deterministic initialization methods need to be considered in future work. The sequential process could add an advantage to the reported conformer sampling methods. The significance of this study lies in its future applicability to finding plausible biological targets of new druggable scaffolds synthesized by chemical intuition, without any prior biological background.

Abbreviations
MD: molecular dynamics; MDS: multidimensional scaling; MMFF: Merck molecular force field; MSb: mean of the squared distance between; MSw: mean of the squared distance within; MSQb: mean of the squared distance of the clique between clusters; MSQw: mean of the squared distance of the clique within clusters; PAM: partitioning around medoids; PCA: principal component analysis; RBF: radial basis function; RCDTC: DynamicTreeCut algorithm for the representative conformers; RCKmeans: representative conformer k-means; RCPCA: PCA for representative conformers; RMSD: root mean square deviation; SMA: simple moving average; SSQb: sum of the squared distance of a clique between clusters; SSQw: sum of the squared distance of a clique within a cluster.

Authors' contributions
Each author contributed significantly to the submitted work. MK conceived and designed the project. Under his leadership, HK proposed the sampling algorithms and validation methods. Following his criteria, CJ prepared the data set and produced the figures and tables for the 3D conformers.
Using CJ's dataset, MK and HK performed the practical experiments in R and Python. MK drafted the initial manuscript from all the data and results, and HK proofread it. MK, HK and DY drafted and revised the manuscript. All authors read and approved the final manuscript.
Introducing Formalism in Economics: von Neumann's growth model reconsidered
Sandye Gloria-Palermo

Mark Blaug (1999, 2003) identified the emergence, in the immediate post-War years, of a new paradigm in economics, the so-called "formalist paradigm", which marked the arrival of the pre-eminence of (mathematical) form over (theoretical) content, and which is mostly characterised by the crucial importance economists give to a specific (non-constructive) kind of demonstration of the existence of equilibrium. This revolution took shape in the 1950s and 1960s around the works of Arrow, Debreu, Patinkin, Solow, Dorfman, Samuelson and Koopmans. The objective of this paper is to interpret John von Neumann's growth model (1937) as a decisive step in this formalist revolution and, by doing so, to contribute to the definition of the formalist paradigm in economics. The 1937 model, it will be argued, is the manifestation of von Neumann's involvement in the formalist programme of the mathematician David Hilbert, and provides economists with the new mathematical tools and methodology that would characterise the emerging paradigm in economics. The 1937 paper gave rise to an impressive variety of contrasting comments as far as the filiation (classical versus neoclassical) of the growth model is concerned, and constitutes one of those enigmas of which historians of economic thought are so fond. However, the identification of an economic formalist paradigm allows one to go beyond the traditional demarcation line between classical and neoclassical economics and challenges the legitimacy of such a criterion. The issue of the nature of the assumptions upon which the 1937 model is based becomes much less relevant than that of the extent of the methodological innovation introduced by von Neumann, namely the introduction of the modern axiomatic approach into economics.
The aim of the following sections is to elucidate this interpretation through a rational reconstruction of the epistemological approach adopted by von Neumann in the 1937 paper. The result of this reconstruction may be summarised as follows: von Neumann gives here an economic interpretation to a specific formal system which he had initially elaborated in his previous work of 1928 on game theory. Each term here has a precise meaning: a "formal system" is composed of (1) a set of symbols, (2) a set of rules for transforming these symbols into formulae, (3) a set of rules for transforming the formulae, and (4) a reduced number of formulae representing the axioms of the system to be observed. By construction, a formal system has no semantic content and may take on different interpretations. A "model" is an interpretation given to a formal system. The clear-cut separation between syntax and semantics, that is, between the formal aspects of the system and its various interpretations, is one of the most salient characteristics of modern axiomatics.

In order to show that the scope of the 1937 model may be correctly grasped by understanding von Neumann's global epistemological approach, we proceed as follows. It is first necessary to offer a brief overview of the growth model and of the controversy over its filiation (section 1); the variety of the comments is by itself an invitation to consider an alternative interpretation. We find such an alternative in von Neumann's involvement in the formalist Hilbertian programme, so that the classical/neoclassical demarcation line may well be replaced by the formalist/non-formalist criterion, as Blaug (2003) and Nicola Giocoli (2003) suggest (section 2). The term "formalism" is ambiguous and requires further elucidation. In particular, the question of the impact of Gödel's discoveries on the formalist programme is of primary interest to us, to the extent that, it will be argued, the 1937 paper is a manifestation of the pragmatic turn that Gödel imposed on formalist mathematicians (section 3). We will then have all the elements to show that von Neumann's main achievement in his 1937 paper was to propose to economists the substitution of the mathematical analogy for the mechanical analogy, as a result of his participation in the post-Gödelian mathematical formalist programme (section 4).

The 1937 Model and its Various Interpretations
In the 1937 article, von Neumann characterises the equilibrium configuration of an economy expanding at a uniform rate. In equilibrium, prices are constant, as are the quantity ratios between different goods. Several simplifying assumptions are introduced by von Neumann to make equilibrium possible: constant returns to scale; pure and perfect competition; unlimited quantities of goods available through the productive process (this applies to land and labour, since no primary factors exist in the model); no savings from workers, who are depicted as draft animals; and no consumption from producers, who save the totality of their income.
Production is considered a temporal process (of one period in length) transforming one set of goods into another. For reasons of simplicity, and to ensure the uniqueness of the solution, von Neumann also had to assume that each good enters the productive process of every good, be it as input or output, if only in an arbitrarily small proportion. The cost of production of a good depends on the value of the goods necessary for its production, plus the interest rate; the prices of goods correspond to their production costs, whatever the preferences of workers or producers may be. Solving this model allows one to identify the following:

- Which goods in the economy are free goods, whose price must be set equal to zero, and what the prices of the other, non-free goods are. Free goods are goods whose produced quantity exceeds the quantity used in the production process in a proportion higher than the growth rate of the economy. Introducing the free goods rule allowed von Neumann to avoid the occurrence of negative prices at equilibrium and, from a mathematical point of view, transformed the representation of the economy by introducing linear inequalities into the model;
- Which production processes are profitable and which are unprofitable and will therefore not be implemented (a profitability rule which, like the free goods rule, leads to the use of linear inequalities in the model). The model allows the determination of the maximum intensity with which each profitable process will be implemented, that is, the produced quantities of each good and thus, given the constant-returns-to-scale assumption, the growth rate of the economy.

The dual symmetry of the model is one of its essential properties and manifests itself as follows (a compact formal statement is given after this list):

- Solving the model may be interpreted, on the one hand, as a problem of technological choice: given the price vector, it is possible to determine the vector of the maximum possible produced quantities and the optimal growth rate, under the constraint of the free goods rule and given the impossibility of consuming more than is produced;
- Solving the model may be interpreted, on the other hand, as a problem of economic expansion, which turns out to be the mirror image of the previous problem. It consists of determining the optimal price vector and the interest rate that prevail, given the intensities of the production processes, the efficiency rule, and the competitive constraint according to which no extra profits are allowed.
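For readers who want the dual structure at a glance, the model can be written in the standard modern notation of later textbook presentations (this statement is an assumption about form, not von Neumann's original notation): A and B are the input and output matrices of the m processes over the n goods, x the row vector of intensities, p the column vector of prices, α the expansion factor and β the interest factor.

```latex
% Standard modern statement of the von Neumann model (textbook notation,
% not the 1937 original).
\begin{align*}
  xB \;\ge\; \alpha\, xA, &\qquad p_j = 0 \text{ if } (xB)_j > \alpha\,(xA)_j
    && \text{(free goods rule)}\\
  Bp \;\le\; \beta\, Ap, &\qquad x_i = 0 \text{ if } (Bp)_i < \beta\,(Ap)_i
    && \text{(profitability rule)}\\
  xBp \;>\; 0, &\qquad x \ge 0,\; p \ge 0.
\end{align*}
% In equilibrium the expansion and interest factors coincide, and both equal
% the saddle-point value of phi(x, p) = xBp / xAp.
```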
Von Neumann showed that an equilibrium solution exists, that it is unique, and that the interest rate of this configuration is equal to the growth rate. The proof of existence breaks with the traditional attempts at demonstrating the existence of a general equilibrium configuration by counting the numbers of equations and unknowns. Such an approach did not constitute a sufficient proof of existence and, furthermore, the model was formalised in terms of inequalities (the free goods rule and the profitability rule) and thus required specific mathematical tools. The demonstration of existence provided by the author consisted in an extension of Brouwer's fixed point theorem and represented the first introduction of topological tools into economic analysis: von Neumann introduced a new function, φ(X, Y), representing the ratio between total incomes and total costs, and demonstrated that the existence of a solution of the growth model amounts to the existence of a saddle point of the function φ. The existence of this saddle point is itself a consequence of von Neumann's demonstration of a fixed point lemma. This demonstration is non-constructive, in the sense that no method is provided for determining the fixed point; with this kind of demonstration, equilibrium thus becomes a purely logical concept. Existence is demonstrated by showing that non-existence would involve a logical contradiction. As emphasised by Giocoli (2003, p. 8) and also Blaug (2003, p. 146), this kind of non-constructive proof (or "negative proof") allows a direct jump from the axioms of the model to its final outcome, and accounts for mainstream economists' neglect of the analysis of the economic process that leads to equilibrium. With the notable exception of Harold W.
- Supporters of a classical interpretation insist on the heterodox nature of the assumptions on which the model is built. Kaldor, for instance, essentially based his position on von Neumann's assumption of the infinite expansion of primary factors, for, according to him, one of the defining features of mainstream economics is precisely the existence of a physical constraint on the available quantity of these resources. In the same way, Luigi Pasinetti (1977) stressed the circular character of the production process, whereas Heinz Kurz and Neri Salvadori (1993) insisted on its temporal dimension and on the proximity of certain of the model's characteristics to past contributions of classical authors, from Petty to Remak and von Bortkiewicz. It is worth remarking that, according to this line of interpretation, and contrary to what is defended below, the nature of the mathematical techniques used in the demonstrations does not constrain the theoretical nature of the model. Accordingly, von Neumann's model would offer proof that optimisation tools do not constitute a selective feature of neoclassical economics;
- Supporters of a neoclassical interpretation put forward more technical arguments to show that the model may be understood as a special case of the more general neoclassical framework. Such generalisations entail, among others, the introduction into the model of the intertemporal preferences of consumers (Edmond Malinvaud 1953), the consideration of labour as a primary factor constrained by an exogenous growth rate (Michio Morishima 1964), and a relaxation of the assumption of circularity according to which each production process uses or produces a given quantity of each good produced in the preceding period (John G. Kemeny, Oskar Morgenstern, and Gerald L. Thompson 1956), etc. This interpretation consists ultimately in presenting the 1937 model as a crucial step in the construction of the neoclassical paradigm, starting from Léon Walras (through the formulation given by Gustav Cassel) and extending to the modern demonstration of existence by Kenneth Arrow and Gérard Debreu.
It is possible to appraise the relevance of the controversy over the filiation of von Neumann's model from different perspectives. If it were simply a question of situating the model in either the classical or the neoclassical camp, then the extent of the confrontation would be rather narrow and the relevance of the debate questionable. From an analytical viewpoint, however, the implications of this confrontation have turned out to be very significant for both sides. In the orthodox camp, von Neumann's growth model is at the roots of linear programming, of the turnpike theorem of Dorfman, Samuelson and Solow, and of modern proofs of the existence of general equilibrium. In the heterodox camp, the growth model is certainly an important source of the classical revival of the 1960s that followed the publication of Sraffa's book: for instance, Goodwin's limit cycle model formalises short-term economic fluctuations along the quasi-stationary long-term equilibrium trend of von Neumann, and Andras Brody (1970) builds on the model as well.

From a strictly technical viewpoint, von Neumann's contribution is easy to identify: it consists in the generalisation of Brouwer's fixed point theorem. The original title of the paper is explicit: "About a System of Economic Equations and a Generalization of Brouwer's Fix Point Theorem". In 1945, Kaldor, then editor of the Review of Economic Studies, asked von Neumann to change his title to "A Model of General Economic Equilibrium". However, the first sentence of the article is evidence of the author's priority: "The subject of this paper is the solution of a typical economic equation system...", adding a little further on that "... the mathematical proof is possible only by means of a generalization of Brouwer's Fix Point Theorem, i.e. by the use of very fundamental topological facts. This generalised Fix Point Theorem ... is also interesting in itself" (von Neumann 1945/46, p. 29). In order to reach this strictly defined objective, he adopts a typically mathematical approach (Mohammed Dore 1989a), which consists of embedding the problem at hand (the extension of Brouwer's theorem) within a set of more general problems (solving a system representing a growing economy), the resolution of which yields a solution to the original problem. This reading is supported by the fact that the Minimax Theorem is an unnecessarily heavy tool for demonstrating the existence of an equilibrium solution of this economy: Nicholas Georgescu-Roegen (1951) provides a demonstration based exclusively on the properties of convexity and the separation of hyperplanes, supporting the idea that the growth model represented for von Neumann only a specific support on which to rest his mathematical results.

From a methodological perspective, the contribution of the 1937 model is much more complex to identify. It is the objective of this rational reconstruction to show that von Neumann's path-breaking contribution consisted of extending the standards of rigour of mathematical formalism to the community of economists; discussion about the nature of the model's theoretical foundations is thereby relegated to the background.

It is worth noting that the majority of the protagonists in the filiation debate make a point of mentioning the limitations of their comments, recognising to a certain extent that economics was not the author's privileged field of investigation.
Tjalling C. Koopmans (1964, p. 356) declared along these lines that, despite the unquestionable theoretical advance provided by the 1937 growth model, the paper is rather poor economics; in the same way, David G. Champernowne (1945/46, p. 10) conceded that the author approached the question of existence as a mathematician, putting the emphasis on aspects of the problem distinct from those upon which an economist would have insisted; notice also the comment of Sukhamoy Chakravarty (1989, p. 70) who, before introducing the Kaldor-Solow debate, asserted that von Neumann himself may ultimately have considered his paper as essentially technical in nature.3 "God, it is said, speaks to each of us in our own language...", Paul Samuelson (1989, p. 100) declared with reference to the 1937 paper, explaining further on that the genius of von Neumann's contribution was that it fitted any capital model. Von Neumann (1945/46, p. 2) himself cleared away the question of the filiation in a lapidary (and, after the fact, ironic) style: "It is obvious to what kind of theoretical models the above assumptions correspond", as if this were not the issue at stake, drawing attention once more to the technical aspects and to the nature of the mathematical approach itself.

3 To support this assertion, Chakravarty indirectly leaned on the book review Morgenstern wrote in 1941 of Value and Capital by Hicks. From a biographical note, it appears that Morgenstern submitted his review to von Neumann for prior reading. One can read there that the main criticism addressed to Hicks concerns precisely the kind of mathematical techniques used to prove the existence of an economic equilibrium. Cf. Robert Leonard (1995) for a detailed analysis of the collaboration between Morgenstern and von Neumann and, more precisely, of the extent of von Neumann's intellectual influence upon Morgenstern.

Von Neumann and the Formalist Programme of Hilbert: Before and After Gödel
From the start, a significant problem seems to threaten our interpretation. It is of a chronological order: the article of 1937 was designed, and then published, after von Neumann had been informed of Gödel's famous impossibility theorem, devastating for the mathematical formalist programme and unanimously recognised as a point of rupture in the evolution of modern mathematics. Von Neumann was also one of the first mathematicians to grasp the range of Gödel's theorem and to take its methodological consequences into consideration. It is necessary at this point to reconsider the definition of the formalist Hilbertian programme in order to understand more precisely what the impact of Gödel's discoveries was, and to what extent it modified mathematical practices. The term "formalism" itself is ambiguous because it bears a double meaning. In its commonly accepted sense, formalism indicates nothing other than the mere use of symbols and unspecified mathematical techniques to express an idea. It is not this sense that the term carries when it is associated with Hilbert: by formalism, one then understands a particular philosophy of mathematics which reduces mathematics to a formal language, and which is opposed to intuitionism and logicism on the question of the foundations of mathematics.
The debate on foundations emerged among mathematicians at the end of the nineteenth century, as attempts to extend the traditional (Euclidean) axiomatic method to branches of mathematics other than geometry multiplied. This method consists in accepting without demonstration a reduced set of postulates, the axioms, and deducing from them a set of theorems by logical inference. For a long time, the empirical obviousness of the axioms seemed to guarantee the truth of the theorems that could be deduced from them. But the growing abstraction of mathematical practice (axioms becoming less and less obvious) and the discovery by Cantor and Russell of logical antinomies (even where axioms were obvious, contradictions could emerge) brought to the foreground the question of the consistency of formal systems. "Consistency" refers to a precise property: a formal system is consistent when it is impossible to deduce from its axioms two contradictory theorems. Three types of answer were advanced to restore mathematicians' confidence in the rigour of mathematical practice.

Logicists tried to found the consistency of mathematics by defining it as a branch of logic. The Principia Mathematica of Whitehead and Russell, published in 1910, falls under this head: the authors proposed a formalisation of arithmetic whose goal was to clarify and make explicit all the logical inferences used in reasoning, and to show that all the concepts of arithmetic can be reduced to concepts of pure logic. This approach, however, did not gain much support from mathematicians, as the solution did nothing but displace the problem: the consistency of arithmetic came to depend on that of logic, and the consistency of logic was itself then under discussion.

Intuitionists, headed by Poincaré and Brouwer, placed the authority of the perception and intuition of the mathematician above that of logical principles and inference rules, whose historical and cultural relativity they underlined. To be consistent, a system of calculation must thus be built from obvious and unimpeachable axioms and from rules of inference subjectively considered reliable by the mathematician. For Luitzen Brouwer (1912, p. 125), the fundamental dissension between intuitionism and formalism lies in the different answers given to the question of where mathematical exactness resides: for the intuitionist, in the human intellect; for the formalist, on paper. Thus, for intuitionists, the consistency of a mathematical theory does not require a demonstration, insofar as it results from the very construction of the theory, following principles and procedures acceptable to the majority of mathematicians.

The response of the formalists to the uncertainty about foundations consisted, on the contrary, in trying to establish rigorous proofs of the consistency of the various branches of mathematics. Demonstrations of consistency initially took the form of relative proofs: thus, Hilbert showed that the consistency of Euclidean geometry depends on that of algebra. Thereafter, he tried, with the assistance of his disciples (the first of whom was von Neumann, but also Ackermann and Bernays), to provide an absolute demonstration of the consistency of arithmetic.

It is at this level that Gödel's famous impossibility theorem intervenes. In 1931, Gödel arrived at a devastating result on the question of the foundations of mathematics: he showed that it is impossible to provide a demonstration of the absolute consistency of arithmetic. Gödel did not prove the inconsistency of arithmetic, but rather the impossibility of showing that it is consistent, leaving the door open to the potential occurrence of new logical antinomies. In his reference book on the question, Morris Kline (1980) provocatively presented the debate on the foundations of mathematics as a major intellectual rout, liquidating the hitherto dominant conception of mathematics as the pinnacle of rigour and scientific exactitude. The title of his work, The Loss of Certainty, refers precisely to this radical reconsideration: mathematics can no longer be unanimously regarded as a set of firmly established eternal truths.
It is at this level that Gödel's famous impossibility theorem intervenes. In 1931, Gödel arrived at a devastating result on the question of the foundations of mathematics: he showed that it was impossible to provide a demonstration of the absolute consistency of arithmetic. Gödel did not prove the inconsistency of arithmetic, but rather the impossibility of showing that it was consistent, leaving the door open to the potential occurrence of new logical antinomies. In his reference book on the question, Morris Kline (1980) presented the debate on the foundations of mathematics, provocatively, as a major intellectual rout, liquidating the hitherto-dominant conception of mathematics as the pinnacle of rigour and scientific exactitude. The title of his work, The Loss of Certainty, refers precisely to this radical reconsideration: mathematics can no longer be unanimously regarded as a set of firmly established eternal truths.

This result certainly cooled the enthusiasm of the formalists, but it did not put an end to the programme of Hilbert, of which the work on foundations constitutes only one part. Formalists gave up the hope of being able to show that mathematics was consistent, but they did not give up their confidence in the power of modern axiomatics as an engine for discovering new scientific knowledge. As Giorgio Israel and Ana Gasca (1995) note, the formalism of Hilbert was indeed founded on the belief in a pre-established harmony between mathematics and physical reality, a harmony which makes it possible to conceive of mathematics as the base of all exact scientific knowledge of nature. The normative aspect of Hilbert's programme can consequently be interpreted as follows: the mathematical analogy, understood as the systematic adoption of the modern axiomatic approach, represents good scientific practice, whatever the scientific field considered.

I believe: anything at all that can be the object of scientific thought becomes dependent on the axiomatic method, and thereby indirectly on mathematics, as soon as it is ripe for the formation of a theory. By pushing ahead to ever deeper layers of axioms . . . we also win ever-deeper insights into the essence of scientific thought itself, and we become ever more conscious of the unity of our knowledge. In the sign of the axiomatic method, mathematics is summoned to a leading role in science. (Speech by Hilbert 1918, in William B. Ewald 1996; and Roy E. Weintraub 1998)

The association between the axiomatic method and scientific rigour thus justifies the second side of Hilbert's formalist programme, which consisted concretely in trying to extend this approach to other scientific disciplines: physics initially, but also economics. Hilbert's formalism thus has a double finality: to solve the problem of the foundations of mathematics (and, at this level, Gödel's results are final); and to extend modern axiomatics to all scientific disciplines. This second aspect of the programme, which can be described as its imperialist or normative side, survived Gödel. Weintraub (2002, p. 90) identified these two aspects of the formalist programme.
He distinguished between the Finitist Programme for the Foundations of Arithmetic (FPFA), whose objective was to found the consistency of arithmetic, and the axiomatic approach (AA), the only aspect of the formalist programme which actually influenced the process of mathematisation of economics: through the contributions of von Neumann for the strictly Hilbertian version of the AA programme, and of Debreu for the Bourbakist version. (Leonard (1995, p. 732) also puts forward these two aspects of the formalist programme of Hilbert: to place all the branches of mathematics on a sure axiomatic base, and to extend axiomatics to other fields. For an account of the differences between the Hilbertian and Bourbakist versions of formalism applied to economics, see Philippe Mongin (2003).)

Until 1931, von Neumann was strongly implicated in both aspects of Hilbert's formalist programme. As far as the work on foundations is concerned, he contributed to the axiomatisation of Cantor's set theory. This theory, known as the "naïve" theory of sets because it was not yet in axiomatic form, led to the logical inconsistencies discovered around 1900 by Cantor himself and by Russell. From his doctoral thesis onwards, von Neumann contributed to deepening the axiomatisation of set theory proposed by Zermelo, Fraenkel and Skolem through the introduction of new axioms and methods making it possible to avoid the occurrence of these contradictions. (Let us note, besides, that while the first two points of Hilbert's 1900 list relate to the question of the foundations of mathematics, item 6 calls for the axiomatisation of physics on the model of mathematics.) The axiomatic method is used in order to allow a rigorous representation of the theory, within which the origin of contradictions can be easily found and possibly eliminated.

Regarding the normative aspect of the formalist programme, from 1926 von Neumann tackled the question of the mathematical axiomatisation of quantum physics, then framed by the two competing presentations of Heisenberg and Schrödinger. This work led to the publication in 1932 of the Mathematical Foundations of Quantum Mechanics, in which the author managed to unify these two visions within a single formal system. Game theory is another field where the project of exporting modern axiomatics to new fields of scientific knowledge appears: von Neumann initially followed the developments of Zermelo on the axiomatisation of chess, a question much debated in mathematical circles of the inter-war period. It was a question of showing that a formal system could receive an interpretation in terms of social phenomena rather than in strictly natural terms. von Neumann generalised Zermelo's application to the context of any type of zero-sum game, and this work led him to the Minimax Theorem in 1928. From there on, Hilbertian formalism could penetrate the field of individual interactions and be used for the analysis of social phenomena.

The Pragmatic Turn

Gödel's discoveries affected von Neumann deeply. They immediately put an end to his work on the foundations of mathematics and signalled the beginning of what many commentators describe as a pragmatic turn in the scientist's method.
Hilbert's programme on foundations conveyed the hope of justifying the axiomatic method, of raising mathematical results to the status of eternal truths. Gödel destroyed this hope, but the majority of mathematicians (von Neumann among them) decided to use the method all the same because it remained, in spite of the loss of certainty, a rigorous way of producing scientific knowledge. The second side of Hilbert's programme was unharmed.

The main hope of a justification of classical mathematics - in the sense of Hilbert or of Brouwer and Weyl - being gone [Gödel's discoveries], most mathematicians decided to use that system anyway. After all, classical mathematics was producing results which were both elegant and useful, and, even though one could never again be absolutely certain of its reliability, it stood on at least as sound a foundation as, for example, the existence of the electron. Hence, if one was willing to accept the sciences, one might as well accept the classical system of mathematics. Such views turned out to be acceptable even to some of the original protagonists of the intuitionistic system. At present the controversy about the "foundations" is certainly not closed, but it seems most unlikely that the classical system should be abandoned by any but a small minority. (von Neumann 1947)

That said, if, after Gödel, it was accepted that it was impossible to found mathematics absolutely, indirect ways nevertheless existed to comfort scientists and to relativise the loss of certainty they had suffered. First of all, should a contradiction emerge, formalisation makes it easier to search for its origins and eventually to eliminate it, thanks to the laying bare of all the concepts and reasoning intervening in the theory. The position of the Bourbakist programme is in this respect evocative: the objective of this radical version of formalism is no longer to found mathematics, but rather to clarify, through the linking of formal systems with one another, the architecture and unity of mathematics. The mathematician must face contradictions, if they emerge, on a case-by-case basis.
Absence of contradiction, in mathematics as a whole or in any given branch of it, thus appears as an empirical fact, rather than as a metaphysical principle. The more a given branch has been developed, the less likely it becomes that contradictions may be met with in its further development. […] What will be the working mathematician's attitude when confronted with such dilemmas? It need not, I believe, be other than strictly empirical. We cannot hope to prove that every definition, every symbol, every abbreviation that we introduce is free from potential ambiguities, that it does not bring about the possibility of a contradiction that might not otherwise have been present. Let the rules be so formulated, the definitions so laid out, that every contradiction may most easily be traced back to its cause, and the latter either removed or so surrounded by warning signs as to prevent serious trouble. This, to the mathematician, ought to be sufficient. (Nicolas Bourbaki, 1949, p. 3)

There is a second means of reassuring the scientist about the consistency of his formal system. It consists in bringing considerations of a semantic nature back to the foreground. This assertion requires further elaboration. A prominent characteristic of Hilbertian formalism is without any doubt the strict separation between syntax and semantics. To formalise a theory in the sense of Hilbert means emptying it of all its semantic content and giving an abstract representation of it (the formal system) in the form of symbols, formulae (among them axioms) and sequences of formulae having no obvious remaining bond with the original theory. The formal system thus formed is like an abstract box, deprived of any significance, on which the mathematician works in order to derive theorems. At this stage, the question of the realism of the axioms is completely irrelevant. But it would be erroneous to say that in axiomatics reality does not matter at all, for in the next stage of the axiomatisation process the objective is precisely to assign models to each formal system, that is, to find an interpretation of the formal system in terms of real phenomena. A model consists of an interpretation of the formal system, each symbol receiving a meaning, and the same abstract box is able to receive various interpretations. The initial theory which inspired the formal system constitutes one model among others. At this level, formalism as a philosophy of mathematics connects with Plato's realism, which supports the thesis that mathematics does not create anything, does not invent objects, but rather discovers objects pre-existing in the intellect. The power of axiomatisation is due precisely to the fact that the "discovery" of an abstract box makes it possible to explain several real phenomena, and it rests on the belief in a pre-established adequacy between the structure of mathematics and reality.
From the axiomatic point of view, mathematics appears thus as a storehouse of abstract forms - the mathematical structures; and so it happens without our knowing how that certain aspects of empirical reality fit themselves into these forms, as if through a kind of preadaptation. (Bourbaki 1950, p. 231)

This vision of the world is opposed to constructivism, of which intuitionism is a specific form, and which considers that a mathematical object exists only through its elaboration. To formalists, on the contrary, the very existence of any mathematical concept refers to a precise property: that it is free from any contradiction.

Before paradoxes and logical antinomies were discovered and encouraged mathematicians to work out absolute demonstrations of consistency, it was sufficient, in order to found a formal system, to find a model in which its axioms were valid. For a long time, the obviousness of the Euclidean axioms was sufficient to ensure the consistency of Euclidean geometry: if the axioms were valid, then so were the theorems one could derive from them. The so-called method of models, consisting in finding an interpretation of an abstract system in which its postulates are valid, was largely used to give relative demonstrations of consistency to formal systems less intuitive than the Euclidean one. Gödel's discoveries led mathematicians to reconsider the value of this method. One cannot found the consistency of a formal system absolutely, but the discovery of a new and adequate model for this system reinforces its heuristic validity and comforts the mathematician regarding its consistency. The 1937 contribution of von Neumann may be interpreted in that way: a new semantic correspondence is associated with a formal system elaborated beforehand. In particular, von Neumann gave an economic interpretation to a formal structure which he had previously discovered in game theory (1928). This idea was expressed explicitly by the author himself when he declared that "the question whether our problem has a solution is oddly connected with that of a problem occurring in the Theory of Games dealt with elsewhere" (von Neumann, 1945/46, p. 33, n. 1).

The formal similarity between the 1928 and 1937 models is, however, not immediate. In 1928, von Neumann demonstrated the existence of a solution for a two-person zero-sum game without ever defining a system of linear inequalities and equations.
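For the reader's convenience, the 1928 result can be stated in modern notation (the notation below is ours, not von Neumann's):

```latex
% Minimax theorem (von Neumann 1928), stated in modern notation.
% A is the m x n payoff matrix; Delta_k is the simplex of mixed strategies.
\[
  \max_{x \in \Delta_m} \, \min_{y \in \Delta_n} \, x^{\top} A y
  \;=\;
  \min_{y \in \Delta_n} \, \max_{x \in \Delta_m} \, x^{\top} A y,
  \qquad
  \Delta_k = \Bigl\{ z \in \mathbb{R}^k : z_i \ge 0,\ \textstyle\sum_i z_i = 1 \Bigr\}.
\]
% The common value is the value of the game; an optimal pair (x*, y*) is a
% saddle point of the bilinear payoff x^T A y.
```

No system of linear inequalities appears anywhere in this statement, which is precisely the point made below.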
As Tinne H. Kjeldsen (2001) states, the Minimax Theorem was developed in 1928 with no explicit connection with the theory of linear inequalities, and there is no evidence that von Neumann was aware of this connection at that time. However, the fact that the connection does exist is sufficient to corroborate this rational reconstruction. Kuhn and Tucker (1958) explicitly link the solutions of the minimax problem with a system of linear inequalities and equations corresponding to the problem raised in 1937. They state explicitly that if the intensity and price vectors are both normalised, they form probability vectors which may be regarded as mixed strategies for the players of a zero-sum two-person game. Dore (1989b) also studied the connection between the system of inequalities and equations of the 1937 model and the two-person zero-sum game of 1928: the strategies of player I are represented by the set of vectors of production intensities, those of player II by the set of price vectors. Payoff functions depend on the strategies chosen by each player: player I chooses the vector of production intensities which maximises his payoff function, given the choice of player II, who is supposed for his part to choose the least satisfactory solution for the first player. A symmetrical reasoning applies to the choices of player II. The Minimax Theorem ensures the existence of a saddle point, which corresponds to the situation where the rate of growth is equal to the interest rate.

The 1937 article illustrates the separation and hierarchy between syntax and semantics typical of the axiomatic approach. The same formal system, the same box, indeed receives different interpretations, i.e. different models: one in game theory, one in economics, and even one in thermodynamics. Thanks to Gödel, we know that the consistency of this formal system is impossible to prove. However, the fact that this system fits different interpretations is a reassuring symptom of its consistency. The economic interpretation is, in this connection, a manifestation of the pragmatic turn of the mathematical formalist programme, which consisted in considering not only the syntactic aspect but also the semantic step of the axiomatisation process, through the identification of adequate new models. Further, with the 1937 paper, a new domain of application, economics, opened itself up to formalist mathematics and, more generally, to mathematical analogy.

From the Mechanical to the Mathematical Analogy

The growth model was elaborated in 1931 in the United States and first presented to a mathematical seminar at Princeton, but it definitely began arousing interest and enthusiasm with its discussion in the Karl Menger seminar in Vienna in 1934. One reason for the particular interest of Viennese scholars in the growth model lies in the close fit between von Neumann's epistemological approach in this paper and the specific philosophical context of the Vienna Circle, marked by analytical philosophy, logical positivism, and a project of unification of the sciences. One finds a definite parallelism between the concerns of formalist mathematicians on one side and of logical positivist philosophers on the other.
The major concern of the mathematicians was to eliminate the possibility of contradictory theorems; the major concern of the philosophers was to eliminate from their discourse all metaphysical propositions, i.e. any pseudo-scientific assertion whose intrusion into reasoning may lead to logical inconsistencies. In both cases, discussions were directed towards the search for certainty in scientific reasoning.

The principal theses of logical positivism are presented by Otto Neurath, Rudolf Carnap, and Hans Hahn in a 1929 article, "The Scientific Conception of the World: The Vienna Circle", better known as the "Manifesto of 29". Logical positivism falls in the continuation of the positivist programme of Auguste Comte, Hume and Mach, whose objective was to base knowledge directly on experience. To this end, members of the Vienna Circle used the latest developments of modern logic, from Frege, Peano and Russell. More precisely, logical positivism was born from the introduction of logical analysis into the positivist framework. Logical analysis consists in reducing scientific concepts and propositions to experience, to direct observation, from which all the rest logically follows. In the same way that axiomatisation makes it possible to uncover the source of possible contradictions easily, logical analysis tracks pseudo-propositions and contributes to eliminating them from philosophical discourse. The project of Carnap is even more ambitious: the philosopher worked on the elaboration of a formal logico-mathematical language meant to guard scientists against the surreptitious intervention of pseudo-propositions in their reasoning. Philosophy thus becomes analytical: its aim is the revelation of the significance of propositions and the elimination of meaningless propositions. This "turning point of philosophy" (Schlick 1959, p. 56) is an indicator of the ambition of logical positivism to aim at unitary science. With analytical philosophy, it will no longer be necessary to speak about philosophical problems, because all problems will be discussed philosophically, i.e. clearly and meaningfully. The call for the unity of science, explicit in the Manifesto, claims to be epistemological. It is a means for scientists of working out a way of doing science which, whatever the field of production of knowledge, ensures rigorous reasoning, free from metaphysics. This is logical analysis for Russell, the universal formal language for Carnap, and modern axiomatics for Hilbert.

The unifying ambition of formalism asserted itself gradually. Initially, it was a question of unifying, through the development of modern axiomatics, all the branches of mathematics. Formalists, or rather their predecessors, the analyticals, were then opposed to the purist vision of mathematics dominant by the end of the nineteenth century. According to the purists, mathematics was to remain split into various branches, each defined by its own method of investigation. For example, purists refused geometric demonstrations based on Cartesian algebra. The analyticals, on the contrary (with Hilbert in the forefront), believed in the interaction of the various branches and shared an ideal of unification of mathematics, conceived as a unified system of knowledge. In a second step, this strong optimism exceeded the borders of the discipline: building on the success of the axiomatisation of quantum physics, formalists then invested the field of social phenomena.
Economics is implied in the philosophical programme of the Vienna Circle through the active interaction of the members of Hans Mayer's Economic Seminar with those of the Mathematical Colloquium run by Menger, son of the founder of the Austrian economic tradition. Collaboration between mathematicians and economists crystallised in the resolution of the problem of imputation as defined by Menger in 1871, which consists in deducing the prices of the factors of production from the value of the consumption goods they contribute to produce. The solution suggested in 1889 by Wieser encounters a problem of overdetermination. Schlesinger, asked by Mayer to tackle the question, radically modified the nature of the problem: he endogenised the prices of consumption goods, which Menger and Wieser took as data, and posed the equations of a system of generalised interdependence. The question of imputation thus becomes that of demonstrating the existence of a general equilibrium configuration. Schlesinger, however, did not start from the Walrassian model, but from the very similar one of Gustav Cassel (1923), into which he integrated the free goods rule in order to avoid obtaining negative prices in equilibrium. The adoption of this rule has important consequences for the formal structure of the model: inequalities are introduced into it; inequalities are relations of exclusion which constrain the prices of goods and which have the status of axioms in the formulations offered by the mathematicians (Abraham Wald and later von Neumann) called to the rescue to solve the new system thus defined. The introduction of inequations is typical of formalist mathematics. According to Israel and Gasca (1995, p. 65), the motto "less differential equations, more inequalities" perfectly describes the tendency of the new mathematics.

From his collaboration with Schlesinger, Wald produced three articles, presented at the Mathematical Colloquium between 1934 and 1936. Over the course of these articles, the mathematician refined the mathematical conditions necessary for the demonstration of existence (the syntactic aspect) and concentrated more particularly on the question of their economic significance (the semantic aspect). von Neumann became aware of Wald's demonstrations thanks to Menger in 1934 and noted the proximity with a model of general equilibrium which he had presented a few times earlier at Princeton. Menger then offered von Neumann the opportunity to publish his article in the Ergebnisse (1937). According to Arrow (1989), it is extremely probable that the models of Schlesinger and Wald on the one hand and of von Neumann on the other were independently inspired by Cassel. Whereas Schlesinger introduced inequalities into the static model of Cassel, and Wald showed the existence of an equilibrium solution, von Neumann's model axiomatises the verbal developments Cassel made of an economy of generalised interdependence in a situation of uniform growth. Kaldor (1989, p. viii) reported, from his conversations with von Neumann, that the mathematician's dissatisfaction with the Walrassian model had a double origin: the possibility of negative prices at equilibrium, and the disregard of dynamic forces. The 1937 model answered these two criticisms appropriately by proposing a model of expansion in which the free goods rule, with the status of an axiom of the formal system, eliminated the possibility of negative prices in equilibrium.

The 1937 model, however, also addressed a more general criticism to economists:

I have the impression that [economics] is not yet ripe (I mean is not yet fully enough understood, which of its features are the essential ones) to be reduced to a small number of fundamental postulates - like geometry, or mechanics… (von Neumann to Abraham Flexner, May 25, 1934, Faculty files, John von Neumann, folder 1933-35, VNIAS)

(The work of George and Edouard Guillaume, L'Économique Rationnelle, is at the origin of these criticisms on the state of economics: the authors gave a mathematical representation of a production economy explicitly formalised on the basis of a strict analogy with physics. This episode is reported in detail by Leonard (1995, p. 736).)
According to Leonard (1995, p. 738), the fundamental criticism of von Neumann here related to the kind of mathematical instruments used in economic formalisation since Walras. However, if one places the 1937 contribution within the second part of the formalist programme of Hilbert (the imperialist aspect of the programme, with its project of extending modern axiomatics to various fields), then, more than the type of tools used, it is the very concept of scientific rigour which seems to be at the heart of von Neumann's criticisms of the state of the discipline. Walras used the mechanical analogy with the stated aim of giving economics the scientific rigour it had lacked until then. Walrassian economics, like the other sciences based on the mechanical analogy, adopts confrontation with reality as its criterion of scientific rigour. Accordingly, a model is an economy in miniature, sufficiently simplified to allow mathematical treatment. The adoption of the mathematical analogy radically modifies this perception. Scientific rigour is defined according to internal, mainly aesthetic, criteria (von Neumann 1947); rigour becomes synonymous with purity, abstraction, and consistency of the formal system. Certainly, scientific rigour is a relative and changing concept; with Gödel, von Neumann had paid the price to learn it. ("I have told the story of this controversy [on the foundations of mathematics] in such detail, because I think that it constitutes the best caution against taking the immovable rigor of mathematics too much for granted. This happened in our own lifetime, and I know myself how humiliatingly easily my own views regarding the absolute mathematical truth changed during this episode, and how they changed three times in succession!" (von Neumann 1947, p. 195).) In the final analysis, Gödel's discoveries resound like a bulwark against a possible drift towards abstraction, of which Hilbertian formalism could be the thin end of the wedge.

As a mathematical discipline travels far from its empirical source, or still more, if it is a second and third generation only indirectly inspired by ideas coming from "reality" it is beset with very grave dangers. It becomes more and more purely aestheticizing, more and more purely l'art pour l'art. This need not be bad, if the field is surrounded by correlated subjects, which still have closer empirical connections, or if the discipline is under the influence of men with an exceptionally well-developed taste. But there is a grave danger that the subject will develop along the line of least resistance, that the stream, so far from its source, will separate into a multitude of insignificant branches, and that the discipline will become a disorganized mass of details and complexities. In other words, at a great distance from its empirical source, or after much "abstract" inbreeding, a mathematical subject is in danger of degeneration. At the inception the style is usually classical; when it shows signs of becoming baroque, then
the danger signal is up. It would be easy to give examples, to trace specific evolutions into the baroque and the very high baroque, but this, again, would be too technical. (von Neumann 1947, p. 195)

Of course, these criticisms are not aimed specifically at economics but at the most abstract practices of mathematicians, as, for instance, in the Bourbakist programme, the radical extension of formalism. But by substituting the term "economical" for "mathematical" in the preceding quotation, the criticism remains valid to some extent, testifying to the success of the imperialist incursion of formalism into economics.

The thesis of this paper is that the 1937 article is a contribution to the mathematical formalist programme. We defined this programme around two finalities: the search for certainty, and the project of unifying the sciences. After Gödel's discoveries, the first part of the programme faded deeply, whereas the second aspect remained intact. At the end of our reflection, it seems to us that the 1937 article fully fits the second aspect of this programme and reflects, to a certain extent, its new pragmatic dimension. We have indeed tried to show that, a posteriori, von Neumann's 1937 contribution fulfils a twofold motivation:

- to find a new model of a formal system, insofar as, if it is not possible to prove the consistency of a system, it is nevertheless possible to consolidate the certainty of scientists through the exhibition of a new adequate interpretation;

- to replace the use in economics of the mechanical analogy by the mathematical analogy.

Admittedly, much has already been written on the "most important paper done in mathematical economics" (Weintraub 1985, p. 27; and 2002, p. 95). It has been dressed in the most varied interpretations. Ours is a contribution to the more restricted set of comments which concentrate less on the possible filiations of the model than on the range of the author's original methodological approach, positioning the 1937 contribution within the formalist revolution in economics. On this subject, von Neumann was, in those days, an enlightened defender of modern axiomatics, conscious of the possible drifts of formalist practices towards "the baroque", towards "l'art pour l'art", and it seems that his warnings very much concern economists today.

Aside from Kuhn and Albert W. Tucker (1958), who provided an analysis of the mathematics of von Neumann's proof, economists in the 1950s and 1960s mainly concentrated their comments on the economic filiations of this model. In 1959, the Kaldor-Solow debate that unfolded during the Corfù Conference on Capital was the starting point of a long controversy over the interpretation of the 1937 model. Kaldor insisted upon the classical underpinnings of von Neumann's growth model, whereas Solow emphasised the possibility of integrating this model into the neoclassical framework. The arguments advanced by the two economists set the tone of future debate.
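For reference, the formal core over which these interpretations compete can be stated compactly. The following is a standard modern rendering in the style of Kemeny, Morgenstern and Thompson (the notation is ours, not the 1937 original):

```latex
% von Neumann growth model, standard modern statement (notation is ours).
% A = input matrix, B = output matrix; process i consumes a_{ij} and
% produces b_{ij} units of good j. x >= 0 is the row vector of process
% intensities, p >= 0 the column vector of prices; alpha is the expansion
% factor and beta the interest factor.
\[
  x B \ge \alpha \, x A \qquad \text{(outputs cover the expanded inputs)},
\]
\[
  B p \le \beta \, A p \qquad \text{(no process earns more than interest on its costs)},
\]
\[
  x B p > 0, \qquad \alpha = \beta \quad \text{at equilibrium}.
\]
% Free goods rule: overproduced goods receive a zero price; unprofitable
% processes are run at zero intensity. Normalising x and p to sum to one
% turns them into mixed strategies of a two-person zero-sum game whose
% saddle point yields alpha = beta (cf. Kuhn and Tucker 1958).
```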
starts from a simplified version (with no joint production), in matrix form, of von Neumann's model in order to propose a mathematical rehabilitation of the labour theory of value. The variety of interpretations ultimately shows that von Neumann's growth model hardly fits into the traditional classical/neoclassical classification system. It is characteristic of path-breaking contributions that they upset the prevailing schemes. Interpreting the growth model in the light of the forthcoming formalist revolution of the 1950s means focusing on the nature of the mathematical innovations introduced by von Neumann in economics. These innovations may be appraised from different perspectives. Nicholas
FastSpar: rapid and scalable correlation estimation for compositional data

Abstract

Summary: A common goal of microbiome studies is the elucidation of community composition and member interactions using counts of taxonomic units extracted from sequence data. Inference of interaction networks from sparse and compositional data requires specialized statistical approaches. A popular solution is SparCC; however, its performance limits the calculation of interaction networks for very high-dimensional datasets. Here we introduce FastSpar, an efficient and parallelizable implementation of the SparCC algorithm which rapidly infers correlation networks and calculates P-values using an unbiased estimator. We further demonstrate that FastSpar reduces network inference wall time by 2-3 orders of magnitude compared to SparCC.

Availability and implementation: FastSpar source code, precompiled binaries and platform packages are freely available on GitHub: github.com/scwatts/FastSpar

Supplementary information: Supplementary data are available at Bioinformatics online.

Introduction

Microbiome analysis, which aims to assay the bacterial communities present in a given sample set, is important in many fields spanning human health, agriculture and environmental ecology. The current standard for investigating bacterial community composition is to deep sequence the total genomic DNA or the bacterial 16S rRNA gene and analyze the genetic diversity and abundance within each sample. Unique sequences or sequence clusters are taken to represent operational taxonomic units (OTUs) present in the original sample, and the frequencies of these across samples are summarized in the form of an OTU table (Ju and Zhang, 2015). In many studies, these data are then exploited to construct correlation networks of OTUs spanning sample sets, which can be used to infer or approximate interactions between taxa (He et al., 2017; Nakatsu et al., 2015).

The calculation of OTU correlation values is complicated by the sparse and compositional nature of the data. OTU counts are typically normalized by dividing each observation within a sample by the total count for that sample, giving a measure of relative abundance. However, this transformation introduces dependencies between normalized sample observations, such that calculating simple correlations from the resulting values is not statistically valid (Aitchison, 1982). To perform robust and unbiased statistical analysis of sparse compositional data, the data are generally first transformed from the simplex to Euclidean real space. Returning compositional OTU data to Euclidean real space can be achieved by taking the log ratios of OTU fractions; using log ratios restores independence for each OTU and allows components to take on positive or negative values. Building upon the use of log ratios, Friedman and Alm (2012) articulated an important and robust algorithm, SparCC, to estimate the linear Pearson correlation between OTUs from the variances of log ratios. Given that correlations cannot be calculated directly from log ratio variances, SparCC estimates the correlation statistic by using log ratio variances to approximate the true OTU variances, on the assumption that the number of strong correlates is small (Friedman and Alm, 2012). A Python 2 implementation of the SparCC algorithm has been released by the authors with several ancillary scripts for P-value estimation.
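As a reader's aid, the following is a minimal, illustrative sketch of the basis-variance trick just described (our own code for exposition, not the SparCC or FastSpar source). The full algorithm additionally averages over many Dirichlet-resampled fraction estimates and iteratively excludes strongly correlated pairs; both refinements are omitted here.

```python
# Minimal sketch of the core SparCC estimator (illustrative only).
# `fracs` is assumed to be an (n_samples, n_otus) array of estimated OTU
# fractions with pseudocounts already applied; requires n_otus > 2.
import numpy as np

def sparcc_correlations(fracs: np.ndarray) -> np.ndarray:
    n_samples, m = fracs.shape
    log_f = np.log(fracs)
    # Variation matrix: t[i, j] = var over samples of log(x_i / x_j).
    t = np.var(log_f[:, :, None] - log_f[:, None, :], axis=0, ddof=1)
    # Sparsity assumption: most pairs are uncorrelated, so
    #   sum_j t[i, j] ~= (m - 1) * w_i^2 + sum_{j != i} w_j^2,
    # a linear system for the basis variances w_i^2.
    coef = (m - 2) * np.eye(m) + np.ones((m, m))
    w2 = np.linalg.solve(coef, t.sum(axis=1))
    w2 = np.clip(w2, 1e-12, None)   # guard against negative solutions
    w = np.sqrt(w2)
    # Invert t_ij = w_i^2 + w_j^2 - 2 w_i w_j rho_ij for the correlations.
    rho = (w2[:, None] + w2[None, :] - t) / (2.0 * np.outer(w, w))
    np.fill_diagonal(rho, 1.0)
    return np.clip(rho, -1.0, 1.0)
```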
However, the performance of this implementation precludes analysis of large datasets such as those generated from longitudinal studies (Teo et al., 2017). Further, the P-value estimator used by SparCC has been demonstrated to be biased and to overestimate significance (Phipson and Smyth, 2010). Here we present FastSpar, a fast and parallelizable implementation of the SparCC algorithm with an unbiased P-value estimator. We demonstrate that FastSpar produces OTU correlations equivalent to SparCC's while greatly reducing run time and memory consumption on large datasets. We also show that FastSpar has superior performance to the unpublished re-implementations of SparCC available in the mothur and SpiecEasi packages (Supplementary Fig. S1).

Implementation

FastSpar is written in C++11, utilizing OpenBLAS and LAPACK via the Armadillo library (Sanderson and Curtin, 2016; Dongarra et al., 1992; Xianyi et al., 2012). The GNU Scientific Library (GSL) provides functionality for OTU fraction estimation, and threading support is delivered by OpenMP (Dagum and Menon, 1998). In place of the P-value estimator used in SparCC, we utilized an estimator which corrects P-value understatement by considering the possibility of recalling repetitious permutations or the original data during testing (Phipson and Smyth, 2010).

Algorithm fidelity

To demonstrate that FastSpar produces correlations equivalent to SparCC's, correlation networks were constructed by both programs using random subsets of an OTU table generated from the American Gut Project 16S rRNA sequence data (www.americangut.org), comprising a total of 6068 OTUs and 7523 samples. For each OTU pair, the mean correlation values calculated across 20 replicate runs were near identical for FastSpar and SparCC (Supplementary Figs S2 and S3). The observed OTU correlations calculated by SparCC and FastSpar are not reproduced exactly, as there is a degree of randomness in the underlying algorithm. Specifically, OTU fractions are estimated by drawing from a Dirichlet probability distribution (parameterized using sample OTU counts with pseudocounts applied) and are therefore non-deterministic. Hence replicate runs of either program on the same input table produce similar but non-identical results (Supplementary Fig. S2A and B). To allow direct comparison of the algorithms, OTU fractions were pre-computed and provided as an additional input to both SparCC and FastSpar [note that the behaviour of the pseudo-random number generators (PRNGs) used by FastSpar (GSL) and SparCC (numpy) differs, so seeding the PRNGs is insufficient to enable direct comparison]. When using the same pre-computed OTU fractions as input, FastSpar and SparCC returned identical results (Supplementary Fig. S2D). These comparisons can be reproduced by running the code at github.com/scwatts/fastspar_comparison.

Performance profiling

Performance was compared by running FastSpar and SparCC on random subsets of the American Gut Project OTU table (Fig. 1). Ten random subsets of each combination of sample sizes (n = 250, 500, ..., 2500) and OTU numbers (n = 250, 500, ..., 2500) were generated and subjected to analysis using FastSpar (with and without threading) and SparCC. Wall time and memory usage were recorded using GNU time. The analysis was completed in an Ubuntu 17.04 (Zesty Zapus) chroot environment with the required software packages (Supplementary Table S1). Computation was performed with an Intel(R) Xeon(R) E5-2630 CPU @ 2.30 GHz and 62 GB RAM.
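The corrected permutation estimator referenced above has a simple closed form. The sketch below (our own illustrative code) shows the basic b-out-of-m form of Phipson and Smyth (2010) for permutations drawn with replacement; FastSpar's exact handling of tails and ties may differ in detail.

```python
# Permutation P-value following Phipson and Smyth (2010): counting the
# observed statistic itself as one of the outcomes avoids P-values of zero
# and the resulting overstatement of significance.
import numpy as np

def permutation_pvalue(observed: float, null_stats: np.ndarray) -> float:
    """Two-sided P-value from len(null_stats) random permutations."""
    b = np.sum(np.abs(null_stats) >= abs(observed))  # as-or-more extreme
    m = null_stats.size
    return (b + 1) / (m + 1)
```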
The performance profiling can be reproduced by running the code at github.com/scwatts/fastspar_timed. Using 16 threads, FastSpar was up to 821 times faster than SparCC (mean 221 times faster; Fig. 1A). Using a single thread, FastSpar was up to 118 times faster than SparCC (mean 32 times faster; Fig. 1A). The memory usage of FastSpar was up to 116 times less than that of SparCC (mean 26 times less; Fig. 1B). Notably, the memory performance of SparCC on datasets with more than 1000 OTUs improves considerably, owing to the conditional use of a more memory-efficient calculation for the variation matrix (Fig. 1B). This conditional calculation appears to be beneficial for SparCC when analyzing datasets with 500 or fewer OTUs but causes a substantial performance degradation for datasets with 500-1000 OTUs (Supplementary Fig. S4).

As expected, both run time and memory principally scale with OTU number rather than sample number (Fig. 1C). For large datasets, it is therefore essential to pre-process the OTU table to reduce the number of OTUs prior to calculating correlations. This can be achieved primarily using two approaches: (i) filtering poorly represented OTUs, or (ii) distribution-based clustering such as that used in dbOTU3. The latter approach aims to reunite OTUs derived from sequencing error with the parent OTU by clustering OTUs based on nucleotide edit distance and count distribution (Preheim et al., 2013). This has the advantage of retaining count information and thus improving statistical power. To simplify the workflow for large-scale correlation network analyses of microbiome data, FastSpar is packaged with an efficient C++11 implementation of dbOTU3 (github.com/scwatts/otudistclust) that has been optimized for the analysis of large datasets by applying concurrency design patterns.

FastSpar provides a more robust and efficient method for inferring correlation networks from large microbiome datasets, a task that was previously intractable yet is likely to become commonplace in modern cohort studies.

Funding

This work was supported by the National Health and Medical Research Council of Australia (Project #1062227, Fellowship #1061409 to K.E.H., Fellowship #1061435 to M.I. co-funded by the Australian Heart Foundation) and by the Australian Government Research Training Program (Scholarship to S.W. and S.R.).
FEM Approach for Transient Heat Transfer in Human Eye

In this paper, a bio-heat transfer model of the temperature distribution in the human eye is discussed, using appropriate boundary conditions for the cornea and sclera. A variational finite element method with a Crank-Nicolson scheme is used to calculate the transient temperature distribution in the normal human eye. The temperature with and without the effect of blood perfusion and metabolism on the retina is simulated and compared for various ambient temperatures, evaporation rates and lens thermal conductivities. The obtained results are compared with experimental results and past results found in the literature. The results show that the steady-state corneal temperature is achieved after around 31 and 45 minutes of exposure at ambient temperatures of 10 °C and 50 °C respectively. Steady-state eye temperature is achieved earlier at a higher evaporation rate; a similar result is obtained for higher lens thermal conductivity, and also for lower ambient temperature.

Introduction

In the human body, the internal (core) temperature remains almost constant despite fluctuations of the environmental temperature, up to certain limits. The main organ that keeps the core temperature constant is the dermal layer [1]. There is no such skin layer to keep the core temperature constant in the case of the human eye: the eyelid covers the ocular surface (cornea) for about 3 seconds per minute on average, so for the remaining 57 seconds per minute the cornea has to manage the thermal stress of the environment. The human eye is a relatively small and complex organ, consisting of several sub-domains with different material properties and a complex geometry.

The calculation of the temperature distribution in the human eye when it is heated or cooled is an important aspect of the development of infrared and radiofrequency safety guidelines and of hyperthermia and thermo-therapy treatments of various ocular diseases [2]. The physiological effect produced by even small temperature increases can cause eyesight to worsen: a temperature increase in the eye of only 3 °C - 5 °C can induce cataract formation [3]. Some researchers believe that thermal effects can induce cataracts; others believe cataracts are the result of other biological and genetic factors. One of the early theories suggested that heat exchange within the anterior eye caused the cataract: Verhoeff and Bell argued that cataract formed on the posterior surface of the lens because the anterior surface was cooled by the circulation of the aqueous humor while the cornea was air-cooled [4]. An investigation in Germany showed that cataracts were due to raised temperature induced indirectly through heat absorbed by the iris, where a rich blood supply would be consistent with a high degree of heating. At the same time, Salil noticed a rise in cataracts one year after a very hot, dry summer in Iowa, highlighting likely environmental causes of cataracts [4].

Due to the convective heat transport of the blood vessels, the blood picks up energy from hot areas and deposits it at cooler areas, or vice versa. The difficulty of modeling the eye lies in the impact of blood flow on heat transfer, so incorporating the impact of blood flow in the heat transport calculation is very important [2]. The temperature inside the human body depends on the degree of temperature, the duration of exposure and the environmental conditions which cause heat gain or loss from tissues [1]. Hence, blood flow and time are the main factors that affect the temperature distribution in the human eye.
Lagendijk [5] used a finite difference method to calculate the temperature distribution in human and rabbit eyes during hyperthermia treatment; the heat transport from the sclera to the surrounding anatomy is described by a single heat transfer coefficient which includes the impact of blood flow in the choroid and sclera. Scott [6] utilized the finite element method to obtain the temperature profile based on heat conduction, using the various heat transfer coefficients given by Lagendijk. Amara [7] presented a numerical thermal model of laser-ocular media interaction. Ng and Ooi [8] studied the effect of aqueous humor hydrodynamics on heat transfer within the human eye while neglecting blood perfusion and metabolism; they assumed that perfusion in the iris/ciliary body is sufficient to maintain its temperature at 37 °C. Li et al. [9] studied bio-heat transfer in the human eye, neglecting the effect of perfusion and metabolism, using a 3D alpha finite element method; they assumed the contribution of perfusion and metabolism to be very small because these occur only in a small part of the eye. Cvetkovic et al. [10] developed a thermal model and studied the effects of pulsed laser on the human eye. Narasimhan et al. [11] developed a transient model to study heat transfer in the human eye undergoing laser surgery. Flyckt et al. [2] studied the impact of choroidal blood flow and the scleral convection heat transfer coefficient in the human eye.

All the previously developed models have neglected the effects of blood perfusion and metabolism on the retina/iris/ciliary body. The significance of blood perfusion and metabolism for the temperature distribution in the eye is debatable, since they take place only in the retina, choroid, iris and ciliary body, which constitute a very small part of the eye; yet the blood flow in the iris/sclera plays a significant role in adjusting the eye temperature to that of the rest of the body [3]. In [8] it is clearly mentioned that "Due to lack of literature data, the perfusion term is neglected. The effect of this assumption on the accuracy of the model however remaining unknown". The retina has perhaps the highest oxygen consumption rate (metabolism) of any tissue in the body [12]. Hence it is necessary to investigate the effects of blood perfusion and metabolism in order to obtain a more accurate result. The objective of this paper is to determine the transient temperature distribution in the human eye, including the effects of blood perfusion and metabolism.

Model Formulation

The eye is assumed to be a perfectly bonded solid structure with each component homogeneous. The eye is considered as having six major components: cornea, aqueous humor, lens, vitreous humor, retina and sclera, each of fixed thickness as sketched in Figure 1. A one-dimensional finite element model of the human eye has been developed to simulate its unsteady thermal state. The governing differential equation for heat flow in the human eye, due to Pennes [13], is

\[ \rho c \frac{\partial T}{\partial t} = \frac{\partial}{\partial x}\!\left(k \frac{\partial T}{\partial x}\right) + \rho_b c_b \omega \,(T_b - T) + Q_m \tag{1} \]

where ρ = tissue density (kg/m³), c = tissue specific heat (J/kg·°C), ρ_b = blood density (kg/m³), c_b = blood specific heat (J/kg·°C), k = tissue thermal conductivity (W/m·°C), ω = volumetric blood perfusion rate per unit volume (s⁻¹), T_b = blood temperature (°C), T = tissue temperature (°C), t = time (s), and Q_m = heat generation due to metabolism (W/m³).
The three terms on the right-hand side of the bio-heat Equation (1) represent conduction, blood perfusion and metabolism. In this model, the effect of blood perfusion and metabolism is analyzed only on the retina.

Boundary conditions for the system can be defined as follows [9,14]:

1) At the back of the eye, heat is transferred from the blood to the sclera via the ophthalmic vessels:

\[ -k_s \frac{\partial T}{\partial n} = h_b \,(T - T_b) \tag{2} \]

where n is the normal direction to the surface boundary, k_s is the thermal conductivity of the sclera, h_b is the heat transfer coefficient between blood and eye (W/m²·°C), and T_b is the blood temperature (°C).

2) At the cornea, heat loss from the eye occurs through convection, radiation and tear evaporation:

\[ -k_c \frac{\partial T}{\partial n} = h_\infty (T - T_\infty) + \varepsilon \sigma \,(T^4 - T_\infty^4) + E \tag{3} \]

where h_∞ is the convection heat transfer coefficient between the cornea and the ambient environment (W/m²·°C), T_∞ is the ambient room temperature (°C), σ is the Stefan-Boltzmann constant (5.67 × 10⁻⁸ W/m²·K⁴), ε is the emissivity of the cornea, and E is the evaporative heat loss (W/m²). The nonlinear radiation term in boundary condition (3) is treated by a simple iterative procedure in which the quartic term is linearised about the previous iterate:

\[ T^4 \approx \bar{T}^{\,3}\, T \tag{4} \]

where \(\bar{T}\) represents an initial guess of the temperature. The iteration is complete when the convergence condition

\[ \lvert T - \bar{T} \rvert < \delta \tag{5} \]

is satisfied, where δ is the iteration tolerance. The inner body core temperature T_b is assumed to be 37 °C.

The partial differential Equation (1), together with boundary conditions (2) and (3), is cast in one-dimensional variational form as a functional I, which is written separately for the six layers. To optimize I, we differentiate it partially with respect to each nodal temperature T_i and equate to zero:

\[ \frac{\partial I}{\partial T_i} = 0, \qquad i = 0, 1, 2, \ldots, 5. \tag{11} \]

The system of Equations (11) can be written in matrix form as

\[ [C]\left\{\frac{dT}{dt}\right\} + [K]\{T\} = \{F\} \tag{12} \]

where [C] and [K] are 6 × 6 matrices called the capacity and conductivity matrices respectively. Applying the Crank-Nicolson method to system (12), with time interval Δt, gives

\[ \left(\frac{[C]}{\Delta t} + \frac{[K]}{2}\right)\{T\}^{n+1} = \left(\frac{[C]}{\Delta t} - \frac{[K]}{2}\right)\{T\}^{n} + \{F\}. \tag{13} \]

The temperature increases from the outer surface of the cornea towards the eye core when the ambient temperature is below 37 °C, and vice versa. Hence the initial temperature is assumed to vary linearly towards the body core with depth: for the initial nodal temperatures T_i(0) at time t = 0 we assume

\[ T_i(0) = T(0,0) + r\,x_i \]

where T(0,0) = 20 °C and r is a constant to be determined. Equation (13) is solved repeatedly to obtain the required nodal temperatures.

Results and Discussion

To solve Equation (13), the parameter values given in [14] are considered. The numerical calculation of the unsteady-state temperature distribution is carried out for the different parts of the human eye with and without blood perfusion and metabolism on the retina. The effects of different ambient temperatures, tear evaporation rates and lens thermal conductivities are studied and compared, and the results obtained are compared with experimental and other results found in the literature. The overall thermal behavior of the human eye is observed for 3600 seconds using a one-second time step.
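To make the assembly of (12) and the update (13) concrete, here is a minimal sketch of the scheme, assuming placeholder material values (not the values of [14]) and a single radiation re-linearisation per time step. The paper's 6 × 6 system takes T_6 = T_b; this sketch instead keeps the Robin condition (2) at the scleral node.

```python
# Illustrative 1D linear-FEM / Crank-Nicolson sketch for Equation (1) with
# boundary conditions (2)-(3). All property values are placeholders; the
# real model uses one property set per ocular layer.
import numpy as np

x = np.linspace(0.0, 0.025, 7)           # 7 nodes, 6 elements (m)
ne = len(x) - 1
k    = np.full(ne, 0.40)                 # W/(m.C)  conductivity per element
rc   = np.full(ne, 4.2e6)                # J/(m3.C) rho*c per element
perf = np.zeros(ne); perf[4] = 3.5e4     # W/(m3.C) rho_b*c_b*omega (retina)
qm   = np.zeros(ne); qm[4]   = 1.0e4     # W/m3     metabolic heat (retina)
Tb, Tinf = 37.0, 25.0                    # C
hb, hinf = 65.0, 10.0                    # W/(m2.C)
eps, sigma, E = 0.975, 5.67e-8, 40.0     # -, W/(m2.K4), W/m2

def assemble(T_cornea):
    """Build [C], [K], {F}; radiation linearised about T_cornea."""
    C = np.zeros((7, 7)); K = np.zeros((7, 7)); F = np.zeros(7)
    for e in range(ne):
        L = x[e + 1] - x[e]; s = slice(e, e + 2)
        m = (L / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
        K[s, s] += (k[e] / L) * np.array([[1.0, -1.0], [-1.0, 1.0]]) + perf[e] * m
        C[s, s] += rc[e] * m
        F[s] += (perf[e] * Tb + qm[e]) * (L / 2.0)
    # Cornea (node 0): convection, evaporation, and radiation folded into an
    # effective coefficient hr, exact at the linearisation point (absolute T).
    Tk, Ik = T_cornea + 273.15, Tinf + 273.15
    hr = eps * sigma * (Tk**2 + Ik**2) * (Tk + Ik)
    K[0, 0] += hinf + hr
    F[0] += (hinf + hr) * Tinf - E
    # Sclera (node 6): exchange with blood, Equation (2).
    K[6, 6] += hb
    F[6] += hb * Tb
    return C, K, F

dt, T = 1.0, np.linspace(20.0, 37.0, 7)  # 1 s steps; linear initial profile
for _ in range(3600):                    # one hour of simulated time
    C, K, F = assemble(T[0])             # re-linearise radiation each step
    A = C / dt + K / 2.0                 # Crank-Nicolson, Equation (13)
    B = C / dt - K / 2.0
    T = np.linalg.solve(A, B @ T + F)
print(T)                                 # nodal temperatures after one hour
```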
Effect of Blood Perfusion and Metabolism

The transient temperature distributions of several parts of the human eye at T∞ = 10 °C and T∞ = 50 °C are shown in Figures 2 and 3. It can be seen from Figures 2 and 3 that the anterior and posterior parts of the cornea, the aqueous and the lens temperatures begin to stabilize around 1870 seconds (approximately 31.17 minutes) and 2740 seconds (approximately 45.67 minutes) respectively. In the human eye, heat gain occurs through conduction, perfusion, metabolism, blinking, tear flow, evaporation and convection, but heat loss occurs only through conduction, evaporation, convection and radiation. More factors are involved in heating the eye components than in cooling them; hence the eye is more vulnerable when exposed to high temperatures (high ambient temperatures, hyperthermia treatment, laser surgery, etc.) than to low ones (low ambient temperatures, cryosurgery treatment, etc.). The temperature differences obtained between the anterior and posterior parts of the cornea are 2.2 °C and 0.65 °C at T∞ = 10 °C and T∞ = 50 °C respectively. Likewise, the temperature differences between the anterior and posterior parts of the lens are 0.2 °C and 0.05 °C.

The corneal temperature distributions with and without blood perfusion and metabolism in the retina at T∞ = 10 °C and T∞ = 50 °C are shown in Figures 4 and 5. It can be observed from Figures 4 and 5 that, once the corneal surface temperature has attained steady state in both cases, the temperature differences with and without blood perfusion and metabolism are 0.23 °C and 0.06 °C respectively. Steady-state corneal surface temperature is reached earlier in the case with blood perfusion and metabolism in the retina. Likewise, steady-state corneal surface temperature is reached earlier at T∞ = 10 °C than at T∞ = 50 °C, which is again due to the better heating mechanism than cooling in the eye.

For further investigation, we denote the case with blood perfusion and metabolism as Case I and the case without them as Case II.

Effect of Tear Evaporation

The corneal surface carries a three-layered tear structure: a mucoid layer, a thick aqueous layer and a thin oily layer. The function of the oily layer is to prevent evaporation of tears from the corneal surface; when the oily layer is destroyed, the evaporation rate increases dramatically [15]. For evaporation rates of 20 W/m², 40 W/m², 100 W/m² and 180 W/m², the corneal surface temperature differences between Case I and Case II are 0.10 °C, 0.12 °C, 0.18 °C and 0.27 °C respectively. The differences show that, as the evaporation rate increases, the nodal temperature decreases in both cases, but the rate of decrease is slower in Case I than in Case II; that is, steady-state corneal surface temperature is reached earlier in Case I than in Case II, owing to the effect of perfusion and metabolism on the retina. The corneal surface temperature drops by 4.78 °C in Case I and by 4.95 °C in Case II when the evaporation rate is increased from 20 W/m² to 180 W/m². Also, steady-state corneal temperature is achieved earlier at E = 180 W/m² than at 20 W/m². The sudden decrease in corneal temperature at E = 180 W/m² increases the temperature difference between the cornea and the aqueous, and the greater the temperature difference between two surfaces, the faster the rate of transfer of thermal energy. Hence steady-state corneal temperature is reached earlier at a higher evaporation rate.
Effect of Ambient Temperatures

Heat losses occur due to convection, radiation and tear evaporation at the cornea. This loss is strongly related to the ambient temperature; in addition, the ambient temperature is one of the factors affecting the amount of tears in the eyes [9]. Four ambient temperatures, 10 °C, 25 °C, 40 °C and 50 °C, are considered for the analysis; the numerical results are presented in Figures 8 and 9. It can be seen from Figures 8 and 9 that increasing the ambient temperature from 10 °C to 50 °C increases the corneal temperature from 27.69 °C to 39.69 °C in Case I and from 27.41 °C to 39.78 °C in Case II. This shows that an increase in ambient temperature raises the eye temperature less in Case I than in Case II, and vice versa, owing to the cooling effect of perfusion at the retina. Also, the steady-state corneal temperature is reached earlier at T∞ = 10 °C than at T∞ = 50 °C, because more factors are involved in heating the eye components than in cooling them.

Effect of Lens Thermal Conductivities

It is well known that the water content of the lens decreases with age. A decrease in the water level of the lens increases its hardness, and this process changes the thermal conductivity of the lens with age [14]. Four values of the lens thermal conductivity are considered; the results are shown in Figures 10 and 11. It is found that increasing the lens thermal conductivity from 0.21 W/m·°C to 0.54 W/m·°C increases the corneal temperature by 0.87 °C in Case I and by 0.85 °C in Case II, but decreases the posterior lens temperature by 0.23 °C in Case I and by 0.25 °C in Case II. More heat is transferred via conduction from the posterior region to the anterior region when the thermal conductivity of the lens increases; as a result, the corneal surface temperature increases. The steady-state corneal temperature is achieved earlier at K₃ = 0.54 W/m·°C than at 0.21 W/m·°C, but the trend is reversed for the posterior part of the lens. Similarly, steady-state eye temperature is reached earlier in Case I than in Case II due to the effect of perfusion and metabolism on the retina.

Conclusions

In this model, a comparative study of the temperature distribution with and without the effect of blood perfusion and metabolism on the retinal part of the human eye is presented. The steady-state corneal temperature is found to be 32.17 °C in Case I and 32.05 °C in Case II. Many authors [1,3,10,13,15] reported similar temperature distributions along the pupillary axis of the eye, with reported corneal surface values ranging from 30.92 °C to 33.7 °C. Hence, our steady-state results are in accordance with past results. In earlier studies [1,3,10,13,15], the effect of blood perfusion and metabolism on the retinal part was assumed negligible; we are able to show that blood perfusion and metabolism play an important role in maintaining the eye temperature.

The surface temperature of the cornea was measured using a bolometer by Mapstone [16]; the mean temperature variation on the corneal surface was 0.8 °C within 33.2 °C - 36 °C. Kessel et al. [17] found that the steady-state corneal temperature lay between 33 °C and 35 °C when the ambient temperature was increased, and that a 20 °C increase in ambient temperature, from 2 °C to 22 °C, was required to increase the corneal temperature by 3 °C. In our model, the steady-state corneal temperature is achieved between 32.17 °C and 35.58 °C when the ambient temperature is increased from 20 °C to 40 °C, and a 20 °C increase in ambient temperature is required to increase the corneal temperature by 3.38 °C. Hence, the temperatures obtained from our model agree with the experimental results obtained by Mapstone [16] and Kessel et al.
[17]. The slight differences obtained may be due to the different parameter values adopted for the various layers of the human eye.

This model may help in understanding the thermal behavior of the eye, which can be crucial in ocular conditions such as corneal pain, presbyopia, and cataracts. It may also prove valuable in eye therapies and surgeries such as microwave hyperthermia and laser heating. Thus, the model could be useful to researchers studying the effects of heat flux inside the human eye and to medical scientists seeking to improve diagnosis and treatment.

Notation: T0, T1, T2, T3, T4, and T5 are the nodal temperatures at the respective nodal distances along the pupillary axis, and T6 = Tb is the body core temperature. Figure 1 gives the finite element sketch of the human eye.

Four values of E, namely 20 W/m², 40 W/m², 100 W/m², and 180 W/m², are used in this investigation. The temperature variations for the different evaporation rates at T∞ = 25˚C and K3 = 0.4 W/m˚C are shown in Figures 6 and 7.

Figure 6. Corneal surface temperature for different evaporation rates (Case I).
Figure 7. Corneal surface temperature for different evaporation rates (Case II).
Figure 8. Corneal surface temperature for different ambient temperatures (Case I).
Figure 9. Corneal surface temperature for different ambient temperatures (Case II).
Figure 10. Anterior corneal and posterior lens temperatures for different lens thermal conductivities (Case I).
Figure 11. Anterior corneal and posterior lens temperatures for different lens thermal conductivities (Case II).
Multi-Level of Feature Extraction and Classification for X-Ray Medical Image

INTRODUCTION

The production and relatively straightforward management of digital visual content have been increasingly in demand over recent years. Specifically in the medical domain, the continuous development of medical imaging modalities such as X-ray, Computed Tomography (CT) scans, and Magnetic Resonance Imaging (MRI) scans contributes a substantial number of images daily. For example, the Department of Radiology in the University Hospital of Geneva produced 12,000 to 15,000 images daily in 2002 [1]; the number of images produced and stored daily by this department continued to increase, reaching 50,000 in 2007 [2] and 114,000 in 2009 [3]. Essentially, these images reveal critical information about visually inaccessible body parts, which is essential for medical diagnosis, medical education, and medical studies. Therefore, effective techniques to navigate and accurately search substantial numbers of medical images are necessary.

The conventional image retrieval system depends on keyword search, in which keywords or annotated image descriptions are manually assigned for indexing purposes. Relevant images are then retrieved using this index, an approach known as Text-Based Image Retrieval (TBIR). However, the TBIR method becomes impractical in the presence of thousands or even millions of images in the database, as the process of entering metadata for each of these images is costly and time-consuming [4]. Consequently, rather than depending on TBIR, the Content-Based Image Retrieval (CBIR) method is opted for, in which the image retrieval process depends on features extracted from the image itself (the visual content of an image). Specifically, low-level features such as color, texture, and shape are considered as feature vectors, which are automatically extracted in the process of searching for images similar to the query image. Accordingly, this technique is less time-consuming than techniques that depend on texts for indexing and retrieval [5]. However, CBIR does not interpret data the way a human does, and the system cannot elucidate image content as a human perceives it. This limitation is known as the semantic gap [6], which is defined as the difference between how a human perceives an image based on high-level semantic concepts and how a computer classifies an image based on low-level features. Nevertheless, in practice, CBIR cannot be achieved based only on simple independent visual features.

Various medical image classification methods using machine learning have been developed to reduce the semantic gap. With that, this study formulated an effective classification system for X-ray medical images based on multi-level feature extraction, feature reduction, and multi-classification techniques. The evaluation of this integration was performed using the ImageCLEF2005 database. Attempts to utilize global or local features with either a Support Vector Machine (SVM) classifier or a k-Nearest Neighbor (k-NN) classifier for X-ray medical images were made in various related studies, as summarized in Table 1 [7]-[9]. For this study, the evaluation was based on the correctness rate. The correctness rate, as shown in Equation 1, is the result of dividing the number of correctly classified images by the total number of images.
Correctness rate = (number of correctly classified images / total number of images) × 100% (1)

ANALYSIS AND PROPOSED SOLUTION

Realistically, it is a challenge to reduce the semantic gap because the visual features of images do not capture high-level semantic concepts, and users tend to opt for text-based queries instead of utilizing the content of images. This has instigated further studies to develop effective medical image classification methods. However, the process of fitting a semantic model for classifying images and enhancing retrieval performance is complex. Conversely, the results obtained from previous studies that classify X-ray medical images using global or local features with either SVM or k-NN classifiers, as shown in Table 1, are not regarded as the finest solutions to the problem of reducing the semantic gap, and they vary considerably from one another. For example, referring to Table 1, the RWTH-i6 team achieved an error rate of 12.6% while the Montreal team achieved an error rate of 55.0% on the same dataset.

Meanwhile, Mueen [10] combined global, local, and pixel-level feature extraction for X-ray medical image classification and annotation using both an SVM classifier and a k-NN classifier. The outcome of this combined feature extraction on 57 classes (ImageCLEF2005 database) revealed that the performance of the SVM exceeded that of the k-NN in most classes (specifically, 48 classes), while the k-NN exceeded the SVM in the remaining nine classes only. With that, the SVM was chosen for annotation, and three hierarchical levels of image annotation were applied to reduce the semantic gap. Apart from that, in another study on 4,937 X-ray medical images, Fesharaki & Pourghassem [11] achieved an accuracy rate of 82.8% using shape feature extraction and a Bayesian classifier. Conversely, Ghofrani [12] achieved a higher accuracy rate (90.8%) using shape and edge feature extraction with an SVM classifier on a dataset of 1,169 X-ray medical images. The accuracy rate increased to 94.2% with the integration of shape and texture feature extraction and an SVM classifier (rather than a neural classification technique) on a dataset of 4,402 X-ray medical images [13]. Zare [14] utilized feature extraction based on the Gray Level Co-occurrence Matrix (GLCM), Canny edges, pixels, bag-of-words (BoW), and local binary patterns, together with SVM and k-NN classifiers, where the SVM achieved the higher accuracy rate (90%) on the ImageCLEF2007 database. In conclusion, there is a need for an effective classification approach that integrates multi-level feature extraction (global and local features) and multi-classification techniques for X-ray medical image classification.

METHODOLOGY

This study proposes a framework to classify X-ray medical images based on multi-level feature extraction using the ImageCLEF2005 database. The development of the proposed framework was based on feature extraction, combination and selection, and classification, which are discussed in the following sections.

Feature Extraction

This study extracted, combined, and utilized various features to explore different aspects of X-ray medical images. As presented in Table 1, several feature extraction schemes have been utilized in prior work, with global and local features considered in certain studies. For this study, the following feature extraction algorithms were considered: (1) global features, (2) local features, (3) pixel features, and (4) speeded up robust features (SURF).
In particular, global features were extracted from each image by applying shape and texture feature techniques, which generated 282 features: 130 dimensions of shape features and 152 dimensions of texture features. The local features, on the other hand, were extracted by segmenting the input image into four non-overlapping blocks of pixels, resulting in the extraction of 282 dimensions from each block. The pixel feature was extracted after resizing each image to 15 x 15 pixels, which generated 225 features. The SURF technique subsequently extracted 150 features from each image.

Texture Feature

Essentially, texture features describe the underlying structural arrangement of the surfaces in the input image. Two types of texture features are used here: (1) the Gray Level Co-occurrence Matrix (GLCM) and (2) the Wavelet Transform (WT). The GLCM was first introduced in [15]. It is mainly utilized to compute second-order texture characteristics and solves categorization problems efficiently. For an N x N image with pixel gray levels 0, 1, 2, ..., (G - 1), the GLCM is a matrix in which each element records the joint incidence of a pair of intensity levels at a given inter-pixel distance d and a given orientation angle [16]. To obtain enhanced outputs, several co-occurrence matrices should be considered, one for each offset, giving various texture features or similar features at various scales. Several texture measures can be calculated directly from the GLCM [15] [23]. Generally, θ is quantized into four directions: 0˚, 45˚, 90˚, and 135˚. Twenty-two texture statistics were computed from the GLCM using the standard definitions, expressed in terms of the marginal distributions px and py, their means, their standard deviations, and their entropies (HX, HY, and HXY); for example, the maximum correlation coefficient is defined as the square root of the second-largest eigenvalue of the matrix Q (Equation 20).

Meanwhile, one of the most commonly used methods for multi-resolution image description and analysis is the WT. It offers an efficient set of tools for applications such as image or signal compression, object detection, image enhancement, and noise removal. Wavelets are functions obtained by translating and scaling a basic wave function. This study uses the Haar wavelet to extract texture features; the first known wavelet, it is considered the simplest wavelet basis and yields an orthonormal wavelet transform with compact support [24]. The Haar function is defined as a step function (Equation 24).

The Haar wavelet was applied in this study since it is the most efficient technique for calculating the feature vector [25]. It was applied four times in order to divide the input image into 16 sub-images, as illustrated in Figure 2. Each image I was initially resized to 100 x 100 pixels. The Haar wavelet was then applied to the image, dividing it into four sub-images: L10L10, L10H10, H10L10, and H10H10. In the L10L10 sub-image, low frequencies were present in both the horizontal and vertical directions. In the L10H10 sub-image, low frequencies were present in the horizontal direction while high frequencies were present in the vertical direction.
However, in the sub-image of H10L10, high frequencies were present in the horizontal direction while low frequencies were present in the vertical direction, and in the H10H10 sub-image, high frequencies were present in both directions. Following that, the Haar wavelet was applied to the L10L10 sub-image to obtain four new sub-images: L11L11, L11H11, H11L11, and H11H11. A similar process was repeated twice more to obtain the third- and fourth-level sub-images, as illustrated in Figure 2. Additionally, four features were computed for each sub-image at each of the four decomposition levels: (1) entropy, (2) energy, (3) mean, and (4) standard deviation. With that, 64 features were computed from all sub-images. Figure 3 illustrates a sample image used as an input for the Haar WT, obtained from the ImageCLEF2005 database, and Figure 4 depicts the sub-images obtained after applying the second, third, and fourth Haar wavelets. The Haar wavelet was applied four times, dividing the input image into 16 sub-images, in order to extract the most information from the image; applying the Haar wavelet a fifth time yields an L13L13 sub-image equal to zero, so it was applied only four times.

Shape Feature

The shape feature offers geometrical information about an image object that does not vary with changes in orientation, scale, or location. For this process, the shape information of an image was explored based on edges. Thus, the histogram-of-edges techniques and the SURF technique were applied in this study to extract the shape features of images. The histogram of edges was utilized to explore the shape feature of each image; in particular, both a gradient histogram and an edge orientation histogram were applied. The first edge histogram technique was utilized to extract 50 features from each image, while the second edge histogram technique was utilized with a Canny filter to extract 80 features from each image [26].

The SURF technique has a scale and rotation invariance property, which facilitates object identification regardless of resizing of the image or rotation around a certain axis [27]. Realistically, variance occurs because not all information can be captured in a specific recording; invariance is an essential image property because it makes similarity measurement feasible between two images that can never be exact duplicates. The SURF technique was applied to extract 150 features from each image.
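As a concrete illustration of the texture extractors described above, the sketch below computes a small set of GLCM statistics and a four-level Haar decomposition with the four per-band statistics named in the text. It is a minimal approximation, not the paper's implementation: only four of the twenty-two GLCM statistics are shown, the discrete entropy definition is one common convention, and scikit-image (0.19 or later) and PyWavelets are assumed as one reasonable way to realize the described steps.

```python
# Illustrative GLCM and Haar-wavelet texture extraction; the statistics
# kept per wavelet sub-band follow the paper (entropy, energy, mean, std).
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops

def glcm_features(img_u8):
    """A subset of co-occurrence statistics in the four usual directions."""
    glcm = graycomatrix(img_u8, distances=[1],
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=256, symmetric=True, normed=True)
    props = ['contrast', 'correlation', 'energy', 'homogeneity']
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def subband_stats(band):
    vals = np.abs(band).ravel()
    p = vals / (vals.sum() + 1e-12)                 # normalize for entropy
    entropy = -np.sum(p * np.log2(p + 1e-12))
    return [entropy, np.sum(vals**2), vals.mean(), vals.std()]

def haar_features(img, levels=4):
    """4 stats x 4 sub-images x 4 levels = 64 wavelet features."""
    feats, ll = [], img.astype(float)
    for _ in range(levels):
        ll, (lh, hl, hh) = pywt.dwt2(ll, 'haar')    # re-decompose the LL band
        for band in (ll, lh, hl, hh):
            feats.extend(subband_stats(band))
    return np.array(feats)

img = np.random.randint(0, 256, (100, 100), dtype=np.uint8)  # stand-in X-ray
print(glcm_features(img).shape, haar_features(img).shape)    # (16,) (64,)
```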
Combination and Selection

The combined feature refers to the combination of the global features, local features, pixel features, and SURF into one vector. Figure 5 depicts the overall process of feature extraction as well as combination and selection. To extract pixel features, images were resized to 15 x 15, contributing a vector of 225 pixel features. The global features refer to the shape and texture features extracted from the whole image; the resultant combined vector contained 282 features, specifically 130 features from the edge histograms, 64 features from the WT, and 88 features from the GLCM. Conversely, the local features were extracted by segmenting the image into four non-overlapping patches, each described by the same 282 features; this led to 1,128 features combined into one local feature vector. Meanwhile, 150 features were obtained for the SURF. As a result, the overall feature vector dimensionality for each image equals 1,785 features. Given the substantial number of features involved, a dimensionality reduction technique must be applied to decrease the feature vectors. The most commonly used dimensionality reduction technique is principal component analysis (PCA) [28], a simple technique that effectively decreases the dimensionality of data. With this technique, the feature vectors were reduced from 1,785 to 25, 50, and 100 features in order to study and choose the setting with the optimal precision outcomes.
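The feature assembly and reduction just described can be summarized in a few lines. The sketch below is illustrative only: the four extractor stubs stand in for the methods described in this section (their outputs here are placeholders), while the dimensionalities and the PC1/PC2/PC3 settings follow the text.

```python
# Assembling the four feature families into the 1,785-dimensional vector
# and reducing it with PCA to the PC1/PC2/PC3 sizes.
import numpy as np
from sklearn.decomposition import PCA

def extract_all(image):
    global_f = np.zeros(282)    # shape + texture over the whole image
    local_f  = np.zeros(1128)   # 282 features from each of 4 blocks
    pixel_f  = np.zeros(225)    # 15 x 15 resized intensities
    surf_f   = np.zeros(150)    # SURF descriptors
    return np.concatenate([global_f, local_f, pixel_f, surf_f])  # 1,785

X = np.vstack([extract_all(None) + 0.1 * np.random.randn(1785)
               for _ in range(200)])          # toy design matrix
for name, n in [("PC1", 25), ("PC2", 50), ("PC3", 100)]:
    reduced = PCA(n_components=n).fit_transform(X)
    print(name, reduced.shape)                # (200, 25) / (200, 50) / (200, 100)
```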
Classification

Image classification is the main aspect of this study with respect to its objectives. Four distinct feature sets were initially extracted from the input image: global features, local features, pixel features, and SURF. These extracted features were then combined into one feature vector, and PCA was performed to decrease the dimensionality of the feature vectors. The developed image classification system was evaluated using the ImageCLEF2005 database [29]. This database was segmented into a training set and a testing set, and the training set was categorized into 57 known, pre-defined classes.

EVALUATION

A series of experiments was conducted to evaluate the performance of the proposed method and to validate its significance for X-ray medical images. The implementation of the proposed method included feature extraction, feature combination and reduction, and X-ray medical image classification using SVM and k-NN classifiers, which were evaluated based on the resulting accuracy rates. Four experiments were conducted in total; the specific methods and settings of these experiments are described, and the obtained results presented and discussed, in the following sub-sections.

Experiment 1 -Feature Reduction

This experiment was conducted to evaluate the performance of our proposed system after reducing the number of features using PCA, which was essential for determining the accuracy rate of the system. Feature vectors were reduced from 1,785 features to 25, 50, and 100 features, termed PC1, PC2, and PC3, respectively. The accuracy rates with each feature reduction were obtained with and without a threshold for both dataset splits with specific ratios of training set to testing set (80:20; 90:10), and the obtained values were compared. The PCA is considered competent and effective in reducing the dimensionality of data. Both the SVM (with RBF kernel) and the k-NN (with k = 1) were employed at each evaluation stage in this experiment. Table 5 shows the results obtained using the k-NN classifier, while Table 6 shows the results obtained using the SVM classifier. Consequently, PC2 (50 features) achieved the highest accuracy rate: it obtained the highest percentages with and without the threshold using both classifiers. Based on this experiment, PC2 was retained for the subsequent experiments. It should be noted that a sufficient number of features is essential for discrimination and for a high accuracy rate: having too few features can lead to a low accuracy rate, since an inadequate number of features weakens the discrimination between the features of different images. Nevertheless, a high accuracy rate is not guaranteed by a high number of features either, because a high occurrence of common features likewise weakens the discrimination between the features of different images. Consequently, PC2 was shown to achieve the highest accuracy rate, rather than PC1 (25 features) or PC3 (100 features).

Experiment 2 -Feature Combination

This experiment aimed to investigate the performance of each single feature extraction among the four feature sets (global features, local features, pixel features, and SURF) and of the combination of these four feature extractions. The outcome of this experiment was crucial in terms of accuracy rate and indexing, and the results were compared with those of related previous studies in terms of feature sets. Most of the medical content-based image retrieval systems utilize global features. The main advantage of global features is computation speed: feature extraction and matching by similarity are computationally faster. However, they may fail to identify pertinent visual characteristics. The classification process for global features includes two phases, training and testing. In the training phase of this study, global features were extracted from all training images, and the classifier was then trained on these extracted features to create a model. To classify the test images, features were first extracted in the same way as in the training phase, and the model was then used to classify the test images.

Local features, by contrast, are inherently robust against translation. In this experiment, local features were extracted from four square sub-images, taken from the original ones after dividing each image into four blocks. The same classification process applied for the global features was then applied for the local features, except that the local features were extracted from each sub-image.

Pixel value comparison is also an effective approach for seeking similar images in a database. For most applications, this approach is not feasible because the difference between the pixels of one image and another is not evident. However, pixel value comparison is feasible for identifying a single specific object of equal size located at a similar position (similar row and column of the image matrix) between images of small resolution. Experiment 2 therefore also utilized pixel information.

The SURF, a descriptor feature, is also a scale and rotation invariant detector; scale and rotation invariance denote that an object can be identified even when it is scaled in size or rotated. The SURF was applied in this experiment as well, but it was not utilized as one of the local features, given that extracting these features per block is a time-consuming process.

In the training phase, all images were resized to 100 x 100 pixels, and the resulting large feature vector containing 1,785 features was reduced to 25 features, 50 features, and 100 features using PCA. Referring to the results of Experiment 1, PC2 (50 features) was used for Experiment 2. For the generation of the model, both the SVM classifier and the k-NN classifier were compared. The SVM is widely used for statistical learning and classification and primarily deals with binary classification problems. There are presently two multi-class classification approaches in use, specifically the one-against-one approach and the one-against-all approach.
The one-against-all approach was chosen for this experiment because it is computationally faster than the other approach. Accordingly, the RBF kernel was applied with g = 0.0625 and a trade-off between the training error and margin of c = 8; these values were obtained from an empirical study. The second most widely used classification method, the k-NN (k = 1), was used for further comparisons (the parameters of the SVM and k-NN are discussed in more detail in the classification-and-parameters experiment below). Results were calculated after performing random sampling on the dataset ten times in order to produce reliable results. The results shown in Table 7 and Table 8 are the correctness rates of the different feature sets using the SVM classifier and the k-NN classifier, respectively.

It can be observed that in the XMIAR prototype, the combination of all four feature sets with the SVM classifier achieved the highest accuracy rate (95.368%) under the second evaluation setting (90% training images and 10% testing images) without a threshold. The combined features comprised the pixel information, the global features (shape and texture), the local features (shape and texture), and the SURF. The application of the SVM using the combined features therefore outperformed the applications using each feature set separately: (1) the global feature set, (2) the local feature set, (3) the pixel value set, and (4) the SURF set. The comparison of these distinct features also revealed that pixel features outperformed both global features and local features across all evaluation sets, while local features gave higher accuracy rates than global features across all evaluation sets. With the combined features, correctness rates of 95.368% (SVM) and 99.202% (k-NN) were recorded for the 90:10 split without a threshold. In practice, different image features reflect different attributes, which explains why the combined features provided higher correctness rates.

Figure 6 shows the accuracy rate for each class using both the SVM classifier and the k-NN classifier with the 90:10 evaluation set without a threshold. It can be observed that the SVM classified images more efficiently than the k-NN for various classes, such as classes 15, 23, 29, 37, and 51, while the k-NN outperformed the SVM for other classes such as classes 21 and 44. Both classifiers achieved almost similar accuracy rates for classes 50 and 52, and for the remaining classes the SVM and k-NN provided convergent results. Certain classes had a substantial number of training images, such as class 12, while other classes had few training images, such as classes 51, 52, and 55, with only eight samples each, as shown in Table 4. The k-NN classifier performed more efficiently when the objects in the images contrasted distinctly with the backgrounds and when all gray pixels were concentrated in one part of the image. The results of this study show an improvement over the results obtained in previous related studies using the same dataset. In fact, the proposed method provided a higher accuracy rate than the winning team of the ImageCLEFmed2005 task, RWTH-i6, as well as higher accuracy rates than previous related studies [10]-[14], [28].
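A sketch of this training-and-evaluation loop is shown below. It mirrors the reported setup (RBF SVM with c = 8 and g = 0.0625 in a one-against-all scheme, 1-NN, ten random 90:10 samplings), but it runs on synthetic stand-in data, so the printed scores are not the paper's results.

```python
# Ten random 90:10 samplings comparing an RBF SVM (one-against-all)
# with a 1-NN classifier on synthetic PC2-style data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(570, 50))              # PC2-reduced combined features
y = rng.integers(0, 57, size=570)           # 57 ImageCLEF2005-style classes

svm_scores, knn_scores = [], []
for seed in range(10):                      # ten random samplings
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.10,
                                          random_state=seed)
    svm = OneVsRestClassifier(
        SVC(kernel="rbf", C=8, gamma=0.0625)).fit(Xtr, ytr)
    knn = KNeighborsClassifier(n_neighbors=1).fit(Xtr, ytr)
    svm_scores.append(svm.score(Xte, yte))  # correctness rate, Equation 1
    knn_scores.append(knn.score(Xte, yte))
print(f"SVM {np.mean(svm_scores):.3f}, k-NN {np.mean(knn_scores):.3f}")
```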
Experiment 3 -Classification and Parameters

The two main classifiers in this study were the SVM with an RBF kernel and the k-NN with the Euclidean distance metric for locating the nearest neighbor. This experiment was conducted to compare the SVM classifier with the k-NN classifier using different parameters, and to identify the optimal parameters for each classifier together with its respective accuracy rate. The k-NN was considered for this comparison because of its popularity and the classification performance shown in previous related studies; moreover, compared to the SVM, the implementation of the k-NN is simpler because there is no offline training. For this study, the SVM was applied using the Library for Support Vector Machines (LIBSVM) [29]. LIBSVM is integrated software for support vector classification that supports multi-class classification; its main features include effective multi-class classification, cross-validation for model selection, and various kernels. In this experiment, the k-NN was examined with different values of k, and a comparison was conducted for the SVM using RBF kernels with different parameter values. This was an empirical trial-and-error study to select the optimal kernel function. The empirical study revealed that the values obtained using the k-NN (k = 1) and the SVM (-t = 2, -c = 8, -g = 0.0600) were optimal for the classification of the ImageCLEF2005 database. Among the SVM parameters, -t represents the type of kernel, -c is the trade-off between the training error and the margin, and -g denotes gamma, which controls how far the influence of a single training example reaches; low values of -g mean far-reaching influence, while high values mean close influence.

Following the previous experiment, the dataset was divided 90:10 to calculate the results of this experiment, and the same feature vectors of global, local, pixel, and SURF features were utilized with the application of PCA. As illustrated in Figure 6, the results revealed that when c equals 8, improved classification was achieved for the SVM classifier. Moreover, the default value of gamma (g) is obtained as g = 1/(number of features); given that both the training images and the testing images contained 50 features per image, the default value equals 1/50 = 0.02. The empirical study revealed that an increased value of gamma provided higher stability and accuracy. The results using the k-NN revealed that the highest accuracy rate was achieved when k = 1, which is, in fact, the default value of k; a comparison between this value and other values (k = 2, k = 3) is shown in Figures 7 and 8. One drawback of using the SVM is the time required for offline training: on an Intel(R) i7-4500U processor with 8 GB RAM, with the code written in MATLAB 2012a, the SVM took approximately five hours to train, while the k-NN ran almost instantaneously.
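The default-gamma rule and the trial-and-error parameter selection described above can also be framed as a small grid search; the sketch below shows this on synthetic data. The candidate values echo the text (the 0.02 default, the 0.0600 and 0.0625 settings, c = 8), but the grid itself and the data are illustrative assumptions.

```python
# The default gamma discussed above: g = 1 / (number of features) = 1/50.
# A small grid search over c and g is one way to arrive at settings such
# as c = 8 with a gamma above the default.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 50))              # 50 PCA features per image
y = rng.integers(0, 5, size=300)

print("default gamma:", 1 / X.shape[1])     # 0.02 for 50 features
grid = GridSearchCV(SVC(kernel="rbf"),
                    param_grid={"C": [1, 2, 4, 8, 16],
                                "gamma": [0.02, 0.0600, 0.0625, 0.1]},
                    cv=3)
grid.fit(X, y)
print("best parameters:", grid.best_params_)
```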
DISCUSSION

These experiments validated the significance of classifying X-ray medical images for meaningful image retrieval. Experiment 1 was conducted to obtain the optimal number of features for the subsequent experiments using PCA; its outcome was that PC2 (50 features) achieved the highest accuracy rate for both classifiers. Meanwhile, the results obtained from Experiment 2 distinctly revealed that combined features yielded a higher accuracy rate than any single feature set. Given the complexity of medical images, this study showed that utilizing all available features was the optimal approach for enhancing retrieval and classification performance. Typically, the classification of an image depends on low-level features, while the annotation depends on the accuracy rate of the classification. Additionally, Experiment 3 compared the classification techniques and their parameters. Based on their optimal performances, both the SVM classifier and the k-NN classifier were utilized in the proposed system to obtain better accuracy results.
Preparation and Performance Evaluation of a Zinc Oxide-Graphene Oxide-loaded Chitosan-Based Thermosensitive Gel

This study aimed to develop and assess a chitosan-based biomedical antibacterial gel, Zinc Oxide-Graphene Oxide/Chitosan/β-Glycerophosphate (ZnO-GO/CS/β-GP), loaded with nano-zinc oxide (ZnO) and graphene oxide (GO), known for its potent antibacterial properties, biocompatibility, and sustained drug release. ZnO nanoparticles (ZnO-NPs) were modified and integrated with GO sheets to create 1% and 3% ZnO-GO/CS/β-GP thermosensitive hydrogels, based on the ZnO-GO to chitosan (CS) mass ratio. Gelation time, pH, structural changes, and microscopic morphology were evaluated. The hydrogel's antibacterial efficacy against Porphyromonas gingivalis, biofilm biomass, and metabolic activity was examined alongside its impact on mouse preosteoblasts (MC3T3-e1). The findings of this study revealed that both hydrogel formulations exhibited temperature sensitivity while maintaining a neutral pH. The ZnO-GO/CS/β-GP formulation effectively inhibited P. gingivalis bacterial activity and biofilm formation, with the antibacterial rate of the 3% ZnO-GO/CS/β-GP gel approaching 100%. MC3T3-e1 cells displayed good biocompatibility when cultured in the hydrogel extract. The ZnO-GO/CS/β-GP thermosensitive hydrogel demonstrates favorable physical and chemical properties, effectively preventing P. gingivalis biofilm formation. It exhibits promising biocompatibility, suggesting its potential as an adjuvant therapy for managing and preventing peri-implantitis, subject to further clinical investigations.

Therefore, compositing ZnO-NPs onto GO lamellae may result in enhanced antibacterial properties in comparison to using either ZnO-NPs or GO alone. To provide precise and efficient distribution of ZnO-GO near the implant, a CS/β-GP hydrogel was employed as a delivery system. This hydrogel can provide a moist environment for drug delivery, absorb exudates from tissues, be administered through injection, exhibit sensitivity to temperature, prevent bacterial growth, and deliver numerous medications simultaneously [17]. These features make the CS/β-GP hydrogel a promising carrier for delivering ZnO-GO into periodontal pockets or the tissues surrounding implants [15].

Synthesis of ZnO-GO

ZnO-NPs were first prepared by drying and dispersing them in a solution of absolute ethanol and deionized water. Next, the PFOTS silane coupling agent was introduced, and the pH was adjusted to a range of 4-5 using acetic acid. This mixture was subjected to magnetic stirring in a thermostatic water bath at 80°C for 6 h. After settling, the upper layer was decanted, and the ZnO-NPs were retrieved, washed, and dried. The morphology of the ZnO-NPs before and after modification was then examined using transmission electron microscopy (TEM, 100 kV, HT7700, Japan).

To synthesize the ZnO-GO composite, the modified ZnO-NPs and GO were separately dispersed in absolute ethanol using ultrasonication. These dispersions were combined in a 3:1 mass ratio and magnetically stirred at 80°C for 4 h in a constant-temperature water bath. The supernatant was removed after the mixture had settled, and the remaining composite was dried for 12 h to yield ZnO-GO. The product was characterized by scanning electron microscopy (SEM, 6 kV, FEI QUANTA FEG250, USA) to examine its structural attributes. Furthermore, X-ray diffraction analysis was performed using an X-ray diffractometer (XRD, Smart Lab 3KW, Japan) to elucidate the composite's crystalline nature and compositional details.
Preparation of Thermosensitive Gel

Solution-gel systems were prepared by first weighing three 200 mg aliquots of chitosan at room temperature. Each aliquot was dissolved in 6 ml of 0.1 mol/l acetic acid solution. Subsequently, three separate 2 ml aliquots of deionized water were supplemented with 0 mg, 2 mg, and 6 mg of ZnO-GO, respectively, followed by a 1-min sonication. In a parallel preparation, three 600 mg aliquots of β-GP were weighed and dissolved in 2 ml of deionized water each. The chitosan solutions, ZnO-GO dispersions, and β-GP solutions were combined using magnetic stirring to ensure uniform mixing.

This process resulted in the formation of three distinct solution-gel systems, which differed in ZnO-GO content relative to chitosan: a CS/β-GP group with no ZnO-GO, a 1 wt% ZnO-GO/CS/β-GP group, and a 3 wt% ZnO-GO/CS/β-GP group. The specific mass ratios of ZnO-GO to chitosan in each group are detailed in Table 1.

Determination of Gelation Time and pH

The gelation time of each hydrogel was assessed using the tube inversion approach [16]. For this, 1 ml of hydrogel from each group was placed in a test tube and set in a water bath at 37°C. The time taken for the hydrogel to turn from liquid to solid was recorded as the gelation time. Additionally, the pH value of each hydrogel sample was measured using a pH meter before and after gelation, with each measurement repeated five times to obtain an average value.
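The averaging and single-factor comparison applied to such gelation-time readings can be sketched as below. The group means follow the values reported later in the paper, but the individual replicates are invented for illustration, and scipy's one-way ANOVA is used as one plausible realization of the single-factor analysis named in the statistical methods.

```python
# Illustrative single-factor comparison of gelation times across the
# three gel groups; means mirror the reported results, replicates invented.
import numpy as np
from scipy.stats import f_oneway

gelation_min = {
    "CS/b-GP":           [2.13, 2.04, 2.22, 2.10, 2.18],
    "1% ZnO-GO/CS/b-GP": [1.64, 1.30, 1.98, 1.55, 1.73],
    "3% ZnO-GO/CS/b-GP": [1.16, 1.06, 1.26, 1.12, 1.20],
}
for name, t in gelation_min.items():
    print(f"{name}: {np.mean(t):.2f} +/- {np.std(t, ddof=1):.2f} min")

stat, p = f_oneway(*gelation_min.values())
print(f"one-way ANOVA: F = {stat:.2f}, p = {p:.4f}")  # p < 0.05 -> significant
```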
SEM Observation

Post-gelation, hydrogels from each group underwent freezing at -80°C for 48 h, followed by drying at -60°C for another 48 h. The freeze-dried samples were then sputter-coated with gold and examined using SEM (FEI QUANTA FEG250) to assess their microstructure.

Infrared Characterization

The freeze-dried gel powders were subjected to Fourier-transform infrared spectroscopy (FTIR, Thermo Nicolet iS10, USA). Analyses were conducted in the wavenumber range of 4,000 to 400 cm⁻¹ to identify the organic functional groups present.

In vitro Antibacterial Performance Test

P. gingivalis (sourced from the Laboratory of Qingdao University) cultures were prepared on BHI medium and incubated at 37°C (80% N2, 20% CO2) for 48 h. The bacterial concentration was adjusted to 1 × 10⁸ CFU/ml using the optical density (OD) at 600 nm.

Inhibition Zone Test

100 μl of the bacterial culture was evenly spread over the surface of the blood agar medium. Antibacterial susceptibility disks (8 mm diameter) were saturated with each group's gel solution for 10 min and then placed on the agar surface. After a 48-h incubation at 37°C, the antibacterial effect was assessed by measuring the diameters of the inhibition zones around the disks. This process was replicated three times to ensure statistical reliability, followed by analysis.

Assessing the Thermogel's Effect on Bacterial Activity

After a 24-h co-culture with the thermogels, bacterial suspensions from each group were harvested. For comparative analysis, 100 μl of these suspensions was spread onto blood agar plates, and a bacterial solution without hydrogel treatment was used as the control.

SEM Observation for Antibacterial Mechanism

After a 6-h co-culture with the thermogels, the bacterial suspensions were collected and centrifuged to remove the supernatant, and the resulting biofilms were fixed in 2.5% glutaraldehyde. These samples were refrigerated at 4°C overnight, followed by dehydration through an ethanol gradient of 20%, 40%, 60%, 80%, and 100% for 30 min each. Subsequently, a 10 μl aliquot of the dehydrated bacterial suspension was placed on a silicon wafer and left to air-dry under sterile conditions. After gold sputter-coating, SEM imaging was conducted, randomly capturing three fields of view per sample.

Crystal Violet Staining for Biofilm Assessment

To evaluate the impact on bacterial biofilm growth, the gels were fixed with paraformaldehyde for 20 min and stained with crystal violet for 30 min. The absorbance of the solution was measured at 590 nm using a microplate reader, providing quantitative data on biofilm presence.

MTT Assay for Biofilm Removal Efficacy

The assay was conducted after a 2-day bacterial culture period. The gels were applied directly to the bacterial biofilms for 6 h and then removed. Each biofilm sample underwent MTT staining and was incubated for 4 h at 37°C and 5% CO2, and the resulting formazan crystals were quantified by measuring the absorbance at 490 nm.

Cytotoxicity Evaluation Using Mouse Preosteoblasts

In this study, the cytotoxicity of the thermosensitive gels was evaluated using mouse preosteoblasts (MC3T3-e1). Initially, the gel solution was sterilized and allowed to set in a 37°C incubator, after which extracts were collected over 24 h. For the cell tests, a 0.5 ml suspension of MC3T3-e1 cells, at a density of 5-10 × 10⁴ cells/ml, was seeded into 24-well plates and incubated for 24 h. After the initial incubation, the original culture medium was discarded and the wells were exposed to the gel extracts for continued culturing; standard MEM medium enriched with 10% FBS served as the control. Assessments were conducted at 24, 48, and 72 h, at which point the cultures were washed with PBS and given a medium containing 10% CCK-8. After a 2-h dark incubation, the OD at 450 nm was measured using a microplate reader to determine cell viability.

Statistical Analysis

Using IBM SPSS Statistics software (IBM, USA), the data collected from the free-bacteria and biofilm formation assays and the CCK-8 cytotoxicity tests were statistically examined after the experiments. A single-factor analysis method was employed, and findings yielding a p-value < 0.05 were considered statistically significant (paired sample t-test).

Observation of ZnO-NPs before and after Modification

TEM provided visual evidence of the nanostructure of the ZnO-NPs before and after the modification process. As illustrated in Fig. 2A, the TEM images revealed a distinctly uniform dispersion of the modified ZnO-NPs, indicating a successful alteration at the nano level that likely contributes to their enhanced performance in subsequent applications.

Structural Examination of ZnO-GO Composite

SEM was employed to study the ZnO-GO composite's morphology, particularly how the ZnO-NPs were distributed over the GO sheets. Fig.
2B depicts that the ZnO-NPs are consistently dispersed, coating the GO layers uniformly. This uniformity is crucial, as it signifies a well-structured composite, which may translate into improved properties, such as an increased surface area for reactions, in practical applications.

XRD Analysis of ZnO-GO

The ZnO-NPs exhibited characteristic diffraction peaks, (100), (002), (101), (102), (110), (103), and (112), corresponding to the respective crystal planes and confirming their crystalline nature. Notably, the diffraction pattern of GO highlighted a peak around 11.6 degrees, associated with its unique structure. In the composite ZnO-GO material, the XRD pattern, essentially consistent with that of the ZnO-NPs, also featured the distinctive GO peak, signifying the integration of GO in the ZnO-GO composite. This result (Fig. 2C) is pivotal, as it confirms the successful synthesis of ZnO-GO and suggests that the composite material maintains the inherent structures of its constituents, potentially harnessing their collective properties.

Temperature-Sensitive Properties of Hydrogels

The study observed a transition in the physical state of the hydrogels in response to temperature changes. The ZnO-GO/CS/β-GP thermosensitive gel demonstrates consistent thermosensitive properties, a rapid gelation transition at 37°C, and optimal fluidity at or below room temperature [17]. At room temperature, the CS/β-GP hydrogel exhibited a transparent liquid form. Introducing ZnO-GO into the hydrogel matrix (ZnO-GO/CS/β-GP) altered its appearance, reducing transparency and darkening the color, as seen in Fig. 3A. Upon exposure to a constant 37°C environment, all hydrogel formulations underwent solidification, highlighting the thermosensitive behavior crucial for applications requiring a response to body temperature, such as in the biomedical field.

Determination of Gelation Time and pH

The gelation times were significantly reduced by incorporating ZnO-GO into the hydrogels. As seen in Fig. 3B, the 3% ZnO-GO/CS/β-GP hydrogel solidified the fastest, at 1.16 ± 0.10 min, compared to 1.64 ± 0.38 min for 1% ZnO-GO/CS/β-GP and 2.13 ± 0.09 min for CS/β-GP (P < 0.05). Additionally, the pH measurements revealed a slight increase for the ZnO-GO/CS/β-GP hydrogels compared to the CS/β-GP baseline, remaining within a neutral range: the 1% ZnO-GO/CS/β-GP gel had a pH of 7.1, and the 3% gel a pH of 7.15. Overall, the pH levels varied from 7.06 to 7.19, remaining suitable for biological applications. The pH value of each hydrogel sample was measured using a pH meter before and after gelling, and no significant change in pH was observed post-gelling (Fig. 3C). The shorter gelation time brought about by the addition of ZnO-GO may be caused by increased cross-linking between CS and GO resulting from their hydrophobic interactions [21]. The mechanical properties and structural integrity of the gel did not change substantially following the integration of ZnO-GO. The pH range of the ZnO-GO gels, spanning from 7.06 to 7.19, is in close proximity to the physiological conditions found in the periodontal environment.

SEM Observation

SEM analysis revealed that all hydrogel samples maintained a consistent three-dimensional mesh structure with pore sizes ranging between 100 μm and 250 μm (Fig.
4A). Despite the introduction of ZnO-GO, the structural integrity, specifically the pore size and shape within the CS/β-GP hydrogels, remained unaffected. More importantly, within the ZnO-GO/CS/β-GP variants, ZnO-GO nanoparticles were discernibly well dispersed throughout the hydrogel's porous network (indicated by white arrows in Fig. 4A). Notably, a higher concentration of these inclusions was evident in the 3% ZnO-GO/CS/β-GP composition than in its 1% counterpart, suggesting a correlation between ZnO-GO concentration and nanoparticle distribution within the hydrogel matrix. The SEM analyses validated that the modification of the ZnO-NPs significantly enhanced their dispersion, resulting in consistent compositing onto the GO lamellae. The uniformity observed can be attributed to the robust interaction between ZnO-NPs and GO, which promotes intimate contact [22]. Moreover, through electrostatic forces and coordination reactions, zinc ions interact with oxygen atoms in negatively charged functional groups, which act as nucleation sites for ZnO-NP growth and ensure their uniform distribution on the GO sheets [18].

FTIR Analysis

The FTIR analysis revealed details of the composition of GO and confirmed the successful formulation of the ZnO-GO/CS/β-GP hydrogel. In the GO spectrum, distinctive peaks were identified, with a significant absorption at 3,444 cm⁻¹ indicative of the hydroxyl -OH group; other prominent features were the carbonyl C=O stretch observed at 1,629 cm⁻¹ and the alkoxy C-O stretch at 1,050 cm⁻¹. These findings confirmed the presence of various oxygen-rich functional groups, such as carbonyl, hydroxyl, and epoxy groups, on the surface of the samples. In the ZnO-GO spectrum, an absorption peak characteristic of the Zn-O stretching vibration was discerned around 437 cm⁻¹, verifying the integration of ZnO within the composite. The composite ZnO-GO/CS/β-GP spectrum maintained the characteristic peaks of CS/β-GP, confirming its synthesis. However, the formation of coordination bonds within the composite induced a notable red shift in the spectrum: the N-H stretching vibration of the amino groups near 3,400 cm⁻¹ migrated to a lower frequency around 3,200 cm⁻¹. This shift was attributed to perturbation of the electron cloud of the nitrogen in the amino groups, which diminishes the vibrational energy of N-H and consequently drives the absorption peak toward a lower frequency upon incorporation of the inorganic particles. The spectral modifications, especially the red shift, highlight the successful synthesis of the ZnO-GO/CS/β-GP hydrogel and a clear interaction between the organic and inorganic components of the composite, underscoring the formation of a new substance with altered chemical properties (Fig. 4B).

Colony Counting Method

The colony counting method demonstrated the potent antibacterial properties of the ZnO-GO/CS/β-GP hydrogels. Compared to the control and CS/β-GP groups, plates with ZnO-GO/CS/β-GP showed markedly fewer bacterial colonies, and this decline became more pronounced with increasing ZnO-GO concentration. Notably, the 3% ZnO-GO/CS/β-GP plate exhibited no bacterial growth, indicating an antibacterial rate nearing 100% (Fig. 5C and 5D).

Antibacterial Effects of Gels

The SEM images reveal distinct differences in bacterial morphology and integrity contingent on the treatment applied. In the control group, bacteria maintained their typical coccobacillus form, exhibiting a clear, smooth morphology (Fig.
6A.cg). In contrast, exposure to CS/β-GP instigated disruptions around some bacterial cell membranes, although the general spherical and rod-shaped morphology remained intact (Fig. 6A.a). A more pronounced effect was evident with the 1% ZnO-GO/CS/β-GP treatment, where bacteria underwent significant morphological deformation and displayed visible surface wrinkling (Fig. 6A.b, white arrow), suggesting impaired cell membrane integrity. This detrimental impact escalated further with the 3% ZnO-GO/CS/β-GP treatment, where bacteria exhibited collapsed cell membranes with apparent cytosol exudation (Fig. 6A.c, white arrow), indicating severe structural compromise and loss of cellular contents.

The Inhibition of the Metabolic Activity of Bacterial Biofilm

The metabolic activity within the bacterial biofilms was notably inhibited by introducing ZnO-GO into the CS/β-GP hydrogel. Specifically, biofilms exposed to the 1% and 3% ZnO-GO/CS/β-GP formulations showed significantly reduced metabolic activity compared to those treated with standard CS/β-GP. The reduction was especially pronounced with the 3% ZnO-GO/CS/β-GP composition, which curtailed the biofilm's metabolic activity by over 50%, underscoring the enhanced efficacy of ZnO-GO in disrupting biofilm vitality (Fig. 6C).

In our formulation, the 3% ZnO-GO/CS/β-GP variant demonstrated near-total antibacterial efficacy due to the ZnO-GO composite enhancing contact between the material and the bacterial film. GO further augments ZnO-NP dispersion, modulates the ZnO-NP dissolution rate, and ensures a sustained zinc ion release [19]. This close interaction with P. gingivalis cell membranes contributes significantly to the antibacterial action [20,21], as ZnO-NPs can eliminate bacteria through electrostatic interactions [22], underlining the potent antibacterial attributes of the ZnO-GO/CS/β-GP gel.

Our observations indicated that the bacteria were coated with the gel materials, promoting direct contact between ZnO-GO and the bacterial cell membranes [23]. Application of the ZnO-GO/CS/β-GP gel induced morphological changes in P. gingivalis and its biofilms. While untreated samples maintained a rod-shaped morphology with unscathed surfaces, exposure to the 1% ZnO-GO/CS/β-GP gel precipitated significant morphological changes, with extensive bacterial cell membrane damage and wrinkling. The 3% ZnO-GO/CS/β-GP treatment produced even more significant effects, including complete cytoplasmic leakage and membrane collapse, inhibiting bacterial proliferation and leading to bacterial eradication.

Cytotoxicity Assay

The cytotoxicity assessment utilizing MC3T3-e1 cells demonstrated that none of the hydrogel groups showed substantial toxic effects. After incubating the cells for 24, 48, and 72 h with extracts from each hydrogel type, viability was determined using the CCK-8 assay. As illustrated in Fig. 7, the outcomes for all time intervals and hydrogel formulations showed high cell survival rates, comparable to those of the control group. The survival rate was notably close to 100%, indicating that the hydrogels were biocompatible.
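Two small calculations underlie antibacterial-rate and viability figures of the kind reported above; the sketch below shows them with invented numbers. The blank-corrected viability formula and the (1 - treated/control) colony-count rate are standard conventions assumed here, not values or code from the study.

```python
# Illustrative antibacterial rate from colony counts and cell viability
# from CCK-8 optical densities; all numbers are invented.
import numpy as np
from scipy.stats import ttest_rel

cfu_control = np.array([412, 398, 425])        # control plate colony counts
cfu_gel3    = np.array([1, 0, 2])              # 3% ZnO-GO/CS/b-GP plates
rate = (1 - cfu_gel3.mean() / cfu_control.mean()) * 100
print(f"antibacterial rate ~ {rate:.1f}%")     # approaches 100%

od_control = np.array([1.02, 0.98, 1.05, 1.00])   # OD450, control wells
od_extract = np.array([1.00, 0.97, 1.04, 1.01])   # OD450, gel-extract wells
od_blank   = 0.10
viability = (od_extract - od_blank) / (od_control - od_blank) * 100
t, p = ttest_rel(od_extract, od_control)       # paired comparison, as in the paper
print(f"viability ~ {viability.mean():.1f}%, p = {p:.3f}")
```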
The 1% and 3% ZnO-GO/CS/β-GP gels developed in this research demonstrated excellent cytocompatibility. CCK-8 assays indicated a progressive increase in cell growth across all groups, evidencing no substantial toxicity to mouse preosteoblasts. This favorable outcome is potentially attributable to the even distribution of ZnO-NPs facilitated by GO. Research studies have demonstrated that GO can enhance preosteoblast proliferation [24] and increase the biological activity and osteogenic differentiation potential of stem cells [25].

Furthermore, preosteoblasts cultivated in a GO-enriched medium exhibited elevated alkaline phosphatase (ALP) activity, a primary indicator of osteogenic differentiation, implying a role for GO in promoting osteogenic differentiation [26]. Intriguingly, during our experiments we observed that both the 1% and 3% ZnO-GO/CS/β-GP gels contributed to the proliferation of mouse preosteoblasts, which merits in-depth investigation in future studies. However, the long-term biocompatibility of 3% ZnO-GO/CS/β-GP was not explored in this experiment. Although 3% ZnO-GO/CS/β-GP showed superior antibacterial effects, its impact on the broader oral microbiota remains unexplored. Future studies should investigate the long-term biocompatibility of 3% ZnO-GO/CS/β-GP and its effects on a wide range of oral microbiota; its therapeutic effects and potential should also be explored outside the in vitro environment, including in clinical trials.

Conclusion

The 3% ZnO-GO/CS/β-GP formulation, in particular, exhibited significant antibacterial activity against P. gingivalis. Importantly, this concentration of the ZnO-GO/CS/β-GP hydrogel maintained high biocompatibility, showing no significant toxicity toward mouse preosteoblasts. Due to the synergistic effect of the ZnO-NPs and GO, Zn²⁺ released by the ZnO-NPs accumulates selectively on the GO sheet, thereby enhancing the probability of bacteria coming into contact with Zn²⁺. The electrostatic interaction between ZnO-NPs and the bacterial surface leads to direct bactericidal activity. The close proximity of P. gingivalis to the ZnO-NPs on GO promotes increased permeability of the bacterial membrane and a localized concentration of Zn²⁺ around the bacteria, ultimately resulting in bacterial death. Furthermore, incorporation of ZnO-GO/CS/β-GP may impede bacterial nutrient uptake from the surrounding media. In future applications, 3% ZnO-GO/CS/β-GP holds promise as a disinfectant for various substrate coatings: its highly antibacterial, low-toxicity formula effectively inhibits the growth, reproduction, and survival of bacteria near implants and other medical devices.

The 3% ZnO-GO/CS/β-GP thermosensitive hydrogel therefore emerges as a potent biological agent for preventing and treating diseases associated with P. gingivalis plaque biofilms, potentially serving as an adjuvant therapy for the management and prevention of peri-implantitis.

Fig. 2. TEM images of ZnO-NPs, XRD analysis, and SEM images of ZnO-GO. (A) TEM images of ZnO-NPs before and after modification: a, before modification at 5000x; b, before modification at 10000x; c, before modification at 20000x; d, after modification at 5000x; e, after modification at 10000x; f, after modification at 20000x. (B) SEM images of ZnO-GO: a, 2000x; b, 5000x. (C) XRD pattern of ZnO-GO.
Impact of mini-driver genes in the prognosis and tumor features of colorectal cancer samples: a novel perspective to support current biomarkers

Background

Colorectal cancer (CRC) is the second leading cause of cancer-related deaths, and its development is associated with gains and/or losses of genetic material, which leads to the emergence of main driver genes with a higher mutational frequency. In addition, there are other genes whose mutations have weak tumor-promoting effects, known as mini-drivers, which could aggravate the development of oncogenesis when they occur together. The aim of our work was to use computational analysis to explore the survival impact, frequency, and incidence of mutations of possible mini-driver genes to be used for the prognosis of CRC.

Methods

We retrieved data from three sources of CRC samples using the cBioPortal platform and analyzed the mutational frequency to exclude genes with driver features and those mutated in less than 5% of the original cohort. We also observed that the mutational profile of these mini-driver candidates is associated with variations in expression levels. The candidate genes obtained were subjected to Kaplan-Meier curve analysis, comparing mutated and wild-type samples for each gene using a p-value threshold of 0.01.

Results

After gene filtering by mutational frequency, we obtained 159 genes, of which 60 were associated with a high accumulation of total somatic mutations, with log2(fold change) > 2 and p-values < 10⁻⁵. In addition, these genes were enriched in oncogenic pathways such as epithelial-mesenchymal transition, hsa-miR-218-5p downregulation, and extracellular matrix organization. Our analysis identified five genes with possible implications as mini-drivers: DOCK3, FN1, PAPPA2, DNAH11, and FBN2. Furthermore, we evaluated a combined classification in which CRC patients with at least one mutation in any of these genes were separated from the main cohort, obtaining a p-value < 0.001 in the evaluation of CRC prognosis.

Conclusion

Our study suggests that the identification and incorporation of mini-driver genes, in addition to known driver genes, could enhance the accuracy of prognostic biomarkers for CRC.

INTRODUCTION

Colorectal cancer (CRC) is a significant global health problem, ranking as the third most diagnosed neoplasia and the second leading cause of cancer-related death worldwide (Bray et al., 2018; Sung et al., 2021). In 2020, there were approximately 1.93 million new cases of CRC, representing 10% of all cancer cases (Sung et al., 2021). Despite advances in detection and treatment, CRC incidence and mortality rates increased by 7.2% from 2018 to 2020. Therefore, it is essential to continue research into the underlying mechanisms driving CRC progression (Bray et al., 2018; Sung et al., 2021).

KRAS, NRAS, BRAF, TP53, and APC are considered driver genes for CRC, since a few pathogenic mutations in any of these genes are sufficient to develop a tumor (Thierry et al., 2014). However, defining a single group of genes as the drivers of all tumors is challenging and explains only a small number of cancer cases (Thierry et al., 2014). This challenge arises because the description of any genetic variation as a pathogenic mutation depends on factors such as its impact on the translated protein (missense, nonsense, etc.) or the number of nucleotides affected (single-nucleotide, insertion, deletion, etc.).
To differentiate deleterious variations from polymorphisms, current genetic definitions consider mutations to be any genetic variations with a reduced frequency (<1%) in a healthy population (Al-Koofee & Mubarak, 2020). However, these definitions do not account for the possibility of new transcriptional switches that can support novel consequences of previously known polymorphisms, as recently described in CRC (Abdi, Latifi-Navid & Latifi-Navid, 2022). The identification of coding genes, lncRNA, circRNA, and miRNA through transcriptomic studies has broadened the concept of mini-driver genes to genes or genomic regions that may collectively be associated with poor prognosis in cancer (Yang et al., 2018; Wu et al., 2020). Nevertheless, considering the large group of genes associated with cancer features and prognosis whose function is not elucidated, it is essential to propose a strategy that includes all types of genetic variations as mutations for analyzing candidate genes capable of supporting current driver genes. To this end, we adopt the concept of mini-driver genes, which refers to low-frequency genetic alterations with a relatively weak tumor-promoting effect (Castro-Giner, Ratcliffe & Tomlinson, 2015). There are established criteria for identifying mutated genes that may be considered mini-driver genes. Firstly, individual mutations in a mini-driver gene should provide a growth advantage for cancer cells compared to normal cells, although this is not necessarily critical for tumor development. Secondly, mini-driver genes must be present in a small proportion of tumors. Thirdly, they may be present in subclones because they have a relatively weak selective advantage, resulting in less probability of selective sweeping. Fourthly, mini-driver genes should show parallel or convergent evolution between cancer subclones and between cancers of the same type. Finally, they should be involved in processes such as gene expression regulation, mRNA stability, transcriptional changes, DNA methylation, and other non-coding genomic features (Castro-Giner, Ratcliffe & Tomlinson, 2015; van Ginkel, Tomlinson & Soriano, 2023). Mini-driver genes play a significant role in tumor diversification, but the mechanisms underlying their effects are not well understood. One possible mechanism is that mini-drivers may help to maintain tumor homeostasis by counteracting the deleterious effects of some passenger mutations (Li & Thirumalai, 2016; Cuykendall, Rubin & Khurana, 2017). In some cases, non-coding mutations could be called "mini-drivers" because they alter transcriptional regulation, mRNA translation and stability, splicing control, and chromatin structure, leading to altered gene expression that favors tumor progression (Elliott & Larsson, 2021). Another possible mechanism by which mini-driver genes contribute to tumor development is through working together with driver genes/mutations. Large-scale genome-wide association studies (GWAS) have shown that even the most significant loci explain only a fraction of the predicted genetic variation for typical traits (Boyle, Li & Pritchard, 2017). Therefore, mini-driver genes may explain how polygenic effects provide a means by which heterogeneous mutation patterns can generate distinctive changes consistent with the phenotype observed in tumors (Bennett et al., 2018).
Clinical studies have identified mini-driver genes as prognostic biomarkers in cancer (Bennett et al., 2018), and our group evaluated mini-driver features in selected genes (Campos Segura, 2022) to focus on their contribution to CRC progression. In this study, we propose a strategy to identify potential mini-driver genes in CRC using Next Generation Sequencing (NGS) data and determine whether they could serve as prognostic markers, providing insights into their role in CRC progression. Database filtering and mutational frequency analysis We retrieved data from Next Generation Sequencing (NGS) experiments and clinical information of colorectal cancer (CRC) patients using the cBioPortal platform (https://www.cbioportal.org/) ("cBioPortal for Cancer Genomics"; Cerami et al., 2012). Genomic data were only utilized to provide mutational information, while transcriptomic data, when available, were used to obtain mutational status, prediction of copy number alterations, and gene expression levels. Three cohorts of colorectal adenocarcinoma were selected for analysis, including the Dana-Farber Cancer Institute (DFCI) cohort (n = 619) (Giannakis et al., 2016), the Pan-Cancer Atlas from The Cancer Genome Atlas (TCGA) cohort (n = 594), and the Memorial Sloan Kettering Cancer Center (MSKCC) cohort (n = 138) (Brannon et al., 2014). All data sets were generated using Illumina HiSeq sequencers, including information for 18,215 genes. Clinical data are summarized in Table 1. Characterization of somatic mutational profile in CRC The cBioPortal platform was used to generate a report that displays the number of genetic variants detected in each gene, the number of patients with at least one variant per gene, and the total number of patients with available mutational information per gene. For this study, we included all somatic genetic variations, encompassing both SNPs (≥1% frequency in populations) and pathological mutations (<1% frequency in populations), as mutations. We then calculated the mean number of mutations per gene and patient (Table S1). Using these values, we established four groups of genes. For this, we considered the power of the sample (number of patients), the percentage of participating genes, and the ranking of common driver genes in CRC. As a result, we classified the genes as rarely mutated (<7%), lowly mutated (7-10%), moderately mutated (11-50%), and highly mutated (>50%). To visualize these results, we designed a scatterplot using the R software v.4.2.0 (R Core Team, 2022) with the ggplot2 package (Wickham, 2011). Association between mutational status and tumor mutational burden After filtering the prior data, we selected a putative group carrying mini-driver genes, consisting of genes mutated in between 7% and 10% of patients. Based on these genes, we compared the total number of variations (tumor mutational burden, TMB) per patient according to their mutational status per gene. In our comparison, we considered any patient with at least one somatic variation in a specific gene as "mutated". To compare TMB, we utilized the Mann-Whitney test and calculated the fold change between mutated and wild-type groups per gene. We then adjusted the p-values using the Benjamini-Hochberg (BH) method (Benjamini & Hochberg, 1995). Finally, we established a p-value threshold (10⁻⁵) for selecting the top genes whose mutational status was associated with changes in TMB.
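To make this step concrete, the following is a minimal sketch of the TMB comparison in Python (the paper's analysis was done in R); the data-frame layout, column names, and gene list are hypothetical placeholders, not the authors' actual pipeline.

```python
import pandas as pd
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

def tmb_by_mutation_status(df: pd.DataFrame, genes: list) -> pd.DataFrame:
    """df: one row per patient, a 'tmb' column (total mutation count) and
    one 0/1 mutational-status column per candidate gene (assumed layout)."""
    rows = []
    for gene in genes:
        mut = df.loc[df[gene] == 1, "tmb"]
        wt = df.loc[df[gene] == 0, "tmb"]
        if mut.empty or wt.empty:
            continue  # gene not informative in this cohort
        _, p = mannwhitneyu(mut, wt, alternative="two-sided")
        rows.append({"gene": gene,
                     "fold_change": mut.mean() / wt.mean(),
                     "p_value": p})
    res = pd.DataFrame(rows)
    # Benjamini-Hochberg adjustment, as described in the text
    res["p_adj"] = multipletests(res["p_value"], method="fdr_bh")[1]
    return res.sort_values("p_adj")
```

Genes passing the adjusted p-value threshold (the paper uses 10⁻⁵) would then be carried forward as putative mini-drivers.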
To visualize our findings, we plotted the results using the EnhancedVolcano package in the R software ("EnhancedVolcano"; Blighe, Rana & Lewis, 2018). Association between mutational status and gene expression/copy number variations We investigated whether the mutational status of the selected genes was associated with gene expression levels and copy number variations. To accomplish this, we used the TCGAretriever package ("TCGAretriever: Retrieve Genomic and Clinical Data from TCGA"; Fantini, 2019) to verify the normalized expression levels of genes and putative copy number alterations predicted by GISTIC (Mermel et al., 2011) in colon and rectal adenocarcinoma samples. We applied the same criteria as in the previous step to dichotomize the patient population for each selected gene (mutated or wild-type) and compared the expression levels of genes or copy number alterations. We then used the Mann-Whitney test to compare these values, taking into account the distribution of expression levels. Finally, we represented all samples in boxplots for each pair of genes. The Y-axis displays expression levels or the number of altered copies of the first gene, while the X-axis shows patient categories based on the mutational status of the second gene. Gene enrichment Gene lists were obtained after applying the Enrichr tool (Xie et al., 2021). We used all available databases from the Transcription (17 databases), Ontology (25 databases), and Pathway (eight databases) modules. After loading gene lists, we only considered relevant pathways where more than three genes were associated with p < 0.001. Survival analysis To investigate the prognostic value of the mini-drivers, we employed Kaplan-Meier curves and Cox proportional hazard regression analysis (Borgan, 2001). We then identified the mini-drivers that potentially play a role in disease progression, including DOCK3, FN1, PAPPA2, DNAH11, and FBN2, and established a gene panel based on these genes. Cox regression analysis was performed, including age (as continuous), sex, and tumor stage (AJCC, American Joint Committee on Cancer) as potential confounding variables. To perform these analyses, we utilized the survival and survminer packages of R software version 4.2.0 (R Core Team, 2022). RESULTS Identifying mini-drivers in CRC based on the mutational frequency As summarized in Fig. 1, to identify potential mini-drivers in CRC, we analyzed the gene mutation frequency and the statistical power. Then, we classified the genes into four groups (Table S1, Fig. 2): rarely mutated (≤7%, 17,993 genes), lowly mutated (7-10%, 159 genes), moderately mutated (11-50%, 62 genes), and highly mutated (>50%, two genes). The rarely mutated group includes genes with at least one variation in fewer than 59 patients, which represents less than 5% of the patients with available information for mutational profiling (980 patients from the three cohorts). To avoid non-representative results in further analyses, we excluded this group of genes. The moderately and highly mutated groups include genes considered as drivers, such as APC with 66%, TP53 with 56%, KRAS with 35%, BRAF with 16%, and ARID1A with 11%, among others. We focused on exploring the genes in the lowly mutated group (159 genes) as potential mini-drivers.
We analyzed these genes with the Enrichr database and found that they are mainly associated with hsa-miR-218-5p regulation (miRTarBase 2017 database, adjusted p-value = 1.9 × 10⁻⁴), extracellular matrix organization (Reactome 2022 database, adjusted p-value = 9.1 × 10⁻¹¹), and epithelial-mesenchymal transition (EMT, MSigDB Hallmark 2020 database, adjusted p-value = 2.5 × 10⁻⁸). Putative mini-driver genes are associated with high mutation rates In order to assess the impact of mini-drivers on tumor progression, we analyzed the polygenic effect of the 159 genes selected from the previous step. Our goal was to identify which of these genes were associated with high mutation rates. As shown in Fig. 3, we found that 60 genes had a significantly higher TMB when mutated compared to their respective wild-type group, with an increase ranging from 5.4- to 8.7-fold (p-value < 10⁻⁵). Among the most statistically significant genes associated with high TMB were MUC5B, DNAH7, DOCK3, and BMPR2. Meanwhile, we identified SYNE2, COL7A1, NOTCH3, and SPEG as the genes with the highest mutation counts. Mini-drivers are associated with specific gene signatures in CRC After identifying the genes associated with high TMB (Fig. 3), we compared the expression levels and copy number alterations with the mutational status of these genes. We discovered 46 genes whose mutational status was linked to changes in the expression levels of other genes (p < 0.05, Table S2). Remarkably, the mutated group of the BMPR2 gene alters the expression of nine genes (ACVR2A, DOCK3, MUC5B, FLNA, DNAH8, SYNE2, FBN2, ITPR3, and DLC1), whereas the mutated groups of the FLNB, ACVR2A, SIPA1L3, and CELSR1 genes altered the levels of six other genes. Table 2 provides a summary of the 43 relevant gene pairs (log₁₀(FC) > 0.4, p < 0.01), while Fig. 4 displays a selection of these findings. Among them, we found that under-expression of MYH11 was related to its own mutational profile (p = 0.0043), UBR5 levels were decreased in patients with mutations in FLNB (p = 0.0012) or COL7A1 (p = 0.0075), whereas mutated FBN3 was associated with low expression of FBN2 (p = 0.0079). Additionally, we observed reduced expression levels of the ZNF536 and CUX1 genes in samples mutated for the FLNB and TMEM132D genes, respectively (p < 0.008). Furthermore, we compared the mutational status of these 60 genes with copy number variations (CNVs). Specifically, we compared the number of altered copies (gains or losses) between mutated and WT groups for these genes. We identified 12 genes whose mutational status was linked to copy number alterations in other genes (p < 0.05, Table S3). Interestingly, the SLITRK5-mutated group was related to copy number alterations in five genes (NOTCH3, FBN3, SIPA1L3, PTPRS, and CUX1), particularly in NOTCH3 and FBN3, where the SLITRK5-mutated group showed large ratios (greater than 4) compared to WT patients (Fig. 5). Overall, our results demonstrate that MUC5B, TMEM132D, SYNE2, POLE, CUX1, and NOTCH3 play critical roles as effectors with altered expression levels (Table S2) or copy number variations (Table S3) in the context of mini-driver genes. Mini-driver genes contribute to CRC prognosis We used the 60 genes obtained to analyze their performance as biomarkers for survival in CRC (Fig. 3). We applied this analysis to 201 CRC patients that had complete information on overall survival rates and mutational profiles of the 60 genes.
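As a rough illustration of this survival workflow, here is a minimal Python sketch using the lifelines package (the paper used the R survival and survminer packages); column names and encodings are hypothetical, and categorical covariates are assumed to be numerically encoded.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

def panel_survival(df: pd.DataFrame):
    """df columns (assumed): 'os_months', 'event' (1 = death),
    'panel_mut' (1 if >= 1 mutation in DOCK3/FN1/PAPPA2/DNAH11/FBN2),
    plus 'age', 'sex', 'stage' as numerically encoded confounders."""
    mut = df[df["panel_mut"] == 1]
    wt = df[df["panel_mut"] == 0]

    # Kaplan-Meier curves for each group
    for label, grp in [("mutated", mut), ("wild-type", wt)]:
        KaplanMeierFitter().fit(grp["os_months"], grp["event"],
                                label=label).plot_survival_function()

    # log-rank comparison of the two curves
    lr = logrank_test(mut["os_months"], wt["os_months"],
                      event_observed_A=mut["event"],
                      event_observed_B=wt["event"])
    print("log-rank p =", lr.p_value)

    # multivariate Cox model with confounders, as in the paper
    cph = CoxPHFitter()
    cph.fit(df[["os_months", "event", "panel_mut", "age", "sex", "stage"]],
            duration_col="os_months", event_col="event")
    cph.print_summary()  # hazard ratio for 'panel_mut' is the quantity of interest
```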
The mutational status of five genes, DOCK3, FN1, PAPPA2, DNAH11, and FBN2, was associated (p < 0.05) with poor survival in CRC patients (Fig. 6A). We observed a survival rate of ~25% or less for the mutated group, while the wild-type (WT) group retained more than 75% survival after 5 years of follow-up. When we then constructed a panel with these genes, we observed an improvement in the survival-rate difference between the two groups (median OS mutated = 38.5 months vs. median OS WT = not reached; log-rank, p < 0.0001; Fig. 6B). Interestingly, we were able to identify a gene expression signature comprising 16 genes that characterizes the mutated group (p < 0.05 and absolute FC > 1.5). Among these genes, DOCK3, FN1, ADAMTS2, AHNAK, AHNAK2, DNAH7, NBEA, SACS, SMAD4, and VWF were upregulated in the mutated group, whereas AMER1, DIDO1, LRP1, LRP1B, RNF43, and TG were downregulated (Fig. S1). Finally, we tested the ability of our mini-driver gene panel to predict outcome in the presence of possible confounding variables (Fig. 6C). Our multivariate Cox regression demonstrated that our gene panel maintains its usefulness in predicting prognosis independent of age, sex, and AJCC tumor staging (HR = 2.92, p = 0.002). Lastly, Fig. 7 presents a visual abstract that summarizes all our findings in this study and how our strategy could propose mini-driver genes as an additional set of markers to support current driver genes in CRC. DISCUSSION In this study, we utilized three reliable databases of colorectal cancer patients to propose novel perspectives on the analysis of putative mini-driver genes. These studies provide crucial and representative information on colorectal cancer. For instance, DFCI's study focused on molecular characterization utilizing whole exome sequencing (WES) to gather tumor genomic data alongside detailed pathological and clinical information (Giannakis et al., 2016). TCGA's comprehensive analysis, using around 10,000 samples representing 33 cancer types, was employed for our interest in gene expression analysis of the CRC samples. Finally, MSKCC's study analyzed inter- and intratumoral heterogeneity as evidence in the development of CRC (Brannon et al., 2014). As mini-driver genes have a low mutational frequency, their impact alone is insufficient to generate a significant advantage for tumor cells. Badr et al. (2022) state that multiple accumulated weak mutations can combine into a polygenic driver (acting as a main driver) with enough impact to modify cellular function and patient prognosis, as depicted in Fig. 6. Our findings indicate that patients with mutations in the DOCK3, FN1, PAPPA2, DNAH11, and FBN2 genes had a shorter survival rate compared to patients without mutations. These results suggest that these alterations lead to cell dysregulation, as seen in other cancer types (Irmak-Yazicioglu, 2016; Wilk & Braun, 2018; Furuya et al., 2021). The DOCK3 gene has been reported to participate in various processes related to invasion, migration, and metastasis in cancer cells (Hofer et al., 2017; Kotelevets & Chastre, 2020; Lu et al., 2020). However, there is limited research on the involvement of the MUC5B gene in CRC, with recent evidence showing high expression of this gene in elderly CRC patients, particularly in poorly differentiated tumors (Iranmanesh et al., 2021).
Extracellular vesicles with high levels of phosphorylated and expressed FN1 have been identified as potential prognostic factors and therapeutic targets in CRC (Qi et al., 2020; Zheng et al., 2020). It has been postulated that FN1 could function as a promoter gene in non-canonical pathways of mini-driver genes and their mutations, with upregulation of FN1 by the HMGA2 gene contributing to a metastatic profile in CRC cells. Mutations in the PAPPA2 gene have been associated with tumor progression and treatment of digestive tumors, although its role in CRC is not yet understood (Miao et al., 2022). Similarly, the rs2285947 polymorphism in the DNAH11 gene has been linked to an increased risk of several cancer types, suggesting its potential contribution as a mini-driver gene in carcinogenesis (Wang et al., 2015). The FBN2 gene is hypermethylated in CRC tissue and serum samples from patients with CRC and liver metastases, and its expression is directly correlated with shorter survival rates in colon cancer patients, suggesting a possible role as a tumor suppressor gene (Leygo et al., 2017; Wang et al., 2022). Although methylation data for all genes in a representative number of CRC samples were not available for this study, we anticipate that upcoming omics data will provide more information on methylation levels in CRC patients. (Figure 7: a visual abstract of the study. Since mutations in driver genes are characterized by canonical criteria and may not be sufficient to explain all cancer cases, we evaluated a strategy for proposing additional genes using the mini-driver hypothesis.) Other genes, such as NOTCH3 and SLITRK5, have clear contributions to tumor development. NOTCH3 promotes tumor cell survival and proliferation, induces EMT and cancer stem cell (CSC) properties, and has been linked to various clinical and pathological features, including larger tumor size, advanced TNM stage, higher pathological grade, and tumor metastasis (Pastò et al., 2014; Aburjania et al., 2018; Xiu et al., 2021). Frequent genetic, epigenetic, and transcriptional changes have been observed in the SLITRK5 gene in colorectal neoplasias (Hesson et al., 2016). Nevertheless, genes such as CUX1 have controversial reports. It has been recently discovered that CUX1 is a tumor suppressor paradoxically overexpressed in tumor cells (Cancer Genome Atlas Network, 2012; Jo et al., 2017; Liu et al., 2020). Overall, a mini-driver gene approach could be a useful tool to support further analysis (sense and antisense strands) of these controversial regions to understand their involvement in tumor growth. According to Dressler et al. (2022), when a driver gene mutates, it significantly impacts cancer cell growth, providing the cells with a fundamental advantage in their development. However, our classification of patients as mutated (at least one mutation in one of the five genes) was associated with decreased survival time. The universe of mutations present in these genes may have different pathways to favor tumor proliferation. Nonetheless, the classification of somatic mutations is affected by the initial analysis. Typically, researchers detect all somatic mutations but exclude those highly present in large populations (Timmermann et al., 2010; Ma et al., 2020). This can limit the analysis to a reduced number of targets (Leedham & Tomlinson, 2012; Lee-Six et al., 2019) based on the pathological effect related to variations rarely present in healthy individuals.
However, in cancer, aberrant expression levels and unexpected pathways (Hanahan, 2022) may support new functions for these polymorphisms, which are usually discounted when found as somatic variations. Therefore, our study suggests analyzing all somatic mutations to assess cancer prognosis, combining traditionally evaluated driver genes and mutations with additional tumor-promoting regions (mini-drivers). We believe that even silent mutations may have additional functions related to the expression of non-coding RNA expressed from the same genomic locations. Li & Thirumalai (2016) argue that when the main drivers lose their mutagenic capacity, mini-drivers help restore the fitness advantage conferred by the drivers. Additionally, mini-drivers could confer a fitness advantage on cancer cells, especially when they accumulate during tumor progression due to stochastic genetic mutations (Li & Thirumalai, 2016). To increase the number of therapy proposals and biomarkers discovered for colorectal cancer (CRC), we suggest exploring candidate mini-driver regions in a gene-panel strategy. We used NGS-based information to analyze somatic mutational profiles from a broader perspective to evaluate a new strategy for predicting biomarkers for CRC. However, our study has limitations. We have no control over the data collection, information, and permissions declared by the patients. Nonetheless, the databases used are supported and validated by accredited public and private health institutions. Furthermore, many of the proposed biomarkers for CRC have limited specificity and require further experimental research. Nevertheless, our methodology uses critical and rigorous settings to determine putative mini-driver genes, providing deeper insights into colorectal carcinogenesis while mitigating the risk of presenting biased information. Therefore, we expect that our findings will support future research to find prognostic markers for CRC using solid and liquid biopsies by testing panels of driver and mini-driver genes. ADDITIONAL INFORMATION AND DECLARATIONS Funding This work was supported by the Universidad Nacional Federico Villarreal (Lima, Peru) through Resolution No. 9343-2021-UNFV granted to Anthony Vladimir Campos Segura and Prof. Ana Isabel Flor Gutiérrez Román as part of the incentives program for undergraduate theses. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
5,514.6
2023-05-16T00:00:00.000
[ "Medicine", "Biology" ]
Localization of Alternating Magnetic Dipole in the Near-Field Zone with Single-Component Magnetometers Tri-axis magnetometers are widely used to measure the magnetic field in magnetic localization technology. However, the magnetic field measurement precision is influenced by the nonorthogonal error of tri-axis magnetometers. A locating model of the alternating magnetic dipole in the near-field zone with single-component magnetometers is proposed in this paper. Using the vertical component of the low-frequency magnetic field acquired by at least six single-component magnetometers, the localization of an alternating magnetic dipole can be reduced to the solution of a class of nonlinear unconstrained optimization problems. In order to calculate the locating information of the alternating magnetic dipole, a hybrid algorithm combining the Gauss–Newton algorithm and the genetic algorithm was applied. A theoretical simulation and a field experiment for the localization of an alternating magnetic dipole source were carried out. The positioning result is stable and reliable, indicating that the locating model has good performance and could meet the requirements of actual positioning. Introduction Magnetic positioning technology, with its all-weather capability, low power consumption, and simple signal processing, has gradually drawn attention. Due to the lower noise level of modern magnetic sensors and their higher measurement accuracy, it has become easier to detect weak magnetic signals. Using the magnetic field signal of the target detected by a magnetic sensor or magnetic sensor array, the position information and motion state of the target are obtained by data inversion, which can be widely used in the identification of vehicles [1], monitoring of magnetic fields [2,3], prediction of earthquakes [4], diagnosis of pipeline failure [5], and exploration of crude oil [6]. Because the positioning algorithm of a magnetic target based on the static magnetic field is greatly influenced by the interference of the geomagnetic environment and other magnetic sources, some researchers have studied the localization of alternating magnetic dipole sources. In 2001, Paperno et al. proposed a method for magnetic position and orientation tracking. Based on two-axis generation of a quasi-static rotating magnetic field and three-axis sensing, two mutually orthogonal coils fed with phase-quadrature currents comprise the excitation source, which can be equivalent to a mechanically rotating magnetic dipole [7]. In 2006, Nara et al. presented a simple reconstruction formula for the localization of a magnetic dipole. In order to calculate the locating information, the dipole position is expressed in terms of the magnetic field and its spatial gradients at a single place [8]. In 2010, Plotkin et al. developed a new scleral search coil (SSC) to track the target. The theoretical derivation and numerous simulations have shown that the proposed method can obtain the orientation and location information of the SSC [9]. In 2013, Sheinker et al. proposed a 3D locating method using beacons of a low-frequency magnetic field. The method can be used in many applications, such as indoor robot navigation and underground cavity mapping [10]. Using beacons of a low-frequency magnetic field, the authors proposed a method of remote tracking a year later [11]. In 2016, Pasku et al. described a positioning system based on low-frequency magnetic fields.
The system could accommodate an arbitrary number of users without any additional infrastructure [12]. In 2015, Li et al. proposed an approach based on the genetic algorithm to search for the location of the dipole. Only an electric field sensor in seawater is needed to measure the modulus of the electric field intensity at the corresponding positions. Then, the position of the dipole can be determined accurately [13]. In 2017, the author proposed a positioning method for moving objectives with alternating magnetic fields using coherent demodulation. However, the magnetic fields were measured by using a tri-axis magnetometer. The magnetic field measurement precision is influenced by the nonorthogonal error of tri-axis magnetometers [14,15]. In 2018, Dai et al. proposed a new 6D tracking method using 3D linear motion, 2D rotational motion, and 3D orientation tracking. The hybrid method of magnetic tracking and inertial sensing verified that the full 6D pose could be used to track the target accurately [16]. In 2020, Song et al. proposed a positioning method for low-frequency magnetic beacons based on the genetic algorithm. In a wide-range measurement, a theoretical simulation and a field experiment were carried out to show the accuracy of localization of the target [17]. A positioning method for an alternating magnetic dipole in the near-field zone with single-component magnetometers is introduced in this paper. A measuring array consisting of at least six single-component magnetic sensors was used to collect the magnetic field emitted by the alternating magnetic dipole. Through the process of coherent demodulation, the varying curve of the alternating magnetic field could be obtained. A hybrid algorithm combining the Gauss-Newton algorithm and the genetic algorithm was applied to obtain the track of a moving target, which showed good agreement with the actual motion information [18][19][20][21][22]. The Vertical Component of the Alternating Magnetic Dipole. The alternating magnetic dipole source is a transmitting coil that radiates a low-frequency sinusoidal electromagnetic signal, and the working frequency of the signal is set as a fixed frequency. For example, the working frequency of the signal ranges from 100 Hz to 1000 Hz, and the corresponding wavelength is between 3 × 10⁵ and 3 × 10⁶ meters. The geometry of the radiating coil is much smaller than its working wavelength, so the radiating coil can be treated as equivalent to a magnetic dipole. The schematic diagram of the magnetic dipole in the cylindrical coordinate system is shown in Figure 1. The radiation magnetic moment of the magnetic dipole is expressed by formula (1), in which μ0 is the magnetic permeability of the medium, I is the current intensity in the coil, and S is the cross-sectional area of the coil, whose direction is the normal direction of the right-handed spiral. Using Maxwell's equations and boundary conditions, the electromagnetic field of the magnetic dipole radiation can be expressed as formula (2), in which ω is the angular frequency of the alternating electromagnetic field, R is the distance from the magnetic dipole source to the observation point, and K is the wavenumber, which is complex in a conducting medium. Formula (2) contains the term KR. The distance between the magnetic dipole source and the receiving point is labelled R, and the wavelength of the radiated electromagnetic wave in the propagation medium is labelled λ.
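Since the paper's numbered formulas did not survive extraction, it is worth recalling the textbook near-zone (quasi-static) field of a dipole with moment m, which is the form the surrounding text describes (the authors' exact normalization may differ):

$$\mathbf{H}(\mathbf{R}) \approx \frac{1}{4\pi R^{3}}\left[\,3\,(\mathbf{m}\cdot\hat{\mathbf{R}})\,\hat{\mathbf{R}}-\mathbf{m}\,\right],\qquad KR\ll 1,$$

whose vertical component, as measured by the single-component sensors, is

$$H_{z}=\frac{1}{4\pi R^{5}}\left[\,3\,(\mathbf{m}\cdot\mathbf{R})\,R_{z}-m_{z}R^{2}\,\right].$$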
The electromagnetic field transmitted by the magnetic dipole can be divided into three regions: (1) when KR ≪ 1, it is called the near zone, also known as the quasi-stationary zone or the zone of stability; (2) when KR ≫ 1, it is called the far zone; (3) the region between the near zone and the far zone is called the intermediate zone. Usually, R ≪ 0.1λ in the near zone. Considering the target's working frequency, the positioning region of the target is in the near-field zone of the magnetic dipole source. The distribution of the electromagnetic field in the near zone of the alternating magnetic dipole approximates that of the static magnetic dipole (ω ≈ 0 and hence K ≈ 0), which is similar to a constant, stable field. It is assumed that the magnetic moment of an alternating magnetic dipole source located at the point P0(x0, y0, z0) is recorded as M0. The magnetic vector potential and magnetic fields at the receiving point P(x, y, z), and hence the three component magnetic fields acquired by a tri-axis magnetometer, then follow. From a strictly mathematical point of view, at least six single-component sensors are required, since there are six unknown quantities, the three position coordinates P0(x0, y0, z0) and the three moment components M0(Mx0, My0, Mz0), and each sensor provides only one equation. The Static Locating Method Based on a Single Component of the Magnetic Field. Assume that the measuring array, consisting of six single-component magnetic field sensors, is as shown in Figure 2, with coordinates Pn(xn, yn, zn), where 1 ≤ n ≤ 6. The alternating magnetic dipole source is at the point P0(x0, y0, z0), and the vertical component of the magnetic field it generates is recorded at each sensor. Using coherent demodulation, the alternating magnetic field Hn can be transformed into the varying curve Hzn [14]. The localization can then be formulated as a nonlinear unconstrained optimization problem with objective function E0, built from the coefficient matrix of the magnetic-moment parameters, the varying curves of the vertical component of the alternating magnetic dipole in the near field obtained by coherent demodulation [14,15], and the coefficient matrix of the target positions. Simulations. The measuring array consisting of six single-component magnetic field sensors lies in the xOy plane of the Cartesian coordinate system, as shown in Figure 3, with origin O. The magnetic target at point P moves along a straight line from P(−20, −20, 2) to Q(20, 20, 2). The velocity is a constant 10 m/s along the x-axis and 10 m/s along the y-axis. The coordinates of the six sensors are P1(−2, 1, 0), P2(0, 1, 0), P3(2, 1, 0), P4(−2, −1, 0), P5(0, −1, 0), and P6(2, −1, 0). The vertical component of the alternating magnetic fields is acquired by this array. As shown in Figure 6, the locating results show good agreement with the actual values predetermined in the simulation. It can also be seen that the target moved from −20 m to 20 m along the X-axis with an average velocity of 10 m/s. The result along the Y-axis is the same as that along the X-axis.
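The locating model above reduces to nonlinear least squares in the six unknowns (three position coordinates, three moment components). Below is a minimal Python sketch of such a scheme; scipy's differential evolution stands in for the genetic algorithm and Levenberg-Marquardt for the Gauss-Newton refinement, the free-space dipole model is assumed, and the search bounds are illustrative.

```python
import numpy as np
from scipy.optimize import differential_evolution, least_squares

def hz_model(params, sensors):
    """Vertical field component of a quasi-static dipole.
    params = [x0, y0, z0, Mx, My, Mz]; sensors is an (n, 3) array."""
    p0, m = params[:3], params[3:]
    r = sensors - p0                       # vectors from dipole to sensors
    R2 = np.einsum("ij,ij->i", r, r)
    return (3.0 * (r @ m) * r[:, 2] - m[2] * R2) / (4.0 * np.pi * R2 ** 2.5)

def locate(sensors, hz_meas, box=30.0, mmax=100.0):
    resid = lambda p: hz_model(p, sensors) - hz_meas
    bounds = [(-box, box)] * 3 + [(-mmax, mmax)] * 3   # assumed search region
    # global search (GA-like), then local Gauss-Newton-type refinement
    coarse = differential_evolution(lambda p: float(np.sum(resid(p) ** 2)),
                                    bounds, seed=0)
    fine = least_squares(resid, coarse.x, method="lm")
    return fine.x
```

With six or more sensors, the residual vector is long enough to determine all six unknowns, which is why the paper requires at least six single-component magnetometers.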
The result along the Z-axis is a constant value of 2 m from 1 s to 5 s. From the results of the above simulation, the position information calculated by the model is completely consistent with the predetermined position information of the moving target. These verify the feasibility of localizing an alternating magnetic dipole source using the single-component magnetic field. Experiment. The alternating magnetic dipole source is realized by a solenoid (see Figure 7(a)). The magnetic field is acquired by a measuring array consisting of eight single-component magnetometers (see Figure 7(b)). The frequency of the sinusoidal signal emitted by the solenoid is set to 500 Hz. The measuring array of eight single-component magnetometers collected the vertical component of the magnetic field, which was transferred to a PC via a data acquisition card. The sampling rate was set to 5000 Hz. Because the experimental environment is not ideal, there is strong interference at the power frequency and other frequencies, so it is impossible to use the signals collected by the single-component inductive magnetic field sensors directly. The collected signals were therefore passed through a band-pass filter with cut-off frequencies of 480 Hz and 520 Hz. Taking the signal collected by sensor #2 as an example, Figure 9 shows the time-domain signals before and after filtering; it also shows that the signals emitted by the source, as collected by sensor #2, were well extracted. Figure 10 shows the magnetic signals collected by induction sensors #1 to #4 after filtering, and Figure 11 shows those collected by induction sensors #5 to #8 after filtering. As shown in Figure 12, the varying curves of the alternating magnetic fields collected by induction sensors #1 to #4 were obtained by coherent demodulation [14,15]. As shown in Figure 13, the varying curves of the alternating magnetic fields collected by induction sensors #5 to #8 were obtained by coherent demodulation [14,15]. Since the radiating rod inevitably sways during movement, the curves obtained by coherent demodulation show a certain amount of shaking compared with the smooth curves of the simulation. The peak values of sensor #1 in Figure 12 are significantly greater than those of sensors #2, #3, and #4 located in the same line. At the same time, it can be found that the peak values of sensor #7 in Figure 13 are significantly smaller than those of sensors #5, #6, and #8 located in the same line. These differences were caused by the different sensitivities of the sensors. In order to reduce the impact of the different sensitivities, the signals collected by sensor #7 and sensor #1 were excluded from the final positioning solution. Using the hybrid algorithm combining the Gauss-Newton algorithm and the genetic algorithm, the positioning results of the target in the X direction are shown in Figure 14. The location result is about −3 m in the X direction from 20 s to 55 s, showing good agreement with the actual value. This is because the magnetic field signal gradually increases as the distance between the target and the sensor decreases. The error curve in the X direction is shown in Figure 15. It can be found that when the target passed through the array, the positioning error was very small and the positioning effect was very good. The average positioning error is 0.17 m from 20 s to 55 s.
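The pre-processing just described can be sketched as follows (the 480-520 Hz pass-band, the 500 Hz source, and the 5000 Hz sampling rate are from the text; the filter order and the demodulation low-pass cut-off are illustrative assumptions):

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 5000.0   # sampling rate (Hz), as in the experiment
F0 = 500.0    # frequency of the solenoid signal (Hz)

def demodulate(x, t):
    # 4th-order Butterworth band-pass, 480-520 Hz, zero-phase filtering
    b, a = butter(4, [480.0 / (FS / 2), 520.0 / (FS / 2)], btype="bandpass")
    xf = filtfilt(b, a, x)
    # coherent demodulation: mix with quadrature references at F0, then low-pass
    i = xf * np.cos(2.0 * np.pi * F0 * t)
    q = xf * np.sin(2.0 * np.pi * F0 * t)
    bl, al = butter(4, 20.0 / (FS / 2))    # 20 Hz low-pass (assumed)
    return 2.0 * np.hypot(filtfilt(bl, al, i), filtfilt(bl, al, q))
```

The returned envelope is the "varying curve" of the alternating field that feeds the localization solver.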
However, as the target moved away from the array, the positioning error became larger and the positioning effect became poorer. As shown in Figure 16, the average velocity was about 0.2 m/s in the Y direction from 20 s to 55 s. This also shows good agreement with the actual value, with disagreement at other times. The error curve between the positioning result and the actual movement trajectory of the target in the Y direction is shown in Figure 17. It can be found that when the target passed through the array, the positioning error was very small and the positioning effect was very good. As the simulation result in the Z direction was the same as that in the X direction, the analysis is not repeated in this paper. Conclusions Most traditional research on magnetic positioning technology is based on locating magnetic targets from static magnetic anomalies, and the positioning effect is easily affected by geomagnetic anomalies and other magnetic interference noise. The magnetic field positioning methods of the alternating magnetic dipole model studied here have strong anti-interference ability. The methods can overcome geomagnetic environmental interference and reduce the influence of interference signals at other frequencies on the positioning through signal processing. Using single-component magnetometers can reduce costs and avoid the steering-differential calibration of tri-axis magnetometers. The theoretical analysis of the simulation and the experimental results showed that the position information agreed well with the actual moving state of the target, which verified the feasibility and practicability of the localization algorithm. It is of great significance for engineering applications. Data Availability The data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest The authors declare that they have no conflicts of interest.
3,651.2
2021-07-06T00:00:00.000
[ "Engineering", "Physics" ]
Palatini-Born-Infeld Gravity, Bouncing Universe, and Black Hole Formation We consider the Palatini formalism of the Born-Infeld gravity. In the Palatini formalism, the only propagating mode is the graviton, a situation different from that in the metric formalism. We discuss the FRW cosmology by using an effective potential. In particular, we consider the condition under which a bounce could occur. We also give some speculations about black hole formation. I. INTRODUCTION Motivated by the accelerating expansion of the present universe, many kinds of gravity theories beyond the Einstein gravity are being considered (for a review, see [1]). In this paper, we consider the Born-Infeld gravity [2] in the Palatini formulation [3]; its cosmology has been considered in [4]. In the metric formulation of the Born-Infeld gravity, the theory includes a ghost in general, and we need to tune the action so that the ghost does not appear [2]. In the Palatini formulation, however, as we show explicitly, no ghost appears [3] and the only propagating mode is the massless graviton. Even in the Palatini-Born-Infeld gravity, the Schwarzschild and Kerr black hole space-times are exact solutions. We then calculate the entropy of the Schwarzschild black hole and find that the entropy is not changed from that in the Einstein gravity. We consider the FRW cosmology by including dust as matter and show that a bouncing universe can be realized, whose behavior is, in some sense, similar to that of loop quantum gravity [5][6][7]. We also consider the formation of black holes through the collapse of a sphere of dust. We should note that the previous works imposed too strong constraints on the variables; we give a more general treatment in this paper. Because the pressure of the dust vanishes, we can regard the inside of the sphere as an FRW universe. Then, by using the results for the FRW universe, we show that small black holes could not be formed through the bouncing, although large black holes might be created. In the next section, we show that the only propagating mode is the massless graviton. In Section III, we calculate the entropy of the Schwarzschild black hole. In Section IV, we consider the FRW cosmology. After that, in Section V, we investigate the formation of black holes. The last section is devoted to the summary. II. ABSENCE OF GHOST In this section, we show that no ghost appears in the Palatini formulation of the Born-Infeld gravity [3,4]. The action is given by Eq. (1). Here $S_\mathrm{matter}$ is the action for the matter fields, $R_{\mu\nu}$ is given by $R_{\mu\nu} = -\Gamma^{\rho}_{\mu\rho,\nu} + \Gamma^{\rho}_{\mu\nu,\rho} - \Gamma^{\eta}_{\mu\rho}\Gamma^{\rho}_{\nu\eta} + \Gamma^{\eta}_{\mu\nu}\Gamma^{\rho}_{\rho\eta}$, and we regard the connection $\Gamma^{\rho}_{\mu\nu}$ as a variable independent of the metric $g_{\mu\nu}$. In the Palatini formulation, the only propagating mode is the graviton, a situation different from that in the metric formulation [2]. In [2], the following action was investigated. Here $X_{\mu\nu}$ is a rank-two tensor, built from sums of products of curvatures, and $R_{\mu\nu}$ is defined in terms of the metric tensor $g_{\mu\nu}$ as in the Einstein gravity. In [2], Deser and Gibbons have shown that $X_{\mu\nu}$ cannot be arbitrary but is constrained for consistency. It is well known that models including higher derivative terms generate ghosts in general. The ghost appears from terms given by sums of products of two curvatures, like $\alpha_1 R^2 + \alpha_2 R_{\mu\nu}R^{\mu\nu} + \alpha_3 R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}$. What Deser and Gibbons have shown is that we can choose $X_{\mu\nu}$ to avoid the ghost, but $X_{\mu\nu}$ is not uniquely determined.
An important point is that if $X_{\mu\nu} = 0$, there always appears a ghost. We now show that the action (1), where the curvature is given in terms of the connection, which can be regarded as a variable independent of the metric, does not generate a ghost. By the variations of the metric $g_{\mu\nu}$ and the connection $\Gamma^{\lambda}_{\mu\nu}$, we obtain Eqs. (3) and (4), respectively. Here $P_{\mu\nu}$ is defined by Eq. (5). It is clear that the Minkowski space-time is a solution of Eqs. (3) and (4). Eqs. (3) and (4) tell us that the connection is given by that in the Einstein gravity, which we now show. Multiplying Eq. (3) by $\nabla_{\lambda}$ and using (4), we obtain an equation which can be solved with respect to $\Gamma^{\lambda}_{\mu\nu}$, giving Eq. (8), identical to the expression in the Einstein gravity. Since the Minkowski space-time solves Eqs. (3) and (4), we now consider the perturbation from the Minkowski background, $g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}$. By using (8) and keeping only the terms linear in $h_{\mu\nu}$, we obtain the perturbed quantities, Eqs. (11) and (12). By substituting Eqs. (11) and (12) into (3) and keeping only linear terms, we find an equation identical to that for the graviton in the Einstein gravity. Therefore the only propagating mode is the graviton, and no other propagating mode, such as a ghost, appears; this situation is different from that in the metric formulation [2]. III. BLACK HOLE ENTROPY Eqs. (3) and (4) tell us that the vacuum solutions of the Einstein gravity are also exact solutions of the Palatini-Born-Infeld gravity. In particular, the Schwarzschild and Kerr space-times are solutions. We may then consider the entropy of the Schwarzschild black hole. The prescription to obtain the entropy is given in, for example, [8]. For technical reasons, instead of (1), we consider the following action. Here $\tilde g_{ij}$ is the metric of the two-dimensional sphere and $M = \mu/(16\pi\kappa^2)$ is the mass of the black hole. The parameter $l$ is the length parameter of the anti-de Sitter space-time. By Wick rotating the signature to the Euclidean one and substituting the solution (15), we now evaluate the action, obtaining expression (17). Here $T$ is the Hawking temperature and $r_H$ is the radius of the horizon. We introduce $r_\infty$ in order to regularize the expression (17), which diverges when $r_\infty \to \infty$. The divergence can be renormalized by subtracting the contribution from the background without the black hole. The factor $e^{\rho(\mu=0)-\rho(\mu\neq 0)}$ is introduced so that the periodicity of the Euclidean time in the background coincides with the periodicity of the Euclidean black hole space-time. We then take the limit $r_\infty \to \infty$. We identify the free energy $F$ by $F = TS$. Then, by using (19) etc., and considering the limit $\lambda \to 1$, that is, $l \to \infty$, we find that the entropy $S$ is given by $S = A/(4G)$. Here $A$ is the area of the horizon, $A = 4\pi r_H^2$, and $\kappa^2 = 8\pi G$. The entropy is thus not changed from that in the Einstein gravity. In particular, we should note that the entropy does not depend on the parameter $b$. IV. FRW UNIVERSE WITH DUST We consider the FRW space-time with a flat spatial part and assume that the non-vanishing components of the connection are given in terms of functions $A$, $B$, and $C$, Eq. (27). (In the Einstein gravity, we have $A = 0$ and $B = C = H \equiv \dot a/a$.) Then the Ricci tensors follow, and the matter is given by dust, whose pressure $p$ vanishes and whose energy density is denoted by $\rho$.
Then we obtain the following equations. By using (27), (30), (31), and (33), we find Eq. (34). We may delete $A$ by using (34) and obtain Eq. (35). (In the previous works, the FRW metric was assumed for $P_{\mu\nu}$ in (5), and the connection $\Gamma^{\lambda}_{\mu\nu}$ was given by $P_{\mu\nu}$; this, however, reduces the degrees of freedom in $\Gamma^{\lambda}_{\mu\nu}$ so that $A = 0$ and $B = C$.) When $b < 0$, Eq. (29) or (35) tells us that there is a maximum energy density $\rho_\mathrm{max}$. When we consider the shrinking universe, where $H < 0$, does the energy density $\rho$ go to the maximum $\rho_\mathrm{max}$ asymptotically, or does it bounce at $\rho = \rho_\mathrm{max}$? If the energy density $\rho$ went to the maximum $\rho_\mathrm{max}$ asymptotically, there would have to be a static solution where $H = 0$ and $B$ and $C$ are constant. If we assume that $H = 0$ and $B$ and $C$ are constant, however, Eqs. (37) and (38) tell us that $B = C = 0$, and therefore Eq. (35) gives $\rho = 0$, which contradicts the assumption. Therefore the energy density $\rho$ does not go to the maximum $\rho_\mathrm{max}$ asymptotically. Furthermore, by using (35) and (36), we obtain Eq. (40). Then, when $\rho = \rho_\mathrm{max}$, we find $1 + b(\dot B + 3HB) = 0$, which tells us that $\dot C + 2C^2 - CH$ diverges positively, due to Eq. (36), and therefore $R_{tt}$ in (28) diverges; here we used (34). Therefore $\rho$ cannot reach $\rho_\mathrm{max}$. We now delete $B$ and $C$ in (35), (36), (37), (38) and obtain a single equation with respect to the scale factor $a$. We now assume $\rho = \rho_0 a^{-3}$, which can be obtained from the conservation law $\dot\rho + 3H\rho = 0$. Then, by combining (36) and (38), we obtain Eq. (43). Furthermore, by combining (40) and (43), we find Eq. (44), which expresses $B$ in terms of $H$ and $\rho$. On the other hand, Eqs. (35) and (36) give Eq. (45), and by using (37) and (45) we obtain a further relation. A single equation with respect to the scale factor $a$ can then be obtained by deleting $B$ in (40) using (44), namely Eq. (47). Because $H = \dot a/a$ and $\dot H = \ddot a/a - \dot a^2/a^2$, by using (41), Eq. (47) can be rewritten as a single equation for the scale factor $a$. If we use the e-foldings $N$ defined by $a = e^N$, we obtain Eq. (50). By analogy with the Newton equation of classical mechanics, the first term on the r.h.s. may be regarded as a drag force and the second term as a force derived from a potential, which we denote by $F(N)$. We should note that the potential force $F(N)$ is positive independently of the sign of the parameter $b$, and therefore the force acts so that the e-foldings $N$ increase. Then, even if the universe is shrinking, it turns to expansion. Computing $F'(N)$, we find that when $b > 0$ there is a maximum $F(N) = 2/(3b)$ at $b\kappa^2\rho_0 e^{-3N} = 8$, while when $b < 0$ we find $F'(N) < 0$, and therefore there is a maximum $F(N) = -4/(3b)$ at $b\kappa^2\rho_0 e^{-3N} = -1$. We should also note that $F(N) \to 0$ when $N \to +\infty$ independently of the sign of $b$, and, if $b > 0$, $F(N) \to 0$ when $N \to -\infty$.
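Schematically, the mechanical analogy being used here has the form (the explicit coefficients belong to the equations that did not survive extraction; only the generic structure described in the text is shown):

$$\ddot N = -(\text{drag})\,\dot N + F(N),\qquad F(N) = -V'(N),$$

and, after the rewriting discussed next, the combination

$$E \equiv \tfrac{1}{2}\dot N^{2} + V(N)$$

is conserved, so the e-foldings $N$ behave like the coordinate of a particle in the potential $V(N)$: a shrinking universe bounces when its "energy" $E$ cannot carry it past the relevant barrier in $V$.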
On the other hand if E > V max , the universe will reach the singular point at 1 + bκ 2 ρ 0 e −3N = 0. In order to estimate E, we now solve (50) by assuming N ≫ 1. Then (50) can be rewritten as Then in the limit b → 0, we find Then for the finite b, by writing N = 2 3 ln t t0 + δN and by using (59), we find Then we find Here C ± are arbitrary constants. Because the first and the second terms do not depend on b, we may put C ± = 0. If we keep C + , we find E diverges and therefore physically not acceptable. Even if keep C − , this term does not contribute to E. Then for the large N , by using the expression of E in (54) with (55), we find Therefore when b > 0, the shrinking of the universe will always stop and turn to expand, that is, we obtain the bouncing universe. On the other hand, when b < 0, the shrinking universe always reaches the singular point at 1 + bκ 2 ρ 0 e −3N = 0. When b > 0, we may estimate N when the shrinking universe turns to expand and therefore V (N ) = E. When bκ 2 ρ 0 ≫ 1, by using the expression of V (N ) in (56) and E in (63), we find On the other hand, when bκ 2 ρ 0 ≪ 1, by using (57), we find e 3N ∼ 9 64 bκ 2 ρ 0 . We should note that Eq. (54) can be identified with the first FRW equation because H =Ṅ and rewritten as 3 For large N , the r.h.s. in (66) can be expanded as a power series with respect to e −3N and we find 3 The above structure is similar to the loop quantum cosmology [5][6][7] although the critical energy density ρ c is not given by ρ l but by using (66) or (67), we find Therefore the obtained behavior of the bouncing is similar to that in the loop quantum gravity, there is quantitative difference. V. BLACK HOLE FORMATION BY THE COLLAPSE OF DUST Now we consider if black hole can be formed by the collapse of dust. We now assume there is a spherically symmetric and uniform ball made of dust and consider the collapse of ball. This assumption is valid because the pressure of the dust vanishes nor the density of ball cannot be uniform because the pressure should vanish at the boundary between the ball and bulk, which is assumed to be vacuum. Inside the ball, the space-time can be regarded with the shrinking FRW universe as in the last section. The results in the previous section tell that there could be a bouncing. If the radius of the ball at the bouncing is larger than the Schwarzschild radius, the black hole cannot be formed. We assume the ball of dust with radius R at N = N 0 . We choose N 0 to be large enough. Then the total mass M is given by We now consider the case that b > 0. First we assume Then by using (64), we find N = N b at the bouncing is given by which give the radius R b at the bouncing by On the other hand, the Schwarzschild radius R s is given by Then we find Therefore large black hole, where M 2 ≫ b κ 4 , can be formed because R b ≪ R s and therefore the bouncing can occur after the formation of the horizon. Instead of (70), we may also consider the case Then by using (65), we find that the bouncing occurs when and the radiusR b at the bouncing is given by and we obtain Therefore small black hole, where M 2 ≪ b κ 4 , cannot be formed because R b ≫ R s . We now consider the case that b < 0. In this case, there is a maximum in the energy density ρ given by (39). We now consider the meaning of the density in the black hole formation. We now assume that the black hole is formed by the collapse of the star made of the dust with radius r. Then the energy density ρ is given by Hereρ 0 is a constant. 
Then the mass $M$ and the Schwarzschild radius $R_s$ of the star follow accordingly. Eq. (39) then tells us that there is a minimum radius $r_\mathrm{min}$. The black hole cannot be formed if $r_\mathrm{min} > R_s$; therefore small black holes may be prohibited if $b < 0$, but large ones are not. This result may suggest that the creation of primordial black holes is prohibited. VI. SUMMARY We have shown that the Born-Infeld gravity in the Palatini formulation has several interesting properties; in particular, the only propagating mode is the massless graviton and no ghost appears. We investigated the entropy of the Schwarzschild black hole and have shown that the entropy is identical to that in the Einstein gravity. We also investigated the FRW cosmology with dust as matter; when $b > 0$, a bounce occurs. The cosmology in the Palatini-Born-Infeld gravity has been investigated in several papers, but in most of the previous works the connection is assumed to be given by identifying $P_{\mu\nu}$ with the metric of the FRW universe; this requirement is too strong, and we have considered a more general case. By applying the results for the FRW universe, we also investigated the collapse of a sphere of dust and the formation of black holes. We have then shown that, while large black holes might be formed, the formation of small black holes is prohibited.
4,200
2014-09-05T00:00:00.000
[ "Physics" ]
Benefits and Challenges of Virtual-Reality-Based Industrial Usability Testing and Design Reviews: A Patents Landscape and Literature Review With the introduction of new devices, industries are turning to virtual reality to innovate their product development processes. However, before the technology's possibilities can be fully harnessed, certain constraints must be overcome. This study identifies the benefits and challenges of virtual-reality-based usability testing and design reviews in industry through a patents and articles review. We searched Derwent Innovation, Scopus, and Web of Science and identified 7 patent filings and 20 articles. We discovered an increase in patent filings since 2016 and strong development in the technology space, offering opportunities to enter an area while it is still young. The most frequently researched field is the automotive industry, and the most used device is the HTC VIVE head-mounted display, which is frequently paired with motion capture systems and the Unity 3D game engine. Virtual reality benefits design reviews and usability testing by providing visualization from new angles that stimulates novel insights, increasing team engagement, offering more intuitive interactions for non-CAD specialists, saving redesign cost and time, and increasing participants' safety. The challenges faced by virtual-reality-based prototypes are a lack of realism due to unnatural tactile and visual interactions, latency and registration issues, communication difficulties between teams, and unpleasant symptoms. While these constraints prevent virtual reality from replacing conventional design reviews and usability testing in the near future, it is already a valuable contribution to the industrial product development process. Introduction Given the complexity of products under development, designers must perform a number of procedures to guarantee that the finished product fits customer demands and is accepted by the market [1]. As a result, technological advancements are continually assisting the improvement of product quality [2]. The strategies employed to ensure the usability of the product include elements of competitiveness, distinctiveness, and good practice [3]. In standard ISO 9241 of the International Organization for Standardization (ISO), usability is defined as "the extent to which a system, product or service can be used by specified users to achieve specific goals with effectiveness, efficiency, and satisfaction in a specified context of use" [4]. The usability level assesses how enjoyable and simple the product is to use based on the customer's experience. The usability attribute is influenced by the customer's perception of how the product is used. An example of this would be a satisfactory level of perceived experience when a simple operation, such as turning on a radio system, followed by an attempt to set up a specific radio station, is easily completed as expected by the customer. Planning The knowledge bases that will be investigated are determined during the planning step [22]. In the case of patents, a search for records was conducted in the Derwent Innovation Index database. Derwent Innovation was selected because it has 39.4 million patent families and 81.1 million patent records, with coverage from 59 international patenting authorities and two journal sources.
A database must be evaluated using certain key criteria for a patent search, and commercial tools, such as Derwent Innovation, offer unique resources that improve the database's capacity to retrieve information. The "Smart Search" tool, for example, makes use of artificial intelligence to improve keyword discovery [23]. Another feature is the Derwent World Patents Index (DWPI), the world's most comprehensive database of enhanced patent information, with expanded patent titles and abstracts, and English abstracts of the original patents. It has a sophisticated classification system and patent family information with non-convention equivalent identification [24]. Section 2.3 provides details on the "Smart Search" and DWPI features. Regarding articles, the search was conducted in the scientific databases Scopus and Web of Science. These databases were chosen because they are reliable and multidisciplinary scientific databases of international scope with comprehensive coverage of citation indexing, providing the best data from scientific publications. Scopus now includes 81 million curated documents [25], whereas Web of Science covers more than 82 million entries [26]. Defining the Scope Defining the scope relates to properly stated research questions [22]. Three pertinent research questions were selected for this systematic review, namely: Q1: How are patents in industrial virtual-reality-based usability testing and design review characterized? Q2: In terms of application fields, methods, hardware, and software involved, how is current knowledge on the application of virtual reality in usability testing and design review in industry defined? Q3: What are the benefits and challenges of using virtual reality for usability testing and design review in industry? Literature Search The literature search step entails investigating the databases defined in the planning step with a particular string depending on the research questions posed in the defining-the-scope step [22]. The candidate search phrases were collected from the titles, abstracts, and keywords sections of two previously published articles. The prospective search phrases were then peer-reviewed by five other members of our research team, who have experience of utilizing virtual reality for product design in industry. The following candidate peer-reviewed search terms were provided to the Derwent Innovation database: ("USABILITY EVALUATION" or "USABILITY TESTING" or "USABILITY ASSESSMENT" or "USABILITY ENGINEERING" or "DESIGN REVIEW") and ("MIXED REALITY" or "VIRTUAL REALITY" or "IMMERS*" or "IMMERS* PROTOTYP*" or "VIRTUAL PROTOTYP*") and (INDUSTR* or "PRODUCT DEVELOPMENT" or "PRODUCT DESIGN"). Next, to identify relevant keywords, we submitted these search phrases into the Derwent Innovation database's "Smart Search" tool. The "Smart Search" engine examined them to generate significant words about the technology mentioned in the text, and then extended those key terms to include synonyms. Next, using the expanded search criteria, the tool scanned all Derwent Innovation patent databases and returned the most relevant patents linked to that technology. More information on the technology behind the "Smart Search" resource may be found at [23]. Similar search phrases were used for article retrieval, with slight modifications to accommodate the search engine specifications of the Scopus and Web of Science knowledge databases.
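To make the string construction concrete, the sketch below rebuilds the three concept groups in Python and shows one plausible adaptation to Scopus's field syntax. The concept groups are copied from the string above; the TITLE-ABS-KEY() wrapper illustrates the kind of slight modification the article databases require and is our assumption, not the authors' exact Scopus string.

usability_terms = ['"USABILITY EVALUATION"', '"USABILITY TESTING"', '"USABILITY ASSESSMENT"',
                   '"USABILITY ENGINEERING"', '"DESIGN REVIEW"']
reality_terms = ['"MIXED REALITY"', '"VIRTUAL REALITY"', '"IMMERS*"',
                 '"IMMERS* PROTOTYP*"', '"VIRTUAL PROTOTYP*"']
domain_terms = ['INDUSTR*', '"PRODUCT DEVELOPMENT"', '"PRODUCT DESIGN"']

def or_group(terms):
    # Join one concept group with OR and parenthesize it.
    return "(" + " OR ".join(terms) + ")"

groups = (usability_terms, reality_terms, domain_terms)

# Derwent-style string: the three concept groups combined with AND.
derwent_query = " AND ".join(or_group(g) for g in groups)

# Scopus advanced search wraps each group in a field code; TITLE-ABS-KEY
# restricts matching to titles, abstracts, and keywords.
scopus_query = " AND ".join("TITLE-ABS-KEY" + or_group(g) for g in groups)

print(derwent_query)
print(scopus_query)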
The search was carried out in January 2022, and the preliminary identification yielded 11 individual patent records and 180 articles. Assessing the Evidence Base The assessment step applied inclusion and exclusion criteria filters to reduce the number of related records identified during the literature search step [22]. By combining the following exclusion criteria, we were able to limit the number of records returned during the previous step: • E1: Exclude patents filed or articles published before 2016; • E2: Exclude articles not written in the English language; • E3: Exclude patent applications that are no longer alive; • E4: Exclude patents and articles not related to the industrial domains, such as Medicine, Social Sciences, Physics, and Environmental Science. According to researchers [27,28] and practitioners [29,30], the advent of technologically advanced virtual reality headsets in 2016 represented a breakthrough for virtual reality applications; therefore, we excluded documents filed or published before 2016. We applied the E2 criterion exclusively to the articles, since the Derwent Database Enhanced Patent Data includes patent documents that have been translated into English. After applying the exclusion criteria, we screened nine individual patent records and 78 articles. Synthesizing and Analyzing The retrieved documents were then combined with project-related features [22]. The documents were subjected to a single screening, in which a reviewer with expertise in usability testing and virtual reality technology inspected each record in order to locate relevant patents and articles linked to the Q1, Q2, and Q3 research questions. The papers were chosen based on their title, abstract, and author keyword fields, as well as their connection to the project's purpose: 2. Virtual reality providing users with immersive experiences (because some researchers or database-automated mechanisms correlate the terms "mixed reality", "augmented reality", or "virtual environment" with immersion properties); 3. Studies of the usability of virtual reality devices and the equipment itself (rather than a usability evaluation of the industrial product being developed). Finally, after excluding duplicate entries from the Scopus and Web of Science databases, we included 7 patent records and 20 articles for examination. The patents were analyzed using Derwent analytical and insights tools. The retrieved documents were uploaded to the Mendeley Reference Manager tool, and spreadsheets and visualizations were created in Microsoft Excel. Figure 1 depicts the flow of the systematic review from searching the published research to synthesizing processes.
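As an illustration of the screening logic, here is a minimal Python sketch of the E1-E4 filters applied mechanically to bibliographic records. The record fields and helper names are assumptions made for the example; the authors applied these criteria through database filters and manual review rather than code.

from dataclasses import dataclass, field

EXCLUDED_DOMAINS = {"Medicine", "Social Sciences", "Physics", "Environmental Science"}

@dataclass
class Record:
    title: str
    year: int
    language: str = "English"
    is_alive: bool = True                    # patent legal status, used by E3
    domains: set = field(default_factory=set)

def passes_screening(rec, is_patent):
    if rec.year < 2016:                                  # E1: before the 2016 breakthrough
        return False
    if not is_patent and rec.language != "English":      # E2: applied to articles only
        return False
    if is_patent and not rec.is_alive:                   # E3: dead patent applications
        return False
    if rec.domains and rec.domains <= EXCLUDED_DOMAINS:  # E4: only non-industrial domains
        return False
    return True

records = [Record("VR design review method", 2019, domains={"Engineering"}),
           Record("Early HMD study", 2014)]
kept = [r for r in records if passes_screening(r, is_patent=False)]
print([r.title for r in kept])  # ['VR design review method']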
Results and Discussion The research questions Q1, Q2, and Q3 were addressed in order to identify the opportunities, benefits, and constraints of using virtual reality for usability testing and design reviews in industry. In the sections that follow, we analyze our findings. Patents Landscape The seven patent records retrieved by the search strategy are shown in Table 1. To address the first research question, these patents were analyzed to answer frequent concerns and uncover patterns in assignees, filings per year, the International Patent Classification (IPC), and benefits. Q1: How are patents in industrial virtual-reality-based usability testing and design review characterized? In terms of assignees, their identification may help the discovery of industry leaders, the evaluation of possible rivals, and the identification of niche players. The widely fragmented dispersion of patent applications across many assignees is intriguing: just one assignee, CCB Fintech together with China Construction Bank, submitted two patents, whilst the remaining patents were each filed by a different assignee. Rather than large, similarly sized portfolios held by a few companies (a pattern that indicates an active, heavily invested competitive market that is difficult to enter), we discovered a large number of assignees, each with a small number of records, which points to a developing technology space. This domain may be entered by acquisition or rapid development. There are several companies, each with a small number of patents, indicating an opportunity to enter this area while it is still young, either by licensing existing technology, purchasing one of the players, or developing new technology that is not already patented. It is also worth noting that three of the six assignees are major financial institutions, while one university filed one of the seven patents. In terms of patent filings per year, Figure 2 illustrates the yearly filing of patents from 2016 to 2020. We omitted 2021 patents from the analysis since they were still being filed at the time of this study. We observed an initial period spanning the years 2016 and 2017, with no patents registered. This trend changed in 2018, followed by growth in 2019 and 2020. These results support the claim in [28] that the year 2016 marked a technological breakthrough in the domain of virtual reality. Before 2016, commercial virtual reality systems required users to connect a headset, controllers, and sensors to an external high-end computer, which was an expensive, bulky, and inconvenient setup.
Thus, the current all-in-one virtual reality systems are a significant step forward from only a few years ago [38]. The upward trend slowed in 2020, although one likely explanation is the 18-month patent secrecy period, under which applications are published only 18 months after filing. An important trend is that the global virtual reality market is projected to grow from $6.30 billion in 2021 to $84.09 billion in 2028 [18], and this rise may be even greater, given that the COVID-19 pandemic boosted the usage of virtual reality further [39]. As a consequence, this scenario suggests that the exponential increase in patent filings related to virtual reality, design review, and usability testing will continue for a few more years. The International Patent Classification (IPC) is an approach to determining a standard classification for registered patents, thereby enabling the search for and access to technical information accessible in documents connected to the same subject. The system is a hierarchical patent classification system used in over 100 countries to uniformly classify patent material. It creates a separation into classes and subclasses that are applicable to various technical domains and aids in the standardization of patent classification. Figure 3 illustrates the IPC classifications that we found in our review. Section G (Physics), which covers all physics-related material, is assigned to all seven patent filings. As the class level progresses and technological information is described in more depth, the most frequent class is G06 (computing; calculating or counting), with subclasses G06F (electric digital data processing), G06Q (data processing systems or methods), and G06T (image data processing or generation, in general). Other patents grouped under Section G are G02B (optical elements, systems, or apparatuses), G09G (arrangements or circuits for control of indicating devices using static means to present variable information), and G10L (speech analysis or synthesis; speech recognition; speech or voice processing; speech or audio coding or decoding). Section A (Human Necessities) was also used to classify one patent, grouped under the A61 class (medical or veterinary science; hygiene), with subclasses A61B (diagnosis; surgery; identification) and A61N (electrotherapy; magnetotherapy; radiation therapy; ultrasound therapy).
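Since IPC symbols encode this hierarchy positionally (one section letter, a two-digit class, and a subclass letter, with deeper group levels omitted here), a small Python sketch can make the decomposition explicit; the description table merely restates the classifications named above.

IPC_DESCRIPTIONS = {
    "G": "Physics",
    "G06": "Computing; calculating or counting",
    "G06F": "Electric digital data processing",
    "G06Q": "Data processing systems or methods",
    "G06T": "Image data processing or generation, in general",
    "A": "Human Necessities",
    "A61": "Medical or veterinary science; hygiene",
}

def ipc_levels(symbol):
    # Expand an IPC subclass symbol into its section, class, and subclass.
    levels = [symbol[0]]           # section, e.g. "G"
    if len(symbol) >= 3:
        levels.append(symbol[:3])  # class, e.g. "G06"
    if len(symbol) >= 4:
        levels.append(symbol[:4])  # subclass, e.g. "G06F"
    return levels

for level in ipc_levels("G06F"):
    print(level, "-", IPC_DESCRIPTIONS.get(level, "(not listed above)"))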
In terms of benefits, we examined the seven patents in relation to the advantages of each invention, as described by its authors, and the novelty of each invention, i.e., the unique innovative feature introduced by the inventor that is not conventional and constitutes an improvement on existing technology. The JP2021068278A patent [33] proposes to provide a design review system and method that reduce the time required for computer-aided design (CAD) data conversion. The proposed design review system includes a CAD apparatus, in which CAD data are produced or edited, and a data conversion unit that performs the conversion from CAD data to virtual reality data. The advantage of the invention is that the design review system shortens the time required for data conversion, such as the conversion from CAD data to virtual reality data. The US20210011593A1 patent [34] proposes a system for producing applications based on real-time accessibility assessments. The system identifies that a user is accessing an application on a device (such as a virtual-reality device), captures the real-time accessibility data, inputs them into a machine learning model, generates an accessibility score, and renders the application based on this score. The advantage of the invention is that the system identifies whether the user is accessing another application on the same user device and utilizes the accessibility score stored in a data repository to render the other application; as the amount of data stored in the data repository increases over time, the efficiency with which applications are rendered improves. The method enables the real-time accessibility score to incorporate the operating configuration of the user device, as well as the status of its hardware components, so that internet connectivity can be established effectively. The CN111414084A [32] and CN111414083A [31] patents are related. The former proposes a space usability testing laboratory comprising a test area for displaying a 3D model of the space based on immersive virtual reality, with the experience data of test users collected during the test process [32]. The latter proposes a method for the usability testing of the space, which involves obtaining a 3D model of the space to be tested, with the test task performed by a user wearing a virtual-reality-based wearable device [31]. According to the inventors, the advantages are that the user's understanding of the user experience may improve this experience, space usability testing is more convenient, and its results are more accurate. Furthermore, this manner of implementation can truly display the design appearance, the user experience is realistic, the test result is accurate, and the test consistency and efficiency are high. The US20210090343A1 patent [35] proposes a method for providing design reviews using virtual-reality devices, involving the processing of the interactions of users in a virtual-reality format and the generation of output on the basis of users' actions. The advantage of the invention is that the method would make it possible to generate a complete design review cycle in an easy manner, so that time consumption could be reduced effectively.
The inventors claim that the technology would open the door to a range of new applications that have not been possible until now and that the invention would revolutionize design reviews with a radically new experience using virtual reality. Multiple users might collaborate remotely and perform design reviews within the virtual reality environments. The US20190227626A1 patent [36] proposes a system for personalizing a human-machine interface (HMI) device based on the mental and physical state of a user. During the performance of a task in a simulation environment (such as training in virtual reality), the system extracts biometric features from data collected from body sensors and brain entropy features from electroencephalogram signals. Both data sources are correlated to generate a mental-state model. The mental-state model is deployed in an HMI device during the performance of a task in an operational environment for the continuous adaptation of the HMI device to its user's mental and physical states. The advantage of the invention would be that the continuous adaptation of the HMI to the mental and physical states of the user would reduce the workload and enhance decision-making. The need for unnecessary modifications in the interfaces would be eliminated, and the designs would be more user-centered and customized to the real needs of users. Finally, the KR20190088710A patent [37] proposes a method for assessing the usability of automotive infotainment systems that comprises facilitating interactions between numerous virtual infotainment systems and drivers in a virtual driving environment while investigating the usability of vehicle infotainment systems. The usefulness of a virtual infotainment system is determined by evaluating both the execution time of the operation command and the running condition of a virtual car as a result of the execution of the operation command. The virtual car's running state contains the virtual vehicle's speed, its distance from the preceding vehicle, the distance between the lane and the virtual vehicle, and the steering angle during the execution of a drive operating instruction. The novelty of the invention is its evaluation of the cognitive load of a driver using the vehicle infotainment system and its use of the driving state of the virtual vehicle, in accordance with the performance time and outcome of the operation command, to evaluate usability for each virtual infotainment system. Table 2 shows the twenty articles selected by the search strategy. These articles were reviewed in order to answer the research questions Q2 and Q3. It is worth noting that the attributes addressed by Q2 and Q3 are not abstracted at the same level and are not always mutually exclusive in the studies reviewed; furthermore, some studies did not mention some of these aspects. Our findings are provided in the subsections below. Scientific Mapping Q2: In terms of the application fields, methods, hardware, and software involved, how is current knowledge on the application of virtual reality in usability testing and design review in industry defined? Application Fields and Methods The application field of an article was defined, following the approach of [56], as the industry and/or technical environment targeted by the study. One of the twenty articles selected is a literature review [49] that assessed some of the studies under investigation.
As a result, we omitted it from our quantitative analysis of application fields in order to avoid counting the same study twice. The application fields observed can be divided into three main categories: (1) automotive industry, (2) industrial machinery manufacturing, and (3) laboratory environment or field undetermined. Figure 4 shows the distribution of papers in each category (Figure 4: automotive industry 37%, industrial machinery production 26%, laboratory environment or field unspecified 37%). The automotive industry was the subject of seven articles (37% of the 19 empirical studies) [43,46,48,50,52,54,55]. It was expected that a substantial number of virtual reality studies would be conducted in the vehicle industry. These findings are congruent with those of [56]. The importance of the automobile industry may be related to the fact that virtual technologies have long been utilized in this sector in a range of specialties and applications, such as manufacturing, training, and maintenance, to mention a few [13,57,58]. Departments such as design, engineering, maintenance, and assembly are already using the technology to support practically everything in the automotive sector, from product development to task assistance in machine assembly or maintenance procedures. Furthermore, the automotive industry is one of the most mature manufacturing industries, with cutting-edge technologies being used for the first time on a regular basis [56]. Regarding the evaluated systems, three studies examined virtual interaction with an automobile multimedia system, utilizing distinct basic functionalities and, as a result, producing diverse usability testing scenarios. Among the functions are a navigation system, air conditioning, a phone, a radio, driving assistance, and car parking. Other systems undergoing design review or usability testing include a BMW vehicle's exterior design and Audi [46] and Volvo multimedia stations.
Some usability tests guided interaction operations in which participants were immersed in dynamic virtual environments, such as driving a vehicle in a virtual city [46]. For dynamic testing [46,52], participants drove on public roads in a real-world setting, interacting with the vehicle's panel controls and performing tasks as assigned by the moderator. Each participant interacted with the digital system while driving around virtual streets in the virtual usability testing environment, while a moderator observed the participant's behavior and collected metrics. Two studies did not consider how individuals interacted with a product prototype. The authors of [43] focused on design issues, asking their participants to utilize a variety of assessment techniques to determine the best way to interact with a virtual automobile model. The following interaction options were investigated in the study: voice command, gestures, first-person vision, and physical controllers. Due to the variety of systems studied, several operations, such as air conditioning, navigation, telephone, and audio controls, were used by various studies in diverse contexts of use, command design, and visual representation. This limits the comparability of the results obtained in the research [46,48,52], since the use of standard methods for usability testing would allow a more thorough examination of the data and conclusions gathered from the various investigations. Five studies (26%) reviewed industrial machinery manufacture. Three of them [15,16,45] were published by the same research group and focused on industrial power units, while the other two looked at hydraulic pump production [41] and specific machines for automation technologies [53], respectively. Seven (37%) of the studies did not indicate the field of application [19,20,40,42,44,47,51]. It is common for virtual-reality applications developed by an academic research team to be tested in their own laboratory with prototypes and artifacts [59]. Figure 5 is an example of a study conducted in a laboratory environment. Figure 5. A study conducted in a laboratory environment and using the RULA score [44]. In terms of the methods and metrics used, we identified that the studies focused on the accuracy of experimental results when compared to testing with conventional CAD models and physical prototypes. The articles under consideration spanned a broad range of methods and metrics. The main approaches used by the studies under examination are listed in Table 3. Despite the fact that the reviewed studies utilized a wide variety of methods for collecting qualitative and quantitative data, some of them were referenced in several studies. Three studies used the Time to Complete the Task metric. The Time to Complete statistic is used to calculate how much time each participant spent on each task.
Table 3. Methods and metrics used in the studies reviewed.
Ref. | Metrics
[46,48] | Time to Complete the Task
[43,46,52] | System Usability Scale (SUS)
[46,48] | Number of Mistakes
[44,54] | Rapid Upper Limb Assessment (RULA)
The qualitative data were often gathered by providing a standardized questionnaire to respondents after they completed the tests. The System Usability Scale questionnaire was often used to measure the perceived user-friendliness of the system with which the participant interacted. The SUS approach is frequently used in industry and is effective for system comparison [46]. Another often-used metric is the Number of Mistakes, in which moderators use visual observation to count the instances in which participants performed actions other than those they were instructed to perform. The Rapid Upper Limb Assessment score was used in two studies. The RULA is one of the most frequently used measures for assessing employees' exposure to ergonomic risk while performing manual upper-body activities such as hand, neck, and limb twisting [46]. The Sense of Presence Inventory (ITC-SOPI) and the User Experience Questionnaire (UEQ) are two methodologies worthy of note. The Sense of Presence Inventory is a metric used to examine how people perceive physical space (defined as "a sense of physical placement in the mediated environment and interaction with and control over parts of the mediated environment") [52]. The User Experience Questionnaire is used to obtain evaluations from participants on the following aspects: attractiveness (overall impression of the product), perspicuity (ease with which the product can be understood and used), efficiency (ability to use the product efficiently), dependability (the feeling of being in control of interactions), stimulation (excitement and motivation to use the product), and novelty (perception of the innovation and creativity of the product). Other methods and metrics were only referenced in one study, such as failure modes and effects analysis (FMEA), criticality analysis (CA), and expected final distance [53]; intuitiveness and task weight [45]; readability of information, command display, and function controls [44]; as well as several others, such as heat maps, pupil diameter, heart rate (HR, beats per minute), breathing rate (BR, breaths per minute), activity (VMU), and posture, among others. Table A1 lists the methods and metrics specified in each of the articles reviewed. Some studies had methodological limitations that may have influenced their findings, specifically regarding the relationship between usability testing results obtained with physical prototypes and with virtual prototypes. In certain cases, participants had had prior interactions with the systems being assessed, which influenced the evaluation of the system's ease of use in the virtual environment [45]. Other methodological limitations that we observed relate to the quantity and profile of the participants in usability testing. The authors of [60] caution against involving people who have a link to the product under development, such as corporate executives.
Employees' involvement in usability testing may be unconsciously impacted by their relationship to the business, affecting the dependability of the data acquired. As a consequence, the authors recommend that the external target group, or future customers, participate in usability testing at some point throughout the development process. However, regarding the respondents' profiles, the usability testing included personnel from Volvo [52], BMW [43], and Audi [46], as well as university students [48]. Additionally, the quantity and profile of participants in usability testing may affect the accuracy of collecting specific metrics, thereby reducing the repeatability of the usability testing findings when the tests are repeated with different participants. For the SUS metric, for example, thirty-five individuals are recommended for experiments in order to obtain satisfactory results [61]. Table 4 shows the number of participants in the examined studies.
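As background on the most recurrent of these metrics, the SUS is computed with a fixed, well-known formula: ten items rated 1 to 5, where odd items contribute (rating - 1) and even items contribute (5 - rating), and the sum is multiplied by 2.5 to yield a 0-100 score. A minimal Python sketch, with invented ratings:

def sus_score(ratings):
    # Standard SUS scoring: ten items on a 1-5 scale.
    if len(ratings) != 10 or not all(1 <= r <= 5 for r in ratings):
        raise ValueError("SUS expects ten ratings on a 1-5 scale")
    contributions = [(r - 1) if i % 2 == 0 else (5 - r)  # items 1, 3, 5, ... sit at even indices
                     for i, r in enumerate(ratings)]
    return 2.5 * sum(contributions)

participants = [[4, 2, 5, 1, 4, 2, 5, 2, 4, 1],
                [3, 3, 4, 2, 4, 3, 4, 2, 3, 2]]
scores = [sus_score(p) for p in participants]
print(scores, "mean:", sum(scores) / len(scores))  # [85.0, 65.0] mean: 75.0

A single score is hard to interpret in isolation, which is one reason the sample-size recommendation above matters when systems are compared.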
Hardware Because virtual reality is a complex technology that combines interactive media, sensors, displays, human-machine interactions, simulations, computer graphics, and artificial intelligence technologies to expand human perception, virtual reality systems frequently require the use of multiple devices [56]. Depending on the application, virtual reality hardware may range from a basic computer to specific display devices, motion capture equipment, and interactive gadgets, such as wearable devices, cameras, head-mounted displays (HMD), and so on. Table 5 lists the hardware, gadgets, and apparatuses referenced in the evaluated publications. We found a vast variety of technologies, both in terms of the devices and the models employed in each category. Some studies did not specify which model was used. We discovered that the most frequently utilized hardware may be classified into seven basic categories: (1) head-mounted displays (HMD), (2) motion capture systems, (3) cockpits, (4) sensors, (5) cave automatic virtual environments (CAVEs), (6) interaction devices, and (7) glasses. These categories are not mutually exclusive, and the arrangement is often built by combining various technologies. Table 5. Hardware, equipment, gadgets, and apparatuses utilized in the examined research.
Ref. | Hardware | Category
[15,16,19,20,45,48,51,52,53,55] | HTC Vive | Head-mounted displays (HMD)
[46] | Oculus Rift | Head-mounted displays (HMD)
[43,50] | Microsoft Kinect | Motion capture systems
[41] | Three-walled room (two walls and a floor) | CAVE-like systems/immersive rooms
[44] | Four-wall room | CAVE-like systems/immersive rooms
[40] | Power wall projection setup | CAVE-like systems/immersive rooms
[41] | Nintendo Wii Remote | Interaction devices
[20] | Xbox controller | Interaction devices
[44] | Flystick | Interaction devices
[46,52] | Leap Motion optical tracking | Sensors
[54] | Tobii Pro Glasses | Glasses
[40] | Active shutter glasses (model unspecified) | Glasses
[41] | Stereo glasses (model unspecified) | Glasses
Both visualization and tracking technologies are required in a virtual-reality environment. Head-mounted displays and projection-based systems are the most frequently utilized virtual reality visualization technologies in the industrial sector. Head-mounted displays are devices that are affixed to the head of a virtual reality user and generally feature one or two screens as the image source, as well as a collimating lens between the eyes and the display [62]. Projection virtual-reality systems, on the other hand, include single or multiple projector-based powerwalls, as well as surrounding, walk-in installations based on numerous projection screens (e.g., CAVEs). The authors of [56] noted that the usage of CAVEs and head-mounted displays is unusual in industry, which they attribute to high costs. However, since head-mounted displays were the most widely used devices in the research we considered, we observed a different situation. One likely reason is that, until recently, the use of head-mounted displays was restricted due to their high cost and technological restrictions. However, there was a technical breakthrough in 2016, with the first public release of technologically mature virtual-reality head-mounted displays, such as the HTC Vive and the Oculus Rift [27,53]; since then, there has been an increase in worldwide research on virtual reality [28]. Not only academics, but also practitioners agree that the equipment launched in 2016 was a "very big breakthrough" for virtual reality applications [29,30]. Therefore, although the cost of these devices remains relatively high and technical limits exist, the price of virtual reality equipment has fallen year on year, technological constraints have decreased, and new features have been developed, resulting in the increased use of head-mounted displays. The HTC VIVE was the most commonly reported model in the studies that employed head-mounted displays. Ten of the studies examined used the HTC VIVE, well ahead of the second most frequently mentioned model, the Oculus Rift. The authors of [15,16,45] stated that they chose the HTC Vive because its tracking sensors work reliably and its controllers allow multimodal hand inputs. Furthermore, the HTC Vive supports development in the Unity3D game engine, which has become the standard for the development of virtual reality. The HTC VIVE is a virtual reality headset that consists of a head-mounted display, two wireless handheld controllers, and two lighthouse base stations that emit pulsed infrared lasers. It allows the user to move about and interact with a 3D world using motion-tracked handheld controllers. The VIVE system has two 1080 × 1200 resolution displays, one for each eye. The headset and controllers also have 70 infrared sensors, a gyroscope, and an accelerometer. These sensors, together with the two lighthouses, track the operator's motions with millimetric precision. The operating system is SteamVR, which runs on Microsoft Windows. A USB connection attaches the VIVE system to the computer [63]. Figure 6 is an example of a study in which HTC VIVE head-mounted displays were employed. Figure 6. Power unit based on CAD data visualized in an HTC VIVE head-mounted display [16]. Three of the studies examined employed immersive rooms and CAVE-like systems. It is worth noting that all three were published in 2017, shortly after the previously mentioned launch of accessible head-mounted display releases in 2016.
Prior to the popularity of head-mounted displays, CAVE equipment was the most frequently utilized technology [18]. However, since CAVEs are expensive, have poor immersion, and are not especially portable, they have been increasingly replaced with head-mounted displays. These claims are supported by the authors of [64], who found that in 2022, 28% of industrial presentations used a virtual reality system, whereas just 4% used a CAVE setting. Motion capture devices are also frequently mentioned. Because sensing single postures while handling a virtual object is currently inadequate, motion capture devices are utilized to supplement the detection of virtual-world players and to reduce occlusion problems. Microsoft Kinect and Vicon models were utilized, with the Kinect being used in two different studies by the same research group. Cockpits are regularly cited devices in research on automotive applications. During testing, physical structures equipped with steering wheels, benches, and pedals were combined with virtual reality elements. A cockpit with a head-mounted display device and sensors to detect hand motions was a common combination. In some experiments, vehicles with their original multimedia systems were employed in comparative evaluations of usability testing in real environments. Participants traveled along simulated routes while engaging with the vehicle's interior controls. The use of physical devices, such as seats, steering wheels, pedals, and gear shifters, allows the user to interact with the virtual environment in a similar way to haptic devices, which are capable of increasing the participant's sense of immersion during the test and, in some ways, supporting the tactile feedback of the virtual-reality experience. We observed the use of other hardware with specialized functions, such as Nintendo Wii remotes, Xbox controllers, and Leap Motion, to improve interactions. A wide range of other gadgets, including 3D-printed rigid bodies with markers [54], GoPro cameras [54], and 3D laser scanners [55], were only mentioned in one study each. Table A1 lists the hardware specified in each of the articles reviewed. Software Similarly to what we identified for hardware devices, we discovered the use of a broad variety of editors, programs, engines, and frameworks. The software adopted, like the hardware, is not mutually exclusive; rather, we found that there is typically a combination of various complementary solutions. The software referenced in the articles examined is shown in Table 6. Table 6. The software and engines employed in the research under consideration. Unity 3D, a game production platform that incorporates a game engine, was mentioned by ten studies and was the most widely used product. The authors of [15,16,45] declared that they chose Unity3D because it allows the integration of scripted behavior and offers a simple import workflow for 3D data. Only one study used the game engine 3DVIA Virtools rather than Unity 3D. The authors explained that 3DVIA Virtools was chosen to develop the virtual-reality application because it can be built on top of a sophisticated BMW shader library and production data import workflow, and it has a flexible graphical scripting language that allows the programming and refining of interaction logic at run-time. In terms of CAD and modeling tools, three studies produced by the same research group used CATIA [15,16,45], and Siemens JACK was used in the experiment in [54] for product digitization.
A wide variety of software, such as plugins, editors, and libraries, among many others, was cited by only one study each. Table A1 lists the software specified in each of the articles reviewed. Q3: What are the benefits and challenges of using virtual reality for usability testing and design review in industry? The examined articles highlight different ways in which virtual reality might overcome the constraints of conventional design review and usability testing processes. According to the findings, virtual reality has two key sets of advantages: benefits over conventional CAD models on screens and benefits over physical prototypes. In terms of advantages over CAD on screens, the benefits highlighted include the visualization of data from different angles and on a true scale, increased team collaboration and feelings of engagement, and more intuitive and natural interactions for non-CAD specialists. The benefits over physical prototypes include cost and time savings on redesign and increased safety. These advantages are discussed in depth below. Visualizing Products from Different Viewpoints and on a True Scale Stimulated Novel Insights The most frequently cited benefit of virtual-reality reviews over CAD on a screen is enhanced 3D visualization and manipulation. The conventional review process is often carried out on a computer with the use of CAD tools, on a flat screen. The visualization of CAD on a screen may not always satisfy all of the criteria for functional and ergonomic validations of complicated 3D models [16]. In this situation, virtual reality enables novel modes of visualization and interaction that enhance engineering design reviews [16]. Several studies reported that the opportunity to observe a product from different angles and in more detail generated innovative discoveries [55]. The authors of [41] observed that visualizing planes in a virtual reality environment gave participants a better understanding of the spatial relationships between product components, as well as the interaction space around the assembly line, allowing the design team to understand the operating clearances in real size. The authors of [42] found that relevant information about transport routes or kinematic properties, which is either not modeled in the CAD data or is lost during conversion processes, played an important role in reviews and provided an observation of machine mechanics (e.g., motor-driven mechanisms). The authors of [41] observed that the possibility of viewing and interacting with the geometry of the product (a pump) and the surroundings (an assembly line) on a true scale benefited the team in understanding important viewability problems during a subassembly engagement. The team received kinesthetic and ergonomic information on operator movement via natural engagement with the Wii remote [41]. Overall, we found a consensus that this new mode of visualization facilitated mistake detection. The authors of [16] found that, when compared to a standard CAD software approach on a flat computer screen, participants are more likely to spot faults in a 3D model inside an immersive virtual reality environment. In [41], the team uncovered design flaws and possible solutions that could not be detected or verified using conventional computer tools. Virtual-reality-enabled design review enables users to detect significantly more flaws in a 3D model than a CAD-software-based approach on a PC screen [16].
The benefits of combining fully digital CAD models with the physical components of hybrid prototypes, such as the cockpit gear utilized in the automotive industry, are also emphasized. With the addition of the physical model, designers can evaluate their designs both visually and tactilely. This adds a further physical dimension, allowing designers to not only "see" their designs but also "touch" them, providing designers with simulated interaction solutions in the early stages of design [48]. In [19], it is argued that virtual reality's benefits contribute to the Industry 4.0 and cyber-physical systems goal of linking the physical and digital worlds. Increased Team Collaboration and Feeling of Engagement The advantages of improved collaboration and engagement were also cited frequently in the research reviewed. Industrial design reviews and usability testing are complex processes involving a variety of stakeholders, including designers, engineers, and end users. Computer-aided design is utilized as a communication tool in a conventional review process to transmit design ideas and enable a better common understanding among diverging needs and perspectives [15]. Because virtual reality decreases the possibility of some groups being excluded from the review process, it has the potential to foster collaboration among stakeholders [16]. According to some studies, the high focus that virtual-reality-based reviews provide [52] increases the feeling of team engagement. In [41], it is asserted that in the conventional process, design teams cluster around conference tables with laptops, mobile phones, and paper notes while, at best, one person manipulates the design on a giant 2D screen. Maintaining team engagement and attention becomes more challenging when competing with the distractions of technological devices. The virtual environment allowed the design team to move away from the traditional conference room and into a creative area with fewer distractions. During design discussions in the immersive environment, the team observed increased engagement. Thus, the immersive virtual reality environment enhanced team engagement, which resulted in better discussions and fuller participation from team members in decision-making [41]. Figure 7 shows a design review on a screen and a VR-based process. Figure 7. Design review on a screen and a VR-based process [16]. More Intuitive and Natural Interactions for Non-CAD Specialists According to many studies, virtual-reality-based design evaluations and usability tests are more friendly and intuitive. While CAD software does not allow the intuitive analysis and manipulation of 3D models by users without a CAD or computer science background [15], interaction in virtual reality environments is generally simple and intuitive [16,19], and 3D engineering data can be visualized in virtual reality. Because of the high level of immersion provided by virtual-reality head-mounted displays, conducting design reviews and interacting with 3D models is regarded as more intuitive and "natural" for non-CAD specialists [16]. The authors of [16] observed that the intuitiveness of interactions in a virtual-reality system enabled a considerably faster entry into the design review.
Cost and Time Savings for Redesign When compared to the conventional review process with physical prototypes, the most frequently noted advantages of virtual reality are its cost and time savings. According to the research analyzed, industries may employ virtual prototypes to save money [43,46,48], minimize redesign time, and expedite time-to-market. Prototyping is an essential step in the product development process. However, after building a product model, testing its design and functionality requires time and money. In this context, a virtual prototyping system based on virtual reality technology has the potential to overcome these shortcomings [48]. The authors of [54] also describe the benefits of lowering the time for product design reviews and the number of design and engineering changes, as well as reducing time-to-market and optimizing costs. Immersion in virtual reality may result in cost savings as well as better and/or quicker design review processes. Because physical prototypes and mock-ups may be replaced by their virtual counterparts, virtual reality improves design verifications and the review process, which might contribute to cost savings for manufacturers [13,19]. In [46], it is claimed that the principle of "simultaneous engineering", in which elements are designed and tested virtually concurrently with vehicle development, is achievable with, and potentially strengthened by, the use of virtual reality. The authors use the automotive industry as an example to demonstrate that a type of control could be created early, in parallel with the development of a new car model, and tested within a virtual car model worldwide. Therefore, the cost of redesigning a model can be reduced if the type of control can be changed in the course of development. This capability enables creative departments to produce and test novel concepts without disrupting the conventional flow of product development.
Products in development may benefit from rapid design updates, which save significant time and costs. Virtual reality therefore enables a novel, concrete, and resource-saving design evaluation method with significant application potential, since designers only need to produce the models that need to be tested, which greatly reduces time and costs [48]. Instant feedback and design modification could further improve product quality by making it possible to detect issues at an earlier stage [55]. The authors of [54] discuss the possibility of anticipating potential problems and design changes, and of obtaining precise feedback about human-machine interactions through virtual simulation, before products are manufactured. One study [55] focused on the cost savings associated with reduced travel frequencies, since virtual reality allows physical review meetings to be replaced with immersive technical discussions. A frequent worry when using virtual-reality-based prototypes is whether the correlation with conventional review and testing methods is maintained, as well as the degree of accuracy achieved for usability testing. The reviewed studies indicate that virtual-reality prototypes do not compromise this correlation, i.e., virtual-reality-based usability testing may provide results equal to conventional testing with physical prototypes. Several studies [48,51,66] reported that virtual reality usability testing has a significant connection with physical testing in terms of the metrics collected. The variance of quantitative data, such as operation errors and the time spent completing tasks, was statistically analyzed, and the correlations between physical- and virtual-prototype testing outcomes were validated and reported. It was found [52] that there were no significant differences in UX questionnaire data between virtual reality and the physical prototypes in the field, but that there were correlations between rated presence in the virtual reality system and UX ratings, particularly for reported stimulation. The authors of [48] found that the data and experience from a mixed-reality prototype vehicle were equivalent to those from a fully physical prototype vehicle. It was observed [46] that a physical prototype automobile and a mixed-reality prototype have comparable rated usability and that there is no significant difference between the metrics they collect. Although the authors of [43] do not provide comparable testing methodologies for virtual systems and real-world environments, they also conclude that virtual reality might reduce costs and shorten the time taken for the virtual testing of products in development, based on metrics gathered during supported testing and interviews with participants.
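The correlation checks these studies report are typically of the following form: the same per-task metric (for example, mean task completion time) is collected with a physical and with a virtual prototype, and the association is quantified, often with Pearson's r. A minimal Python sketch, with invented data:

from math import sqrt

physical_times = [42.0, 55.5, 38.2, 61.0, 47.3]  # per-task means, physical prototype (invented)
virtual_times = [45.1, 58.0, 41.0, 63.5, 50.2]   # per-task means, virtual prototype (invented)

def pearson_r(xs, ys):
    # Pearson correlation coefficient between two equal-length samples.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(f"Pearson r = {pearson_r(physical_times, virtual_times):.3f}")

A coefficient close to 1 supports the claim that virtual testing preserves the ordering and spread of the physical-prototype results, although the studies reviewed also compare group means and variances rather than relying on correlation alone.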
Increased Safety for Participants For dynamic usability testing, such as driving a car, virtual testing may provide a substantial benefit, since participants are not subjected to any actual risks. Therefore, situations such as a vehicle collision while driving, or a car accident involving passengers, would not occur in the virtual environment [52,66,67]. Despite all of these benefits, virtual reality has certain drawbacks. According to the research, there are various ways in which virtual reality might impose additional constraints on conventional design review and usability testing processes, limiting its widespread adoption. In fact, the technology still needs to evolve [16], and implementing virtual reality technology may be difficult. The challenges we identified relate to a lack of realism as a result of unnatural tactile and visual senses, latency and registration issues, communication difficulties between teams, and motion sickness and other unpleasant symptoms. In the sections that follow, we analyze these problems. Lack of Realism as a Result of Unnatural Tactile and Visual Senses The difficulties of non-natural interactions between individuals and virtual prototypes owing to visual and tactile constraints were addressed frequently in the research reviewed. The complicated interactions in virtual reality have proven to be major drawbacks in industrial settings [50]. The authors of [15] reported that haptic feedback and multimodal interactions are still problematic and concluded that there is a lack of visualization and interaction techniques that fully harness virtual reality's potential. The authors of [48] found that the movement of participants' hands could not be well simulated and that positioning offsets occurred frequently, while [41] identified that interacting with the geometry using the Wii Remote game controller was too awkward and unnatural to fully investigate participants' assembly inquiries. The authors of [41] also found that the collision detection experience was insufficiently robust to be helpful in their study. Several users reported issues with virtual object feedback when compared to external device motions, such as steering-wheel twisting. Another issue mentioned was that the sense of reach and the dimensions of virtual items do not correspond to physical interaction features, such as buttons or flat surfaces that mimic multimedia screens [46,52]. Given the significance of natural human connection with a virtual interface, haptic devices, such as gloves and suits, may strengthen the sensation of immersion. Incorporating haptic technology to model natural human senses and motion would significantly improve usability testing. In [48], for example, it is suggested that replacing the handle with data gloves could simulate hand movement. As a result, the effectiveness of incorporating haptic devices to improve usability testing is connected to the purpose of the testing, the complexity of the interaction, and the maturity of the virtual item under consideration. Haptic devices should be avoided for usability testing on products that are still in the early stages of development and do not have accurate hand-and-finger interaction. The lack of realism is caused not just by touch sense issues, but also by visual sense issues. In [46], it was observed that readability and representation in a virtual prototype system were troublesome, which the authors attributed to the head-mounted display's low display resolution. Some participants in the tests conducted in [46] complained that they could not view the virtual environment clearly. The study discovered that the image in the virtual environment is not fine enough and is limited by the hardware devices, requiring more dynamic movement behavior and improved graphics resolution from the virtual prototype. Due to complex interaction systems or surroundings with a substantial amount of visual information, high graphical representativeness is essential. Otherwise, only systems with a limited number of interactions may effectively correlate with conventional usability testing [67].
A major issue is that employing virtual-reality technology without an auxiliary device, such as hand-tracking sensors, may influence user perceptions and, as a result, test findings. The adoption of a physical prop, such as a flat wooden table emulating the screen of a multimedia system, is one strategy for providing tactile feedback to a user engaged in a virtual environment [66]. Each participant's perception of a product's depth, reach, and dimensions is spontaneous and instinctive during usability testing with physical prototypes. In virtual reality, however, the user needs to touch certain physical devices in order to handle objects in the virtual world; thus, visual calibration and positioning algorithms are necessary to adjust the environment and the virtual objects so as to provide an accurate user experience. Virtual prototypes pose a challenge to the industry when human interaction that goes beyond visual verification is required. When creating functional virtual prototypes that are designed to offer visual, tactile, and aural feedback, the difficulty is to produce a high-fidelity virtual prototype that has the same features as a physical prototype. Geometric component qualities, such as high-fidelity colors and textures; part-structure animations, such as a vehicle's door opening or a refrigerator door handle being pulled; or even functional touchscreen displays require powerful hardware to process data within the virtual reality system, to ensure a reliable, immersive experience for the immersed individual. Besides unnatural tactile and visual senses, occlusion issues also need to be addressed. It is critical to assess the participant's emotional reaction when designing the qualitative metrics of a usability test. The authors of [52] found that virtual surroundings diminish some sensations throughout an activity, such as user happiness or dissatisfaction. The authors discovered that, while engaged in the virtual environment, the attention of some participants became predominantly focused on the need to accomplish tasks. Because the head-mounted display partially hid the participants' facial behavior, the moderator could not fully observe their emotions. As a result, physical prototypes are advised for tests in which participant behavior must be assessed by facial observation [46].
Latency and Registration Issues
Another commonly cited issue is the latency between head movements and the corresponding update of the displayed scene. The authors of [46] identified registration issues between virtual and physical elements of the environment, which they linked to calibration or tracking errors of the optical hand- and finger-tracking controller. The authors sought to improve tracking by instructing participants on how to adjust the head-mounted display before beginning the trial; however, this adjustment cannot be controlled from the outside by the experimenter. The resolution of the head-mounted display, according to the authors, might also play a role in this scenario. When interacting with mixed physical-virtual prototypes, several study participants mentioned a dimensional discrepancy between the virtual and physical aspects of the prototypes (for example, the cockpit's physical wheel and the virtual air-conditioning control). The necessity of calibrating the position and visibility of virtual objects with respect to physical prototypes was identified as a barrier throughout the research.
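One common way to implement the positioning step described above is a rigid point-set alignment between tracked physical markers and their virtual counterparts; a minimal sketch using the Kabsch (SVD-based) algorithm follows, with hypothetical marker coordinates. This is a generic technique, not necessarily the method used in the reviewed systems.

```python
# Sketch: rigid registration of virtual objects to tracked physical markers
# via the Kabsch algorithm (SVD-based least-squares rotation + translation).
# Marker coordinates are hypothetical illustration data.
import numpy as np

def kabsch_align(physical_pts, virtual_pts):
    """Return rotation R and translation t mapping virtual -> physical."""
    p_cent = physical_pts.mean(axis=0)
    v_cent = virtual_pts.mean(axis=0)
    P = physical_pts - p_cent
    V = virtual_pts - v_cent
    H = V.T @ P                              # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = p_cent - R @ v_cent
    return R, t

# Hypothetical tracked marker positions (meters) on a physical mock-up
phys = np.array([[0.00, 0.00, 0.00], [0.30, 0.00, 0.00],
                 [0.00, 0.20, 0.00], [0.00, 0.00, 0.15]])
# The same markers as currently placed in the virtual scene (misaligned)
virt = np.array([[0.02, -0.01, 0.00], [0.31, 0.01, -0.02],
                 [0.01, 0.19, 0.01], [0.01, -0.01, 0.16]])

R, t = kabsch_align(phys, virt)
residual = np.linalg.norm((virt @ R.T + t) - phys, axis=1)
print("per-marker registration error (m):", np.round(residual, 4))
```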
The main difficulty is the amount of time it takes to calibrate the system, given that the calibration adjustments depend on the user's participation.
Communication Difficulties between Teams
The authors of [52] identified that virtual-reality-based design reviews of complex CAD data often suffer from communication issues between virtual reality users and team members who observe the virtual reality scene from an outside perspective (e.g., on a TV screen). As a consequence, spoken descriptions are often insufficient to express a specific detail of a machinery component. The authors of [52] also discovered that participants voiced less affect in the virtual environment, and [20] found that the social exclusion of virtual reality users sharing the same physical space as colleagues during a design review session has a negative impact on communication and cooperation among team members.
Motion Sickness and Unpleasant Symptoms
While immersed in virtual-reality environments, several people reported unpleasant symptoms, such as nausea or headache. The source of these symptoms, according to the participants, was the lack of shadows and reflections of objects, as well as a delay between body movements and gaze and the response of the virtual environment [43,52]. Virtual-reality-based usability testing therefore requires careful consideration of the testing protocol. Immersion in a virtual world may be the first such experience in some individuals' lives, and the strangeness of wearing a head-mounted display and handheld controllers may impair their confidence in interacting with virtual items and interfaces. When a virtual-reality-based usability testing method is designed, it is crucial to evaluate the participants' health condition.
Conclusions
Industry is under pressure to shorten the time taken for new products to enter the market. Our study found that virtual-reality technology is a powerful tool for enhancing the redesign process in industrial product development. When compared to conventional usability testing and design reviews, virtual-reality technology improves the process by visualizing new angles that stimulate novel insights, increasing team engagement, providing more intuitive interactions for non-CAD specialists, saving redesign costs and time, and increasing participant safety. Virtual-reality-based prototypes still have to address technological challenges, such as a lack of realism owing to unnatural tactile and visual interactions, latency and registration issues, communication difficulties across teams, and unpleasant symptoms. However, a significant technological breakthrough occurred only a few years ago, with the first public release of technologically mature virtual reality equipment. The devices remain relatively expensive and technical constraints exist, but the price of virtual reality equipment has decreased year on year, technological constraints have been reduced, and new features have been developed, resulting in the increased development of VR applications. There has since been significant growth in global research on virtual reality. In terms of inventions, we observed a scenario in which patent applications have boomed since 2016, in a technology space that is rapidly evolving, offering opportunities to enter the area while it is still young. Thus, this exponential growth in patent applications for virtual reality, design review, and usability testing should continue for a few more years.
Previous forecasts estimated that the worldwide virtual reality market would expand from $6.30 billion in 2021 to $84.09 billion in 2028, but the COVID-19 outbreak encouraged the use of virtual reality even further, so this increase might be considerably larger. As a result, an increasing number of companies are contemplating implementing virtual-reality-based design reviews and usability tests without fully comprehending the overall advantages and restrictions. Our findings on inventions, current application fields, methods, and the hardware and software used, as well as on the benefits and challenges of combining virtual reality with conventional design reviews on flat screens or with physical prototypes in usability testing, may serve as a reference for decision-makers and researchers as they continue to develop novel solutions for the industrial product development process.
Funding: The authors thank the National Council for Scientific and Technological Development (CNPq) for financial support. IW is a CNPq technological development fellow (Proc. 308783/2020-4).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Conflicts of Interest: The authors declare no conflict of interest.
Appendix A
Table A1. Methods, hardware, and software referred to in each article reviewed.
[41] Described an industry case study of the use of immersive VR as a general design tool with a focus on the decision-making process; Nintendo Wii Remote; three-walled immersive environment (3 projectors + 2 walls (4 m × 3 m and 3 m × 3 m) and a floor (4 m × 3 m)); infrared-based optical tracking; stereo glasses; surround sound system; TEAMCENTER LIFECYCLE VISUALIZATION 9.1, Siemens PLM Software
[15] Proposed a low-cost multimodal VR-supported tool for design review; HTC Vive; PC; GeForce GTX 1080 8 GB GPU; CATIA; Unity3D; 3dsMax
[16] Described the development and evaluation of a VR-based tool to support engineering design review; HTC Vive; PC; GeForce GTX 1080 8 GB GPU; CATIA; Unity3D; 3dsMax
[45] Discussed a set of application areas for VR in industry and modes of visualization and interaction, and described the implementation of a lightweight VR system for industrial engineering applications; HTC Vive; PC; GeForce GTX 1080 8 GB GPU; CATIA; Unity3D; 3dsMax
[40] Prototyped and tested a potential knowledge-engineering capture and reuse solution, demonstrating real-time user logging using virtual design environments focused on team-based design reviews; System Usability Scale (SUS); full-HD (1920 × 1080) 3D projector; 3.2 m × 1.8 m power-wall projection; active shutter glasses; UbiITS framework; microphones; cameras
[47] Derived the factors for evaluating the usability of virtual reality (VR) contents; unspecified
[42] Presented a set of algorithms to automatically determine the geometrical properties of machine parts based only on their triangulated surfaces; Intel Core i7-3770 CPU 3.4 GHz; Platform for Algorithm Development and rendering (PADrend 1.0); Escript
[19] Addressed the design review process for CPS by introducing a VR-driven concept that takes CPS characteristics into account, such as using data from (previous) product instances in the field as an additional source of information; workstations; HTC VIVE; 3D Unity; Autodesk Forge; Autodesk Fusion 360; Google Firebase
[20] Presented approaches to counteract the social exclusion of VR users in a shared VR space for industrial purposes; Xbox controller; HTC Vive Pro
[51] Proposed a virtual product prototyping system based on the interaction of consumer and producer in terms of user experience and design; HTC Vive; motion capture; pupil tracer; ECG/GSR sensor; Space UI; 3D 360-degree virtual space; Unity 3D; 360 VR images method
[44] Evaluated two new operating design modes and their collaborative metaphors enabling two actors, a design engineer and an end user, to work jointly in a collaborative virtual environment for workstation design; RULA; large four-wall immersive room, 9.60 m long, 3.10 m high and 2.88 m deep; flystick device; a desktop computer with two windows
[48] Utilized the currently popular virtual reality technology to resolve the contradiction between the increasingly complex technologies applied in automobiles and the progressively shorter automotive design and development cycles caused by market pressure; time to complete the task; number of mistakes; t-test; 1:1 cockpit of a 2018 Mercedes-Benz E200L; PC; HTC Vive; Logitech G29 steering-wheel kit; Unity
[54] Proposed a mixed-reality set-up to support human-centered product and process design, where systems and the humans interacting with them are monitored and digitalized to easily evaluate human-machine interactions, with the scope of obtaining feedback for design optimization; Dreyfuss 3D; OWAS/RULA/REBA; human joint angles; ergonomic ratings (factory operations); eye fixation; pupil diameter (PD); gaze plot, heat maps; heart rate (HR); breathing rate (BR); activity (VMU); posture; heart beats per minute; breaths per minute; magnitude of the resultant vector of mean acceleration in three directions; stooping angle on the sagittal plane; Siemens JACK; VICON tracking; VICON Bonita cameras; 3D-printed rigid bodies with markers; Tobii Pro; Zephyr BioHarness; GoPro; XSensor IX500; New Holland T5.120 tractor model cabin
[55] Explored the feasibility of developing VR technologies to reduce environmental impact, drawing from a case study in an automotive company; 3D laser scanner; HTC Vive; Unity3D
[49] Reviewed the usability evaluation methods practiced by industrial researchers while building VR products; systematic literature review
[53] Discussed the use and the potential of virtual reality technology in the industrial environment; FMEA (failure modes and effects analysis); criticality analysis (CA); completion time per trial; expected-final distance; HTC Vive; Unity 3D
[43] Reported insights of their approach aiming at appropriate VR interaction techniques supporting designers, engineers, and management executives optimally in design assessment; System Usability Scale (SUS); intuitiveness; task weight; 55" LCD Full HD; MS Kinect; Apple iPad; 3DVIA Virtools; MS Kinect SDK; MS Speech API
[46] Investigated whether the usability evaluation of a car entertainment system within an MR environment provides the same results as the evaluation of the car entertainment system within a real car; time to complete the task; number of mistakes; System Usability Scale (SUS); readability of information; command display; function controls; driver's seat, a steering wheel, three pedals and individual control panel with gearshift lever knob and RPB from the center console of an Audi A4; Oculus Rift; Leap Motion; Unity3D
[50] Reported insights of their user-centered approach aiming at appropriate VR interaction techniques to support designers, engineers, and management executives optimally in design assessment; System Usability Scale (SUS); intuitiveness; task weight; 55" LCD with Full HD resolution; Microsoft Kinect Sensor; Apple iPad; game engine 3DVIA Virtools; Microsoft Kinect SDK; Microsoft Speech API
[52] Investigated how a VR study context influences participants' user experience responses to an interactive system, with a UX evaluation of the same in-vehicle systems; System Usability Scale (SUS); User Experience Questionnaire (UEQ); Sense of Presence Inventory (ITC); Volvo S90 with a semi-assisted driving system and a parking camera; 9-inch touch-based infotainment system; 12.3-inch digital driver information display; HTC Vive; system for semi-autonomous driving; screen; Dell Precision 5000; LeapMotion; Unity3D
15,940.4
2022-02-08T00:00:00.000
[ "Engineering", "Computer Science" ]
A STUDY OF THE THIRD-ORDER NONLINEAR SUSCEPTIBILITY AND NONLINEAR ABSORPTION OF InAs IN THE MIDDLE INFRARED REGION
The third-order nonlinear susceptibilities χ⁽³⁾ and the nonlinear absorption coefficient of n-type InAs with different degrees of doping are measured at room and liquid-nitrogen temperatures. The values of the third-order nonlinear susceptibility, χ⁽³⁾ ≃ 10⁻⁷ esu, derived from these measurements essentially exceed the values calculated on the basis of a model in which the electron nonlinearity arises from conduction-band nonparabolicity. It is shown that the observed discrepancy is eliminated if the dissipation of the electron energy is taken into account in the calculation. The growth of the four-wave-mixing efficiency in narrow-gap semiconductors is limited by the nonlinear absorption of the interacting waves. It has been found that the nonlinear absorption in InAs is due to free holes that arise as a result of three-photon absorption. The surface breakdown threshold and the nonlinear absorption constant of InAs were measured.
Introduction
Nonlinear optical materials have numerous applications, including photodynamic therapy, nonlinear photonics, 3D optical data storage, frequency-upconverted lasing, and fluorescence imaging [1-5]. One of the most important problems of applied nonlinear optics is the search for media with the largest possible values of the nonlinear susceptibilities. In this regard, semiconductors, as experiments have shown, are among the most promising media [6,7]. The large nonlinearity of semiconductors basically comes from the fact that, owing to their relatively small bandgap E_g, they are characterized by sufficiently low internal fields, which determine the restoring forces acting on the optical electrons. Therefore, even moderately high laser fields already provide a large contribution to the nonlinear electronic polarization.
Literature review and problem statement
The study of cubic susceptibilities is a central problem of nonlinear spectroscopy [8]. The effects due to the cubic susceptibility are the basis of such methods of nonlinear spectroscopy as two-photon spectroscopy and saturation spectroscopy, and they also allow solving an important practical problem, the correction of phase distortions by the four-wave mixing (FWM) method [9]. In the mid-infrared (IR) region of the spectrum, where the most powerful and efficient lasers, including those based on the CO₂ molecule, operate, the implementation of high-efficiency phase-distortion correction by the wave-front reversal method in four-wave mixing (WFR-FWM) is of particular interest, whereas the nonlinear properties of most materials have been studied in the visible and near-infrared regions of the spectrum [5,10-12]. In the work [10], using large single crystals of high optical quality, the optical properties of Ba₂TiSi₂O₈ were systematically investigated, including transmission spectra, refractive indices and nonlinear absorption over a wide wavelength range from 340 to 2500 nm. In this experiment, the magnitude of χ⁽³⁾ was about 10⁻¹³ esu, most likely related to electron-cloud distortion [13]. The nonlinear optical properties of two graphene derivatives, graphene oxide and graphene fluoride, were investigated by means of the Z-scan technique employing 35 ps and 4 ns visible (532 nm) laser excitation [14]. Studies of nonlinear absorption in the infrared region were also carried out in narrow-gap semiconductors [15-18].
These studies have shown that, because of their large linear and nonlinear absorption, the semiconductors InSb and Hg₁₋ₓCdₓTe cannot be used for radiation reflection by four-wave mixing over a wide range of radiation intensities. In this respect, the narrow-gap semiconductor InAs has not been studied sufficiently.
The aim and objectives of the study
The aim is to search for and study media with high nonlinear susceptibility, and to investigate the characteristics that determine the maximum efficiency of four-wave interaction at a wavelength of 10.6 μm. To achieve this aim, the following objectives are considered: 1. Investigation of the physical mechanism of the nonlinear interaction in InAs that determines the reflection in four-wave interaction at a wavelength of 10.6 μm. 2. Identification of the parameters of the nonlinear media that influence the reflection efficiency. 3. Study of the temperature dependences of these parameters.
The measurements of the surface breakdown threshold and of the nonlinear absorption constants in InAs
In the present work, InAs samples with various doping levels have been studied at room and liquid-nitrogen temperatures (Table 1). For the entire application range of semiconductors, it is important to know the limit of their performance with respect to the laser emission intensity. This limit is usually set by the damage threshold of the material. The radiation source in our work was a pulsed TEA CO₂ laser operating in the lowest-order transverse mode, TEM₀₀. The duration of the generation pulse was ~150 ns at the half-height of the leading part and approximately 1.5 μs at the base. Measurements of the surface breakdown threshold of all the samples studied in our work showed that this value lies within the range (3÷4)×10⁷ W/cm². As transmittance measurements show, in InAs samples, unlike wide-bandgap semiconductors (for example, Ge), a noticeable reduction of the transmittance is observed at radiation intensities even lower than the surface breakdown threshold (Fig. 1). This transmittance decrease is reversible, manifests itself at radiation intensities I ≳ 10⁶ W/cm², and is conditioned by processes of nonlinear absorption of the radiation in the investigated semiconductors. Comparing the CO₂-laser photon energy (ħω ≃ 0.117 eV) with the bandgap of InAs (0.35 eV) suggests that the nonlinear absorption is conditioned by a three-photon process. To determine the absorption constants from the transmittance data, we consider the dependence of the transmittance on the intensity, taking linear and nonlinear absorption into account. In the steady-state case, the change of the intensity as the light propagates through the semiconductor in the presence of three-photon absorption can be written as

dI/dz = -αI - ηI³ - δI⁴,   (1)

where α is the linear absorption coefficient, η is the three-photon absorption coefficient, and δ is the absorption coefficient of the free holes appearing as a result of three-photon absorption; δ is connected with η by the ratio

δ = qτη/(3ħω),   (2)

where q is the absorption cross-section for absorption by the free holes and τ is the lifetime of the nonequilibrium carriers.
[Table 1. The main characteristics of the investigated InAs samples.]
In A^III B^V compounds, the absorption cross-section of the free holes is so large [19] that even at moderate laser emission intensities the second term on the right-hand side of equation (1) can be neglected.
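A minimal sketch of extracting δ from intensity-dependent transmittance data via Eq. (1) is given below; the sample length, the Fresnel coefficient, and the "measured" points are invented for illustration and are not the paper's data.

```python
# Sketch: extracting the free-hole absorption constant delta from
# intensity-dependent transmittance, by integrating Eq. (1) with the
# three-photon term neglected: dI/dz = -alpha*I - delta*I**4.
# Sample parameters and "measured" data below are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

alpha = 1.0     # linear absorption, cm^-1 (typical of sample No. 1)
l     = 0.1     # sample length, cm (assumed)
r     = 0.30    # Fresnel reflection at each surface (assumed)

def transmittance(I0, delta):
    """Transmittance vs incident intensity I0 (MW/cm^2)."""
    out = []
    for I in np.atleast_1d(I0):
        sol = solve_ivp(lambda z, y: -alpha * y - delta * y**4,
                        (0.0, l), [(1.0 - r) * I], rtol=1e-8)
        out.append((1.0 - r) * sol.y[0, -1] / I)
    return np.array(out)

# Hypothetical transmittance measurements at several intensities
rng = np.random.default_rng(0)
I0_data = np.array([0.2, 0.5, 1.0, 2.0, 4.0])            # MW/cm^2
T_data = transmittance(I0_data, 0.14) * (1 + 0.02 * rng.standard_normal(5))

popt, pcov = curve_fit(transmittance, I0_data, T_data, p0=[0.05])
print(f"fitted delta = {popt[0]:.3f} cm^5/MW^3 "
      f"(paper reports 0.14 +/- 0.07)")
```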
In this instance, the expression for the transmittance of the samples under three-photon-induced absorption, as a function of the incident intensity I₀, takes the form

T(I₀) = (1-r)² e^{-αl} [1 + (δ/α)(1-r)³ I₀³ (1 - e^{-3αl})]^{-1/3},   (3)

where r is the Fresnel reflection coefficient at the sample surface and l is the sample length. Comparison of the experimental results (Fig. 1) with calculations by formula (3) allows the values of δ to be determined directly. In view of the spread of the experimental data for InAs, the following is obtained: δ = 0.14 ± 0.07 cm⁵/MW³.
Nonlinear reflection in degenerate four-wave mixing
Four-wave mixing is a nonlinear process in which the mixing of three waves in a nonlinear medium generates a fourth wave (Fig. 2). The mechanism of the appearance of an inverted wave in such a scheme is most simply explained on the basis of the holographic interpretation of WFR [20]. Let an arbitrary wave E₃(r), which needs to be reversed, be incident on a nonlinear medium, together with a reference wave E₁(r) with constant amplitude over the cross-section. If the waves E₁(r) and E₃(r) are coherent, they record interference perturbations of the dielectric constant (a hologram) in the nonlinear medium. If we illuminate this hologram from the opposite side by a wave E₂(r) that is exactly counterpropagating to the reference wave E₁(r), the diffracted wave E₄(r) reproduces the reversed replica of E₃(r). In a volumetric nonlinear medium, an interference pattern may also be written by the waves E₂(r) and E₃(r) and read by the wave E₁(r), with the reconstruction of the same wave E₄(r). In this conventional scheme of WFR-FWM, the intensity reflection coefficient R of the wave E₃ into the wave E₄ is connected with χ⁽³⁾ by the relation [20]

R = tan²(2M E₁E₂ l),  M ≡ πωχ⁽³⁾/(cn),   (4)

where c is the speed of light in vacuum, n is the linear refractive index, E₁ and E₂ are the root-mean-square field strengths of the reference waves, l is the length of the medium, and M is the constant characterizing the nonlinearity of the medium. When the medium has linear absorption (with intensity absorption coefficient α) and nonlinear (n-photon) absorption with constants γ_n, the interaction length in (4) is effectively shortened; in particular, when only the linear absorption is taken into account, one finds, up to overall attenuation factors,

R ≈ [2M E₁E₂ (1 - e^{-αl})/α]².   (5)

From this expression it follows that, when only the linear absorption is taken into account, the dependence R = f(I₁) must remain quadratic. The appearance of nonlinear absorption restricts the growth of R as a function of I₁ and, for large I₁, should lead to its decline. The results of the measurements of the dependence of R on I₁ in the InAs samples are shown in Fig. 3. With the growth of I₁, R in both samples first increases quadratically; then R reaches a maximum value and decreases.
The experimental determination of the third-order nonlinear susceptibility of InAs
The constant M, which characterizes the nonlinear coupling of the interacting waves, was determined for each sample from formula (5), using the data on the dependence of R on I₁ at small values of I₁, where nonlinear absorption can be neglected and a quadratic dependence of R on I₁ is observed. It is known that the bandgap of the majority of semiconductors (including InAs) increases as the temperature T drops [19]:

E_g(T) = E_g(0) - aT²/(T + b),   (6)

where E_g(0) is the bandgap at T = 0 K, and a and b are constants. In InAs, E_g(0) = 0.426 eV, and the corresponding value E_g(300 K) is 0.35 eV (≈ 3ħω_CO₂). Thus, by cooling InAs (b ≃ 93 K) to, for example, liquid-nitrogen temperature, E_g(T = 77 K), compared to E_g(300 K), increases to 0.41 eV, which significantly exceeds 3ħω; i.e.,
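As a quick numerical check of the bandgap shift quoted above, Eq. (6) can be evaluated directly; the coefficient a below is inferred from the two bandgap values given in the text rather than taken from a reference.

```python
# Sketch: Varshni-type shift of the InAs bandgap, Eq. (6):
#   E_g(T) = E_g(0) - a*T**2 / (T + b)
# 'a' is inferred here from E_g(0) and E_g(300 K) quoted in the text.
Eg0, Eg300, b = 0.426, 0.35, 93.0           # eV, eV, K

a = (Eg0 - Eg300) * (300.0 + b) / 300.0**2  # ~3.3e-4 eV/K

def Eg(T):
    return Eg0 - a * T**2 / (T + b)

hw = 0.117                                  # CO2-laser photon energy, eV
for T in (300.0, 77.0):
    print(f"T = {T:5.0f} K: E_g = {Eg(T):.3f} eV, "
          f"E_g/(3*hw) = {Eg(T)/(3*hw):.2f}")
# At 77 K, E_g rises to ~0.41 eV, exceeding 3*hbar*omega = 0.351 eV,
# consistent with the weaker three-photon absorption observed on cooling.
```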
nonlinear absorption in InAs may decrease noticeably as the temperature is lowered. Linear absorption in semiconductors is also a function of temperature, but the relation between α and T depends strongly on the emission absorption mechanism. If the absorption is caused by crystalline defects and foreign impurities, α is practically independent of T. If the absorption is due to free carriers, the relation between α and T is determined by the free-carrier scattering mechanism [22], and for practically all scattering mechanisms α drops with decreasing temperature. The influence of the temperature on α and γ_n in InAs was investigated experimentally in samples No. 1 and No. 3. The results for InAs sample No. 1 show that its linear absorption does not change with a drop in temperature (α ≃ 1 cm⁻¹). At the same time, the intensity of the incident emission at which nonlinear absorption manifests itself considerably increases from ~1 MW/cm² at T = 300 K to ~4÷5 MW/cm² at T = 77 K. The obtained results prove that the linear absorption in this sample is conditioned by crystalline defects and foreign impurities, while the reduction of the nonlinear absorption is connected with the increase of the bandgap as the temperature drops. In contrast to sample No. 1, a noticeable drop (approximately 2-fold, from 8.4 to ~4 cm⁻¹) of the linear absorption was revealed in sample No. 3. The intensity at which nonlinear absorption noticeably changes the transmittance of this sample also drops significantly as the temperature decreases. The observed variation of α with temperature in sample No. 3 is in good agreement with the theoretical temperature dependence of the emission absorption by free electrons in semiconductors [22]. The invariance of the nonlinear absorption constant in InAs sample No. 3 was unexpected. The reason for this effect, apparently, is that a donor impurity with a sufficiently high concentration forms an impurity band near the bottom of the conduction band. Its distance from the valence band is E_imp < 3ħω_CO₂, and it has a weaker temperature dependence than E_g. Therefore, the generation of nonequilibrium holes in doped InAs can remain a three-photon process, conditioned by the three-photon transfer of electrons from the valence band to the impurity band. It can be seen from Table 1 that the significant reduction of R with the drop in temperature in sample No. 1 is connected with the reduction, by a factor of 5.5, of the constant M characterizing the nonlinearity of the medium. In sample No. 3, M, vice versa, increased by a factor of 2.5, which, together with the two-fold drop of the linear absorption, caused an increase of R in InAs by a factor of ~30 (Fig. 3). The values of χ⁽³⁾ = Mcn/(πω) obtained from the experimental data in InAs significantly exceed the values of χ⁽³⁾ in these semiconductors caused by the anharmonicity of the motion of the bound electrons (Table 2). On the other hand, it can be argued that the observed FWM reflection is not connected with the generation of free carriers by three-photon absorption in InAs: otherwise, instead of the observed quadratic dependence of R on I₁, a dependence R ∝ I₁⁶ would be observed. The contribution of the thermal nonlinearity mechanism to R, as estimates show, is insignificant (<0.05%). Thus, it can be assumed that the primary mechanism responsible for the FWM reflection in such semiconductors is the nonparabolic shape of the conduction band.
Calculations of χ⁽³⁾ conditioned by the nonparabolic shape of the conduction band in semiconductors were carried out in the classical work [23]. In Fig. 4, the dots represent the experimental values of χ⁽³⁾ obtained for various values of N_e, together with the corresponding values of χ⁽³⁾_YuB in InAs calculated from the theory of [23]. As shown in Fig. 4, the experimental values of χ⁽³⁾ differ significantly from χ⁽³⁾_YuB, and the growth of χ⁽³⁾ as a function of N_e is not monotonic, as it should be according to the theory.
Discussion of the results of investigating the mechanism of the third-order nonlinear susceptibility in InAs
The observed difference between the measured values of χ⁽³⁾ and χ⁽³⁾_YuB may be related to the following. The current scheme for calculating nonlinear optical susceptibilities is based upon an expansion of the density matrix of a system consisting of the matter and the electromagnetic field into a series in terms of perturbation theory. For electronic polarization, the expansion parameter of the perturbation theory is E/E_at. In this case, the change of the distribution function of the electrons in the system is neglected. Accounting for the change of the distribution function in degenerate nonlinear optical effects introduces, into the expansion of the polarization in powers of the field, parameters other than E/E_at, which may even significantly exceed it. It is these parameters that give rise to the significant, even "giant", nonlinearities in the interaction of emission and matter. Therefore, significant nonlinearities can be observed in the response of the matter to macroscopic changes introduced under the action of the field (the generation of current carriers in semiconductors, absorption saturation, changes in the system of energy levels, etc.) and to the accompanying irreversible changes in the system. In particular, the study [24] demonstrates that, when the dissipation of the energy of the free electrons due to their interaction with the crystal lattice, impurity ions, etc. is taken into account, the nonlinear susceptibility conditioned by the nonparabolic shape of the conduction band in degenerate FWM can exceed χ⁽³⁾_YuB considerably, by a factor of the order of τ_E/τ_p, where τ_p is the momentum relaxation time and τ_E is the energy relaxation time. In Table 2, the values of τ_p calculated from the mobility data μ, and also the values of τ_E determined from the data on τ_E/τ_p and τ_p, are presented.
[Table 2. Experimentally measured values of χ⁽³⁾ and χ⁽³⁾_YuB, together with the relaxation times τ_p and τ_E of the investigated samples.]
The obtained data on τ_E/τ_p and τ_E in InAs coincide in order of magnitude with the characteristic values of these quantities in semiconductors [22]. The relaxation times τ_E and τ_p are significantly affected by the carrier scattering mechanism (scattering by ionized impurities, dislocations, optical and acoustic vibrations, etc.). The table shows that with an increase of the electron concentration (and, correspondingly, of the concentration of impurity ions) the momentum relaxation time τ_p decreases. Such a behavior of τ_p as a function of N_e indicates that an increase of the impurity concentration enhances the scattering. The observed dependence of τ_E/τ_p on N_e is apparently related to the fact that τ_E decreases with increasing N_e more rapidly than τ_p. It is known that the band structure of InAs is well described by the Kane model [22], and the conduction-band dispersion can be written as

ε(k)[1 + ε(k)/E_g] = ħ²k²/(2m*),   (8)

where k is the wave vector of an electron and m* is the effective mass at the bottom of the band. According to this expression, the major contribution to the nonlinear susceptibility χ⁽³⁾ caused by the nonparabolic shape of the conduction band is given by the electrons with large k.
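To make the role of the nonparabolic term in Eq. (8) more tangible, the sketch below solves the Kane dispersion for a few wave vectors; the effective mass m* ≈ 0.023 m_e is a standard InAs value assumed here purely for illustration.

```python
# Sketch: Kane dispersion, Eq. (8): eps*(1 + eps/E_g) = hbar^2 k^2/(2 m*).
# Solving the quadratic for eps shows how nonparabolicity grows with k.
# m* and E_g are standard InAs values, used purely for illustration.
import numpy as np

hbar = 1.054571817e-34      # J*s
me   = 9.1093837015e-31     # kg
eV   = 1.602176634e-19      # J

mstar = 0.023 * me          # InAs band-edge effective mass (assumed)
Eg    = 0.35 * eV           # InAs gap at 300 K (from the text)

k = np.linspace(1e8, 6e8, 6)                     # wave vectors, 1/m
Ek = (hbar * k) ** 2 / (2 * mstar)               # parabolic kinetic energy
eps = 0.5 * Eg * (np.sqrt(1 + 4 * Ek / Eg) - 1)  # Kane solution of Eq. (8)

for ki, Eki, ei in zip(k, Ek / eV, eps / eV):
    print(f"k = {ki:.1e} 1/m: parabolic {Eki:.3f} eV, Kane {ei:.3f} eV")
# The growing gap between the two columns is the nonparabolic correction
# that dominates chi^(3) for electrons with large k.
```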
With decreasing temperature, the number of such electrons decreases, and the electrons accumulate at the bottom of the conduction band near its minimum (k = 0), where the second term in expression (8) is negligible. Apparently, the observed decrease of χ⁽³⁾ in the sample with N_e ≃ 2×10¹⁶ cm⁻³ at T ≃ 77 K, relative to χ⁽³⁾ at 300 K, is due to this circumstance, insofar as at T ≃ 77 K the effective density of states in the conduction band is N_eff ~ 2×10¹⁶ cm⁻³, and the accumulation of all electrons near the conduction-band minimum is not yet forbidden by the Pauli principle. With increasing N_e, the accumulation of electrons in the region of k where the second term is insignificant is hampered, in accordance with the Pauli principle. It can therefore be expected that for large N_e the decrease of χ⁽³⁾ with temperature will not be as strong as for N_e ≃ 2×10¹⁶ cm⁻³. Indeed, as the experiments have shown, in the sample with N_e ≃ 1.6×10¹⁷ cm⁻³, χ⁽³⁾ not only did not decrease, but even increased. This growth is apparently due to the fact that the thermal velocity of the charge carriers decreases as the crystal temperature is lowered. When the main mechanism is scattering by impurity ions, a decrease in the thermal velocity of the carriers leads to a stronger interaction of the charge carriers with the ionized impurity atoms, since the duration of each interaction increases, which decreases τ_p.
Conclusions
As a result of the research: 1. It is shown that the narrow-gap semiconductor InAs has a high value of the nonlinear susceptibility, χ⁽³⁾ ≃ 10⁻⁷ esu, and allows a high four-wave-mixing efficiency to be obtained. 2. It is determined that the primary mechanism responsible for the reflection in four-wave mixing in such semiconductors is the nonparabolic shape of the conduction band. 3. Nonlinear absorption is the main factor limiting the growth of the reflection efficiency in four-wave mixing with increasing radiation intensity.
4,562.6
2017-10-30T00:00:00.000
[ "Physics" ]
Spectrum of the Laplace-Beltrami Operator and the Phase Structure of Causal Dynamical Triangulation
We propose a new method to characterize the different phases observed in the non-perturbative numerical approach to quantum gravity known as Causal Dynamical Triangulation. The method is based on the analysis of the eigenvalues and the eigenvectors of the Laplace-Beltrami operator computed on the triangulations: it generalizes previous works based on the analysis of diffusive processes and proves capable of providing more detailed information on the geometric properties of the triangulations. In particular, we apply the method to the analysis of spatial slices, showing that the different phases can be characterized by a new order parameter related to the presence or absence of a gap in the spectrum of the Laplace-Beltrami operator, and deriving an effective dimensionality of the slices at the different scales. We also propose quantities derived from the spectrum that could be used to monitor the running to the continuum limit around a suitable critical point in the phase diagram, if any is found.
I. INTRODUCTION
Causal Dynamical Triangulations (CDT) [1] is a numerical Monte-Carlo approach to Quantum Gravity based on the Regge formalism, where the path-integral is performed over geometries represented by simplicial manifolds called "triangulations". The action employed is a discretized version of the Einstein-Hilbert one, and the causal condition of global hyperbolicity is enforced on the triangulations by means of a space-time foliation. One of the main goals of CDT is to find a critical point in the phase diagram where the continuum limit can be performed in the form of a second-order phase transition. The phase diagram shows the presence of four different phases [2-7], and the hope is that the transition lines separating some of these phases could contain such a second-order critical point. Presently, such phases are identified by order parameters which are typically based on the counting of the total number of simplexes of given types or on other similar quantities (e.g., the coordination number of the vertices of the triangulation). The main motivation of the present study is to enlarge the set of observables available for CDT, trying in particular to find new order parameters and to better characterize the geometrical properties of the various phases at different scales. One successful attempt to characterize the geometries of CDT has been obtained by implementing diffusion processes on the triangulations [9,10]. In practice, one analyzes the behavior of random walkers moving around the triangulations: from their properties (e.g., the return probability) one can derive relevant information, such as the effective dimension felt at different stages of the diffusion (hence at different length scales). In this way, estimates of the spectral dimension of the triangulations have been obtained. In this paper we propose and investigate a novel set of observables for CDT configurations, based on spectral methods, namely the analysis of the properties of the eigenvalues and the eigenvectors of the Laplace-Beltrami (LB) operator. This can be viewed as a generalization of the analysis of the spectral dimension, since the Laplace-Beltrami operator completely specifies the behavior of diffusion processes (see Appendix A for a closer comparison). Still, as we will show in the following, the Laplace-Beltrami operator contains more geometric information than just the spectral dimension.
Nowadays, spectral methods find application in a huge variety of different fields. To name just a few, we mention shape analysis in computer-aided design and medical physics [11,12], dimensionality reduction and spectral clustering for feature selection/extraction in machine learning [13], optimal ordering in the PageRank algorithm of the Google Search engine [14], and connectivity and robustness analysis of random networks [15]. Therefore, the application to CDT is just one more application of a well-known analysis tool. On the other hand, some well-known results which have been established in other fields will turn out to be useful in our investigation of CDT. In the present paper, we limit our study to the LB spectrum of spatial slices. Among the various results, we will show that the different phases can be characterized by the presence or absence of a gap in the spectrum of the LB operator, as happens for the spectrum of the Dirac operator in strong interactions, and we will give an interpretation of this fact in terms of the geometrical properties of the slices. The presence or absence of a gap will also serve to better characterize the two different classes of spatial slices which are found in the recently discovered bifurcation phase [3-6]. Moreover, we will show how the spectrum can be used to derive an effective dimensionality of the triangulations at different length scales, and to investigate quantities useful to characterize the critical behavior expected around a possible second-order transition point. The paper is organized as follows. In Section II we discuss our numerical setup, together with a short review of the CDT approach, summarizing in particular the major features of the phase diagram that will be useful for the discussion of our results. In Section III we describe some of the most relevant properties of the Laplace-Beltrami operator in general, focusing then on its implementation for the spatial slices of CDT configurations and discussing a toy model where the relation between the LB spectrum and the effective dimensionality of the system emerges more clearly. Numerical results are discussed in Section IV. Finally, in Section V, we draw our conclusions and discuss future perspectives. Appendix A is devoted to a discussion of the relation existing between the spectrum of the LB operator and the spectral dimension, defined by diffusion processes as in Ref. [10].
II. A BRIEF REVIEW ON CDT AND NUMERICAL SETUP
It is well known that, perturbatively, General Relativity without matter is non-renormalizable already at the two-loop level [16]. Nevertheless, interpreted in the framework of the Wilsonian renormalization group approach [17], this really means that the gaussian point in the space of parameters of the theory is not a UV fixed point, as happens, for example, for asymptotically free theories. Indeed, Weinberg's conjecture of asymptotic safety of the gravitational interaction [18] states the existence of a UV non-gaussian fixed point, which makes the theory well defined in the UV (i.e., renormalizable), but in a region of the phase diagram not accessible by perturbation theory. Various non-perturbative methods have been developed in the last decades to investigate this possibility, like Functional Renormalization Group techniques [19], Monte-Carlo simulations of standard Euclidean Dynamical Triangulations (DT) [20-23], or Causal Dynamical Triangulations, the latter being the subject of this study.
Monte-Carlo simulations of quantum field theories are based on the path-integral formulation in Euclidean space, where expectation values of any observable O are estimated as averages over field configurations sampled with probability proportional to e^{-S}, S being the action functional of the theory. Regarding the Einstein-Hilbert theory of gravity, the action is a functional of the metric field g_μν, given by¹

S[g_μν] = (1/16πG) ∫ d⁴x √(-g) (R - 2Λ),   (1)

where G and Λ are respectively the Newton and Cosmological constants, while the path-integral expectation values are formally written as averages over geometries (classes of diffeomorphically equivalent metrics)

⟨O⟩ = (1/Z) ∫ D[g_μν] O[g_μν] e^{-S[g_μν]},   (2)

where Z is the partition function. The first step in setting up Monte-Carlo simulations is the choice of a specific regularization of the dynamical variables into play. In the case of gravity without matter fields, the only variable is the geometry itself, which can be conveniently regularized in terms of triangulations, namely collections of simplexes, elementary building blocks of flat spacetime, glued together to form a space homeomorphic to a topological manifold. The simplexes representing (spacetime) volumes in 4-dimensional spaces are called pentachorons, analogous to tetrahedra in 3-dimensional spaces and triangles in 2-dimensional spaces (i.e., surfaces). Besides the general definition, and at variance with standard DT, triangulations employed in CDT simulations are required to satisfy also a causality condition of global hyperbolicity². This is realized by assigning an integer time label to each vertex of the triangulation, in order to partition the vertices into distinct sets of constant time called spatial slices, and by constraining simplexes to fill the spacetime between adjacent slices (i.e., slices with neighbouring integer labels). The resulting triangulation therefore has a foliated structure³, and the simplexes can be classified by a (time-ordered) pair specifying the number of vertices on the slices involved (e.g., the pairs (4,1), (3,2), (2,3) and (1,4) classify all spacetime pentachorons). In order to ensure both the simplicial manifold property and the foliated structure at the same time, spatial slices, considered as simplicial submanifolds composed of glued spatial tetrahedra, need to be topologically equivalent. This basically means that triangulations are always geodetically complete manifolds, and topological obstructions (e.g., singularities) can only be realized in an approximate fashion, with increasing accuracy in the thermodynamic limit (infinite number of simplexes). The numerical results shown in Section IV refer to slices with S³ topology, but other topologies could be investigated as well (e.g., the toroidal one [7,8]).
¹ For simplicity, we are not including manifolds with boundaries, so there is no Gibbons-Hawking-York term in the action.
² The global hyperbolicity condition is equivalent to the existence of a Cauchy surface, the strongest causality condition which can be imposed on a manifold [24].
³ The main reason for restricting to foliated triangulations is that it allows a convenient definition of the analytical continuation from Lorentzian to Euclidean space (see Ref. [1] for details). However, simulations without a preferred foliation in 2+1 dimensions have been built in Ref. [25], showing results similar to the foliated case.
In practice, it is convenient, without loss of generality, to impose a further condition: the squared length of every spacelike link (i.e., connecting vertices on the same slice) is fixed to a constant value a², and the squared length of every timelike link (i.e., connecting vertices on adjacent slices) to a constant value -αa². The constant a takes the role of the lattice spacing, while α represents a genuinely regularization-dependent asymmetry in the choice of time and space discretizations. With this prescription, simplexes in the same class (according to the above definition) are equivalent not only topologically but also geometrically, so that the expression of the discretized action greatly simplifies. Indeed, at the end of the day, the standard 4-dimensional action employed in CDT simulations with S³ topology of the slices and periodic time conditions becomes a functional of the triangulation T, and takes the relatively simple form

S[T] = -(k_0 + 6Δ) N_0 + k_4 N_4 + Δ N_41,   (3)

where N_0 counts the total number of vertices, N_4 counts the total number of pentachorons, and N_41 is the sum of the total numbers of type (4,1) and type (1,4) pentachorons, while k_4, k_0 and Δ are free dimensionless parameters, related to the Cosmological constant, the Newton constant, and the freedom in the choice of the time/space asymmetry parameter α (see Ref. [1] for more details). We want to stress that, even if CDT configurations are defined by means of triangulations, the ultimate goal of the approach is to perform a continuum limit, in order to obtain results describing the continuum physics of quantum gravity. Therefore, the specific discretization used in CDT must be regarded as artificial, becoming irrelevant in the continuum limit. For this reason, simplexes should not be considered as forming the physical fabric of spacetime: eventually, one would like to find a critical point in the parameter space where the correlation length diverges and the memory of the details of the fine structure is completely lost. In standard CDT simulations, configurations are sampled using a Metropolis-Hastings algorithm [26], where local modifications of the triangulation at a given simulation time (i.e., insertions or removals of simplexes) are accepted or rejected according to the probability induced by the action in Eq. (3), complying with the constraints discussed above.
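To illustrate the sampling step just described, the sketch below implements a bare-bones Metropolis accept/reject test on the global counts entering the action of Eq. (3); the coupling values and the count changes of the proposed move are placeholders, and a real CDT code would also enforce the simplicial-manifold and foliation constraints.

```python
# Sketch: Metropolis acceptance test for a local CDT move, using the
# action S(T) = -(k0 + 6*Delta)*N0 + k4*N4 + Delta*N41 of Eq. (3).
# Couplings and the proposed count changes are placeholder values.
import math
import random

k0, k4, Delta = 2.2, 1.0, 0.6   # illustrative couplings, not tuned values

def action(N0, N4, N41):
    return -(k0 + 6.0 * Delta) * N0 + k4 * N4 + Delta * N41

def metropolis_accept(counts, d_counts):
    """Accept a move changing (N0, N4, N41) with prob min(1, exp(-dS))."""
    N0, N4, N41 = counts
    dS = action(N0 + d_counts[0], N4 + d_counts[1], N41 + d_counts[2]) \
         - action(N0, N4, N41)
    return dS <= 0.0 or random.random() < math.exp(-dS)

counts = (10_000, 80_000, 40_000)   # current global counts (made up)
move = (1, 4, 2)                    # an insertion-type move (made up)
print("move accepted:", metropolis_accept(counts, move))
```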
Unlike usual lattice simulations of quantum field theories, the total spacetime volume of CDT triangulations changes after a Monte Carlo update. In order to take advantage of finite-size scaling methods (i.e., extrapolation of results to the infinite-volume limit), it is convenient to control the volume by performing a Legendre transformation from the parameter triple (k_4, k_0, Δ) to the triple (V, k_0, Δ), where the parameter k_4 is traded for a target volume V. In practice, this is implemented by fine-tuning the parameter k_4 to a value that makes the total spacetime or spatial volume fluctuate with mean around a chosen target volume (respectively N̄_4 or N̄_41), and adding to the sample only configurations whose total volume lies in a narrow range around the target one. Moreover, a (weak) spacetime volume fixing to a target value N̄_4 can be enforced, for example, by adding to the action a term of the form ΔS = ε(N_4 - N̄_4)², where ε quantifies how much large volume fluctuations are suppressed. A similar relation holds for fixing the total spatial volume (substituting N_41 for N_4). Fixing a target total spatial volume V_S,tot = N_41/2, one can investigate the properties of configurations sampled at different values of the remaining free parameters k_0 and Δ. The general phase structure of CDT found in the k_0-Δ plane is thoroughly discussed in the literature [1,5,6,27]. Here we only recall some useful facts. Four different phases have been identified, called A, B, C_dS and C_b, as sketched in Fig. 1 (where the position of the transition lines is only qualitative); for the two C phases, the labels dS and b stand respectively for de Sitter and bifurcation. At a qualitative level, configurations in the different phases can be characterized by the distribution of their spatial volume V_S(t), which counts the number of spatial tetrahedra (spatial volume V_S) in each slice as a function of the slice time t. For configurations in the B phase, the spatial volume is concentrated almost in a single slice, leaving the other slices with a minimal volume. For both the C_dS and C_b phases, the spatial volume is peaked at some slice time but then, unlike in the B phase, falls off more gently with t, so that the majority of the total spatial volume is localized in a so-called "blob" with a finite time extension; also in this case, slices out of this blob have a minimal volume. Finally, configurations in phase A are characterized by multiple and uncorrelated peaks in the spatial volume distribution. From these observations, it is apparent that C_dS and C_b are the only physically relevant phases. Indeed, the average spatial volume distribution in the C_dS phase is in good agreement with the prediction for a de Sitter Universe, having an S⁴ geometry after analytical continuation to Euclidean space [28]. The bifurcation phase, instead, is characterized by the presence of two different classes of slices which alternate with each other in the slice time t [5,6]. The transition lines between the different phases (dashed lines in Fig.
1) have been investigated by means of convenient observables. Regarding the B-C_b and A-C_dS transition lines, the definitions employed are based on the observation that changes in the qualitative behavior of the spatial volume distribution V_S(t) occur at almost constant values of Δ or k_0 respectively, suggesting the quantities conjugate to them in the action (3) as candidate order parameters: namely conj(Δ) ≡ (N_4 + N_41 - 6N_0)/N_4 for the B-C_b transition and conj(k_0) ≡ N_0/N_41 for the A-C_dS transition. Finite-size scaling computations using these observables as order parameters suggest a first-order nature for the A-C_dS transition, while the B-C_b transition appears to be of second order [27]. The definition of the observables employed as order parameters for the C_b-C_dS transition is more involved [5,6]: in the C_b phase, one of the two classes of spatial slices is characterized by the presence of vertices with a very high coordination number; also in this case there are hints of a second-order transition, even if the results might depend on the topology chosen for the spatial slices [7]. Global counts of simplexes, like those entering the definitions of conj(Δ) and conj(k_0), are not sufficient to clearly distinguish the different geometrical properties of the various phases. From this point of view, the spectral dimension D_S(τ) (see Appendix A for more details) is probably one of the few useful probes available up to now for the geometrical structure of CDT configurations. It is basically a measure of the effective dimension of the geometry at different stages of the diffusion process, and it has made it possible to demonstrate that, in the bulk of configurations in the de Sitter phase, the spectral dimension tends to a value D_S ≃ 4 for large diffusion times [10]. In the following, we will show how the analysis of the spectrum of the LB operator, which is discussed in the following section, gives access to new classes of observables, and how some clear characteristic differences among the various phases emerge in this way. The code employed for this study is a home-made C++ implementation of the standard CDT algorithm discussed in Ref. [1], which was checked against many of the standard results that can be found in the CDT literature. We performed simulations with parameters chosen as shown in Fig. 1 by points marked with a star symbol and reported also in Table I; for later convenience, four points, each lying deep inside one of the four phases, have been labeled by a letter: a, b, c_dS and c_b.
[Table I caption: simulation points and the phases in which they are contained. Some of the points are labeled also by a letter for later convenience. The assignment of simulation points to the different phases refers to the total volumes fixed in our runs (N_41 = 40k and 80k).]
For most simulation points we have performed simulations with two different total spatial volumes, V_S,tot = 20k and V_S,tot = 40k, adopting a volume-fixing parameter ε = 0.005; we have verified that our results are independent of the actual prescription used.
III.
THE LAPLACE-BELTRAMI OPERATOR
The LB operator, usually denoted by the symbol -Δ, is the generalization of the standard Laplace operator. Its specific definition depends on the underlying space and on the algebra of functions on which it acts. For a generic smooth Riemannian manifold (M, g_μν), the Laplace-Beltrami operator acts on the algebra of smooth functions f ∈ C^∞(M) in the form [29]

-Δf = -(1/√g) ∂_μ(√g g^μν ∂_ν f) = -g^μν (∂_μ∂_ν - Γ^α_μν ∂_α) f,   (4)

where g is the metric determinant, g^μν is the inverse metric and Γ^α_μν are the Christoffel symbols. It is easily shown that -Δ is invariant with respect to isometries. Furthermore, since it is positive semi-definite, a set of eigenvectors B_M solving the eigenvalue problem -Δf = λf is an orthogonal basis for the algebra C^∞(M, R); in the following we will refer to such sets as spectral bases, which, for convenience and without loss of generality, we will always consider orthonormal. A spectral basis can then be used to define the Fourier transform as a basis change from real to momentum space (e.g., sines and cosines in R^n, or spherical harmonics on S²), while the eigenvalues associated to each eigenspace contain information about the characteristic scales of the manifold. We will now elaborate further on the interpretation of the spectrum of eigenvalues, considering a diffusion process on a generic manifold M described by the heat equation

∂_t u(x, t) = Δ u(x, t).   (5)

We can expand the solution in a spectral basis B_M = {e_n | λ_n ∈ σ_M, λ_{n+1} ≥ λ_n} associated to the spectrum of (increasingly ordered) eigenvalues,

-Δ e_n = λ_n e_n,   (6)

writing

u(x, t) = Σ_n c_n(t) e_n(x),   (7)

so that Eq. (5) is transformed (by orthogonality) into a set of decoupled equations

dc_n(t)/dt = -λ_n c_n(t)  ⇒  c_n(t) = c_n(0) e^{-λ_n t}.   (8)

In the form of Eq. (8), the geometric role of the eigenvectors in the diffusion process is evident: λ_n represents the diffusion rate of the mode e_n(x), so that the smallest eigenvalues are associated to the eigenvectors along the slowest diffusion directions, and vice versa. In this specific sense, the spectrum σ_M encodes information about the characteristic scales of the manifold, while the set of eigenstates B_M identifies all the possible diffusion modes and forms a basis for the algebra of functions on the manifold. Similar considerations can be applied to the problem of wave propagation on the manifold, where the heat equation is replaced by the wave equation; this is the reason behind the famous idea of "hearing the shape of a drum" [30].
The definition of the Laplace-Beltrami operator can easily be extended to more general algebras, like the graded algebra of differential forms or the algebra of functions on a graph [31,32], the latter being of particular importance in our discussion since, as discussed below, it allows us to implement straightforwardly the spectral analysis on CDT spatial slices, by means of their associated dual graphs. An undirected graph G [33] is formally a pair of sets (V, E), where V contains the vertices, which assume the role of lattice sites, whereas the set of edges, E ⊂ V × V, is a symmetric binary relation on V encoding the connectivity between vertices in the form of pairs of vertices {(v_i, v_j)}. The reason why, in this first study, we choose to apply spectral methods to the analysis of the geometry of spatial slices only is that spatial tetrahedra have all link lengths equal to the spatial lattice spacing a, so that the distance between the centers of any two adjacent tetrahedra is the same; therefore, it is possible to represent spatial slices faithfully by dual undirected and unweighted graphs, where the vertex set is the set of tetrahedra, and the edge set is the adjacency relation between tetrahedra. The algebra on which the Laplace-Beltrami operator acts can be taken to be that of the real-valued functions f: V → R, which can be represented as the vector space R^N (where N = |V|), once an ordering of the vertices i → v_i ∈ V ∀i ∈ {0, 1, ..., N-1} has been arbitrarily chosen, without loss of generality. In this representation the Laplace-Beltrami operator becomes a matrix, named the Laplace matrix, defined as

L = D - A,

where D is the (diagonal) degree matrix, whose element D_ii ≡ |{e ∈ E | v_i ∈ e}| counts the number of vertices connected to the vertex v_i, while A is the symmetric adjacency matrix, whose element A_ij is 1 only if the vertices v_i and v_j are connected (i.e., {v_i, v_j} ∈ E) and zero otherwise. For instance, the graph associated with a one-dimensional hypercubic lattice with N sites and periodic boundary conditions corresponds to D = 2·1_{N×N} and A_ij = δ_{i,(j+1) mod N} + δ_{i,(j-1) mod N}, while the Laplace matrix can be read off as the lowest-order approximation of the Laplace-Beltrami operator obtained by evaluating functions on the lattice sites:

-Δf(x_i) ≈ [2f(x_i) - f(x_i + a) - f(x_i - a)]/a² = (1/a²) Σ_j L_ij f(x_j),

where a is the lattice spacing and x_i = ia. Notice that, since any tetrahedron of a CDT spatial slice is adjacent to exactly 4 neighboring tetrahedra, the dual graphs are 4-regular (i.e., each vertex has degree 4), so that the adjacency matrix suffices to compute eigenvalues and eigenvectors (L = 4·1 - A); moreover, it is sparse. In practice, we build and save the graph associated to each slice in the adjacency-list representation. Being already a memory-efficient storage of the adjacency matrix of the graph, these structures can be directly fed to any numerical solver optimized for the computation of eigenvalues and eigenvectors of sparse, real and symmetric matrices. The spectra and eigenvectors analyzed in the present paper have been obtained using the 'Armadillo' C++ library [34] with Lapack, Arpack and SuperLU support for sparse matrix computation.
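As an illustration of the pipeline just described, the sketch below builds the sparse Laplace matrix from an adjacency list and extracts the low-lying spectrum; it uses Python/SciPy/NetworkX rather than the Armadillo C++ stack mentioned above, and a random 4-regular graph as a stand-in for the dual graph of a real spatial slice.

```python
# Sketch: L = D - A for the dual graph of a spatial slice, followed by
# the lowest eigenvalues and the Cheeger bounds on connectivity.
# Uses SciPy/NetworkX instead of the Armadillo C++ stack the text
# mentions; the random 4-regular graph is a stand-in for real CDT data.
import numpy as np
import networkx as nx
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def laplace_matrix(adj_list):
    """Sparse L = D - A from an adjacency list {vertex: [neighbors]}."""
    n = len(adj_list)
    rows = [v for v, nbrs in adj_list.items() for _ in nbrs]
    cols = [w for nbrs in adj_list.values() for w in nbrs]
    A = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
    D = sp.diags(np.asarray(A.sum(axis=1)).ravel())
    return (D - A).tocsc()

d = 4  # dual graphs of CDT slices are 4-regular
G = nx.random_regular_graph(d, 2000, seed=1)
adj = {v: list(G.neighbors(v)) for v in G.nodes}

L = laplace_matrix(adj)
# 10 lowest eigenvalues via shift-invert around a small negative sigma
evals = np.sort(eigsh(L, k=10, sigma=-0.01, which="LM",
                      return_eigenvectors=False))
lam1 = evals[1]  # spectral gap (the 0-th eigenvalue is ~0)
print("lowest eigenvalues:", np.round(evals, 4))
print(f"Cheeger bounds: {lam1/2:.3f} <= h(G) <= {np.sqrt(2*d*lam1):.3f}")
```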
By solving the eigensystem for the LB spectrum, we can easily obtain the eigenvectors as a side product. Even if the spectrum of a graph does contain much geometric information, alone it is not capable of completely characterizing geometries, but only classes of isospectral graphs. Conversely, the joint combination of eigenvalues and eigenvectors yields complete information on the graph, decomposed in a way useful for the analysis of geometries.

A. General properties of the eigenvalues of the Laplace matrix on graphs

Here we will describe some results from spectral graph theory that allow us to extract the information mentioned above. For convenience, we will always consider the basis of eigenvectors B_G = {e_n} to be real and orthonormal, since in this case the spectral theorem for real symmetric matrices applies.

First of all we observe that, if no boundary is present, the Laplace matrix always has the zero eigenvalue, with a multiplicity equal to the number of connected components. For graphs made of a single connected component, any eigenfunction associated to the zero eigenvalue is simply a multiple of the uniform vector e_0 = (1/√|V|)(1, . . ., 1). Furthermore, the sum of the components of each eigenvector e_n, with the exception of e_0, is zero, since Σ_{v∈V} e_n(v) = (e_n, √|V| e_0) = 0 by orthogonality of the chosen basis B_G. In the following, we will only discuss properties of graphs with a single connected component, like the ones occurring in CDT.

Spectral gap and connectivity

As argued above, geometric information about the large scales comes from the smallest eigenvalues and associated eigenvectors. The 0-th eigenvalue has a topological character, and in the general case its multiplicity tells us how many connected components the graph is composed of, but for connected graphs its role is trivial and uninteresting.

Arguably the most interesting eigenvalue is the first non-zero one, λ_1, which, depending on the context, is called the spectral gap or algebraic connectivity. The latter name comes from the observation that the larger the spectral gap λ_1, the more the graph is connected.

A measure of connectivity for a compact Riemannian manifold M is given by the Cheeger isoperimetric constant h(M), defined as the minimal area of a hypersurface dividing the manifold, normalized by the smaller of the two volumes it separates:

h(M) = inf_A Area(∂A) / min(vol(A), vol(M∖A)),

where the infimum is taken over all possible connected submanifolds A. For a graph G = (V, E), the Cheeger constant is usually defined by

h(G) = min_{A ⊂ V, 0 < |A| ≤ |V|/2} |∂A| / |A|,

where ∂A is the set of edges connecting A with V∖A. The relation between the Cheeger constant and the spectral gap for a graph G where all vertices have exactly d neighbours is encoded in the Cheeger inequalities

λ_1/2 ≤ h(G) ≤ √(2 d λ_1). (13)

This property of the spectral gap is interesting for the analysis of the geometries of slices in CDT since, as we will see in the next section, it highlights different behaviors for the various phases.

B. Eigenvalue distribution and a toy model

When one considers the whole spectrum of the LB operator, two particularly interesting quantities are the density ρ(λ), defined so that ρ(λ) dλ gives the number of eigenvalues found in the range [λ, λ+dλ], and its integral n(λ), which gives the total number of eigenvalues below a given value λ.
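Both quantities are straightforward to estimate numerically from a sorted list of eigenvalues; the following small Python sketch (with a synthetic spectrum standing in for real slice data) makes the definitions operational.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = np.sort(rng.uniform(0, 8, size=5000))      # placeholder sorted spectrum

def n_of_lambda(lam_sorted, x):
    # counting function n(lambda): number of eigenvalues <= x
    return np.searchsorted(lam_sorted, x, side='right')

# binned estimate of the density rho(lambda): counts per unit lambda
counts, edges = np.histogram(lam, bins=80)
rho = counts / np.diff(edges)

print(n_of_lambda(lam, 1.0), rho[:5])
```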
Both functions can be defined for single configurations (spatial slices) or can be given as average quantities over the Euclidean path integral ensemble. As we shall see, the latter quantity, n(λ), will prove particularly useful to characterize the properties of triangulations at different scales. It is an increasing function of λ and its inverse is simply the n-th eigenvalue λ_n. We will usually show λ_n as a function of n since, when considering a sample of configurations, taking the average of λ at fixed (integer) n is easier. There are various well known results regarding the two quantities above, most of them involving the LB operator on smooth manifolds. In particular, the Weyl law [35,36] gives the asymptotic (large λ) behavior of n(λ):

n(λ) ≃ (ω_d V / (2π)^d) λ^{d/2}, (14)

where V is the volume of the manifold (which is assumed to be finite, with or without a boundary), d is its dimensionality, and ω_d is the volume of the d-dimensional ball of unit radius. As we shall better discuss below, the Weyl law, even if asymptotic, is generally expected to hold with good approximation in the range of λ for which one is not sensitive to the specific infrared properties (i.e. shape, boundaries and/or topology) of the manifold. How violations of the Weyl law emerge, and how they can be related to a sort of effective dimension at a given scale, will be one of the main points of our discussion.

In the following we shall consider the LB spectrum computed on discretized manifolds. It is therefore useful to start by analyzing a simplified and familiar model, consisting of a regular and finite 3-dimensional cubic lattice, with respectively L_x, L_y and L_z sites along the x, y and z directions. All lattice sites are connected with 6 nearest neighbor sites, with periodic boundary conditions in all directions: this is therefore the discretized version of a 3-dimensional torus. The Laplacian operator can be simply discretized on this lattice and its eigenvectors coincide with the normal modes of a corresponding system of coupled oscillators: they are plane waves having wave number k = (k_x, k_y, k_z), with

k_i = 2π m_i / L_i,

and m_i integers such that −L_i/2 < m_i ≤ L_i/2, so that

λ_m = Σ_i 2[1 − cos(2π m_i/L_i)] ≃ Σ_i (2π m_i/L_i)²  for small eigenvalues.

Determining n(λ) for a given λ now reduces to counting how many vectors m exist such that λ_m ≤ λ. That corresponds to finding the triplets of integer numbers, i.e. the cubes of unit side, within the ellipsoid of semiaxes

R_i = (√λ / 2π) L_i,

subject to the constraint −L_i/2 < m_i ≤ L_i/2. The latter constraint expresses the particular (cubic) discretization that we have adopted for the 3-dimensional torus, i.e. the structure of the system at the UV scale: if λ is low enough so that R_i ≪ L_i ∀ i, then we are not sensitive to such scale. On the other hand, the discretized structure of the eigenvalues expresses the finiteness of the system, i.e. the properties of the system at the IR scale: if we have also R_i ≫ 1 ∀ i, then we are not sensitive to such scale either, and the counting reduces approximately to estimating the volume of the ellipsoid, so that

n(λ) ≃ (4π/3) R_x R_y R_z = V λ^{3/2} / (6π²),

which is nothing but the Weyl law for d = 3. In Fig. 2 we show the exact distribution of λ_n as a function of n/V, for various choices of L_x, L_y and L_z. The thick line represents the Weyl law prediction, λ = (6π² n/V)^{2/3}. When n/V → 1, all systems show similar deviations from the law, which are related to the common structure at the UV scale. The Weyl law is a very good approximation for lower values of n/V, as expected, and actually down to very small values of n/V for the symmetric lattice where L_x = L_y = L_z = 50.
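The toy-model counting can be reproduced directly; the sketch below (Python, with illustrative lattice sizes of our own choosing) enumerates the exact lattice spectrum and compares λ_n against the d = 3 Weyl prediction λ = (6π² n/V)^{2/3}.

```python
import numpy as np
import itertools

Lx, Ly, Lz = 15, 15, 60
V = Lx * Ly * Lz

def lam(mx, my, mz):
    # lattice dispersion: sum_i 2(1 - cos k_i) with k_i = 2 pi m_i / L_i
    return sum(2 - 2 * np.cos(2 * np.pi * m / L)
               for m, L in zip((mx, my, mz), (Lx, Ly, Lz)))

spectrum = np.sort([lam(*m) for m in
                    itertools.product(range(Lx), range(Ly), range(Lz))])
n_over_V = np.arange(1, V) / V                 # skip the zero mode
weyl = (6 * np.pi**2 * n_over_V) ** (2.0 / 3.0)

# in the low-lambda window the exact spectrum should track the Weyl law
for i in (10, 100, 1000):
    print(f"n/V = {n_over_V[i]:.4f}:  lambda_n = {spectrum[i+1]:.4f},"
          f"  Weyl = {weyl[i]:.4f}")
```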
For the asymmetric lattices, instead, some well structured deviations emerge at low n/V, where λ follows a Weyl-like power law which is typical of lower dimensional models and can be easily interpreted as follows. For the lattice with L_x = L_y = l = 15 and L_z = 600, one does not find any eigenvalue with m_x ≠ 0 or m_y ≠ 0 as long as λ < 4π²/l² ≃ 0.175; therefore in this range the distribution of eigenvalues is identical to that of a one-dimensional system, for which n/V ∝ λ^{1/2}, i.e. λ ∝ (n/V)². For λ > 4π²/l² also eigenvalues for which m_x and/or m_y are non-zero appear, and their distribution goes back to the standard 3-dimensional Weyl law. Making a wave-mechanics analogy, at low energy only longitudinal modes are excited, while transverse modes are frozen until a high enough energy threshold is reached. The point where one crosses from one power law behavior to the other brings information about the size of the shorter transverse scale. Similar considerations apply to the lattice L_x = 3, L_y = 75 and L_z = 600, which has three different and well separated IR scales: in this case one sees a one-dimensional power law for small n/V, which first turns into a two-dimensional one as modes in the y-direction start to be excited, and finally ends up in the standard 3d Weyl law when also modes with m_x ≠ 0 come into play.

The argument above can be rephrased at a more general level. Suppose we have a D-dimensional manifold where d "transverse" dimensions are significantly shorter than the other D − d "longitudinal" dimensions, with a typical transverse scale l. As long as one considers small eigenvalues, the modes in the transverse directions will not be excited, so that the counting of eigenvalues will be given by the Weyl law for a (D − d)-dimensional manifold. The change from one regime to the other will take place when the transverse directions get excited for the first time, i.e. at λ ≃ π²/l² (the actual prefactor depends on the details of the shorter dimension), which corresponds to n ∝ V l^{−D}, with a proportionality constant which depends only on the details of the short transverse scales and is independent of the details of the longer scales. Therefore, different manifolds sharing the same structure at short scales, associated with an effective dimensional reduction, lead to a distribution λ_n where the change from one power law behavior to the other takes place at the same point in the (n/V)-λ plane, V being the global volume of the manifold. The value of n/V, being proportional to l^{−D}, brings information about the size of the short scale.

To better illustrate the concepts above, in Fig. 3 we show the distribution of λ_n as a function of n/V for three different choices of L_x, L_y and L_z. The curves obtained for (L_x, L_y, L_z) = (3, 75, 600) and (L_x, L_y, L_z) = (3, 75, 1200) fall exactly onto each other: their short scale structure is the same, and the function n(λ) differs only in the number of modes counted along the large direction L_z; this difference disappears when one considers the scaling variable n/V, leading to a perfect collapse. The collapse instead is not perfect when one considers the lattice (L_x, L_y, L_z) = (3, 15, 600), which has a different "intermediate" scale: moving from large to small n/V, the turning point from dimension 3 to dimension 2 is the same as for the two other lattices, however the turning point from dimension 2 to dimension 1 takes place earlier, because L_y is shorter.
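Anticipating the effective-dimension formula of Eq. (17) below, the crossover just described can be read off numerically from the logarithmic slope of the toy spectrum; the sketch below (Python, with illustrative lattice sizes) shows the running from d_EFF ≃ 1 at large scales to d_EFF ≃ 3 at short scales for a lattice with two short transverse directions.

```python
import numpy as np
import itertools

Lx, Ly, Lz = 4, 4, 400
V = Lx * Ly * Lz
spec = np.sort([sum(2 - 2 * np.cos(2 * np.pi * m / L)
                    for m, L in zip(mm, (Lx, Ly, Lz)))
                for mm in itertools.product(range(Lx), range(Ly), range(Lz))])[1:]

# collapse degenerate eigenvalues so that log(lambda) is strictly increasing
lam_u, counts = np.unique(np.round(spec, 10), return_counts=True)
n_over_V = np.cumsum(counts) / V

# running effective dimension: d_EFF = 2 dlog(n/V) / dlog(lambda)
d_eff = 2 * np.gradient(np.log(n_over_V), np.log(lam_u))

for target in (0.005, 0.05, 0.5):
    i = np.argmin(np.abs(n_over_V - target))
    print(f"n/V ~ {n_over_V[i]:.3f}  ->  d_EFF ~ {d_eff[i]:.2f}")
```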
The possible examples which one can discuss within the toy model are quite limited. For instance, one cannot consider the case in which there are points where the manifold branches into multiple connected ramifications, something which in general can lead to an increase, instead of a decrease, of the effective dimension. However, extrapolating the arguments given above, we can conjecture the following. D-dimensional manifolds having different overall volumes and shapes, but sharing a similarity in the structures which are found at intermediate and short scales, will lead to similar (i.e. collapsing onto each other) curves when λ_n is plotted against n/V, V being the total volume of the manifold. Moreover, the power law taking place at a given value of n/V will give information about the effective dimensionality d_EFF of the manifold at a scale of the order (n/V)^{−1/D}, with

d_EFF = 2 d log(n/V) / d log λ_n. (17)

This kind of information is similar to what is obtained by implementing diffusive processes to measure the spectral dimension.

IV. NUMERICAL RESULTS

In this section we present results regarding mostly the spectrum of the LB operator defined on spatial slices, while a detailed discussion regarding the eigenvectors is postponed to a forthcoming study. We performed the analysis on spatial slices of configurations in each phase; in particular, almost all the results shown come from simulations running deep into each phase, at the points circled and labeled by a letter in Fig. 1 and in Table I. While the total spatial volume has been fixed in each simulation to a target value, the spatial volume of single slices, V_S, can vary greatly from one slice to the other (apart from phase B). That will permit us to access the dependence of the spectrum on V_S, information that will be very important for many aspects. As discussed above, each spatial slice will be associated with a 4-regular undirected graph, with each vertex of the graph corresponding to a spatial tetrahedron. For this reason, in the following discussion we will frequently borrow concepts and terminology from graph theory.

We will first look at the low lying part of the spectrum, show how the transition from one phase to the other can be associated to the emergence of a gap in the spectrum, and discuss what that means in terms of the geometrical properties of the triangulations. We will then turn to results regarding the whole spectrum and show how one can obtain information on the effective dimension of the geometry at different scales. Finally, we will describe two methods to visualize graphs and apply them to show the appearance of spatial slices.

A. The low lying spectrum and the emergence of a gap

Apart from the zero eigenvalue, λ_0 = 0, the remaining eigenvalues will fluctuate randomly from one configuration to the other and, moreover, their distribution will depend on V_S in a well defined way that we are going to discuss later on.
As an example, in Fig. 4 we show the distribution of λ_1 and λ_3 on a set of around 3 × 10³ slices of approximately equal volume V_S ≃ 2300 in the C_dS phase. Therefore, even though the spectrum of each spatial slice is intrinsically discrete (because of the finite number of vertices making up the associated graph), it makes sense to define a continuous distribution ρ(λ), assigned so that ρ(λ)dλ gives back the number of eigenvalues which are found on average in the interval [λ, λ+dλ]. In general ρ(λ) will be a function of the bare parameters chosen to sample the triangulations and, for fixed parameters, of the spatial volume V_S of the chosen slice. In Figs. 5 and 6 we show the low lying part of the distribution ρ(λ) obtained from simulations performed respectively in the C_dS and B phases, selecting in each case three different ranges of spatial volumes. In order to focus just on the low part of the spectrum, we have limited the input for ρ to just the first few eigenvalues in each case (n ≤ 100).

A striking difference between the two phases emerges. In the B phase there is a gap ∆λ = λ_1 ≃ 0.1 which does not disappear and is practically constant as the spatial volume V_S grows, i.e. as one approaches the thermodynamical limit. This gap is absent in the C_dS phase, where the distribution of the first 100 eigenvalues is instead more and more squeezed towards λ = 0 as V_S grows. The presence or absence of a gap in the spectrum is a characteristic which distinguishes different phases in many different fields of physics: think for instance of Quantum Chromodynamics, where the absence/presence of a gap in the spectrum of the Dirac operator distinguishes between the phases with spontaneously broken/unbroken chiral symmetry. Let us discuss what is the meaning of the gap in our context.

Graphs which maintain a finite gap as the number of vertices goes to infinity are known as expander graphs [37] and play a significant role in many fields, e.g. in computer science. They are characterized by a high connectivity, i.e. the boundary of every subset of vertices is generically large. Such a high connectivity is usually associated with a degree of randomness, i.e. lack of order, in the connections between vertices: for instance, random regular graphs are expanders with high probability [38]. The strict relation between high connectivity and the presence of a finite gap in the spectrum is also encoded in the Cheeger inequalities, see Eq. (13).

The property which is maybe most relevant to our context is the fact that the diameter of an expander, defined as the maximum graph distance between any pair of vertices, does not grow more than logarithmically with the total number of vertices [39,40]. Therefore, in this phase the spatial slices do not develop a well defined geometry, since the size (diameter) of the Universe remains small as the volume tends to infinity, a fact described also in previous CDT studies in terms of a diverging Hausdorff dimension. This fact can be easily interpreted in terms of diffusive processes: as argued above (see Section III), the value of the spectral gap, λ_1, can be interpreted as the inverse of the diffusion time of the slowest mode; the fact that the time to diffuse through the whole Universe stays finite means that its size is not growing significantly.
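The contrast between an expander-like, gapped geometry and an extended one can be illustrated on toy graphs; in the Python sketch below (our own construction, using networkx), the spectral gap of 4-regular random graphs stays roughly constant as the number of vertices grows, while for a ring (an extended one-dimensional geometry) it closes as ~ 1/N².

```python
import numpy as np
import networkx as nx

for N in (100, 400, 1600):
    Gr = nx.random_regular_graph(4, N, seed=0)   # expander-like, "B-phase-like"
    Gc = nx.cycle_graph(N)                       # extended 1d geometry
    gap_r = np.linalg.eigvalsh(
        nx.laplacian_matrix(Gr).toarray().astype(float))[1]
    gap_c = np.linalg.eigvalsh(
        nx.laplacian_matrix(Gc).toarray().astype(float))[1]
    print(f"N = {N:5d}:  random 4-regular gap = {gap_r:.4f},"
          f"  ring gap = {gap_c:.6f}")
```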
On the contrary, according to the arguments discussed in Section III B, for a graph representing a standard manifold having a finite effective dimension on large scales, one expects that the number of eigenvalues found below any given λ should grow proportionally to the volume V_S, n ∝ V_S λ^{d_EFF/2}, see for instance Eq. (17). That means that the gap must go to zero as V_S → ∞ and, moreover, that a finite normalized density of eigenvalues, ρ(λ)/V_S, must develop around λ = 0 (a condition stronger than the simple absence of a gap: one might have situations in which isolated quasi-zero eigenvalues develop while the continuous part of the spectrum maintains a gap; think for instance of two expander graphs connected by a thin bottleneck). Instead, as will be shown in more detail below, the presence of a spectral gap for slices in the B phase indicates that the effective dimension is indeed diverging at large scales, in agreement with the high connectivity property.

As an independent check, we computed the maximum distance from a randomly chosen vertex to all other vertices in the graph (a quantity usually called the eccentricity of the vertex), iterating the procedure for 200 different starting vertices for each slice in the C_dS and B phases; a minimal sketch of this check is given below. The maximum eccentricity in a graph corresponds to its diameter, so the eccentricity of a random vertex is actually a lower bound on the diameter. Therefore the results, which are shown in the form of a scatter plot in Fig. 7, are consistent with a diameter which, for sufficiently large volumes, grows as a power law of V_S in phase C_dS, while on the contrary it seems to reach a constant or to grow at most logarithmically in the B phase.

The properties of slices in phase A are quite similar to those found in phase C_dS, i.e. one has evidence for a finite density of eigenvalues around λ = 0 in the large V_S limit, even if the distribution of slice volumes is significantly different from that found in phase C_dS. An example of the distribution of the first 30 eigenvalues in this phase is reported in Fig. 8.

The spectra of slices in the bifurcation phase C_b need instead a separate treatment. Indeed, it is well known that the bulk of the configurations is made up of two separate classes of slices, which alternate with each other in slice-time and have different properties [6]: it is reasonable to expect that this is reflected also in their spectra. This is indeed the case, as can be appreciated by looking at Fig. 9, where we report the value of λ_1 obtained on the different slices (i.e. at different Euclidean times) for a typical configuration sampled in the C_b phase, and compare it to a similar plot obtained for the C_dS phase. For an easier comparison, the time coordinates of the slices have been relabeled in each case so that the slice with the largest volume corresponds to t_slice = 0; moreover, we restricted to the bulk of configurations (i.e. we chose slices with V_S > 200). Contrary to the C_dS phase, in the C_b phase λ_1 changes abruptly from one slice to the other, with small values alternating with larger ones, differing by even two orders of magnitude.
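The eccentricity check mentioned above amounts to repeated breadth-first searches; the following hedged Python sketch (illustrative graphs and sizes, not the paper's data) shows how BFS from a few random vertices yields a lower bound on the diameter at a fraction of the cost of computing it exactly.

```python
import random
import networkx as nx

def eccentricity_lower_bound(G, tries=200, seed=0):
    rng = random.Random(seed)
    nodes = list(G.nodes)
    best = 0
    for _ in range(tries):
        v = rng.choice(nodes)
        dist = nx.single_source_shortest_path_length(G, v)   # BFS distances
        best = max(best, max(dist.values()))                 # eccentricity of v
    return best                                              # <= diameter

G_ring = nx.cycle_graph(1000)                        # extended: diameter ~ N/2
G_rand = nx.random_regular_graph(4, 1000, seed=1)    # expander: diameter ~ log N
print(eccentricity_lower_bound(G_ring), eccentricity_lower_bound(G_rand))
```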
This striking difference, which emerges even for single configurations, is even clearer when one considers the whole ensemble: Fig. 10 shows the averages of λ_1, λ_20 and λ_100 for configurations in the C_b and C_dS phases, with slice times relabeled as before. In the C_dS phase λ_1 changes smoothly with t_slice, and this change is mostly induced by the corresponding change of the slice volume, while in the C_b phase the alternating structure is visible also for higher eigenvalues, even if somewhat reduced and limited to the central region as n grows. Therefore, we conclude that the alternating structure of spatial slices is apparent and well represented in the low-lying spectra: slices in the bulk of C_b phase configurations can be separated into two distinct classes by the value of their spectral gap, while in the C_dS phase there is no sharp distinction, apart from a volume-dependent behavior connected to an observed Weyl-like scaling, which will be discussed in more detail in Section IV B.

In order to get a better perspective on these results, in Fig. 11 we show the eigenvalues λ_n, with n = 1, 20, 100, plotted against the volume of the slice on which they are computed, for the slices of all configurations sampled in the C_b phase (in particular at the simulation point labeled c). Slices with volumes larger than a given threshold, which we call the bifurcation volume, divide into two distinct classes characterized by λ_n taking values in well separated ranges. It is interesting that such bifurcation volume depends on n: that also explains why in Fig. 10 the alternating behavior of higher order eigenvalues (e.g., λ_100) drops off earlier than that of lower order ones, since spatial volumes get smaller far from the slice with maximal volume and eventually fall below the bifurcation volume at that order. That actually means that the alternating slices found in the C_b phase differ only in the low lying part of the LB spectrum, while for high enough eigenvalue orders they are not distinguishable; high eigenvalues mean small scales, hence we expect that the alternating slices have the same small scale structures and differ only at large scales. We will come back to this point later on.

In view of the close similarity with the properties of slices found respectively in the C_dS and in the B phase, we assign to the two classes of slices the names dS-type (low spectral gap) and B-type (high spectral gap). Looking again at Fig. 11, we notice that, for sufficiently large volumes, the two classes populate only specific volume ranges. Furthermore, the maximal slice in the C_b phase is typically observed to be of B-type, with a volume ranging in a narrow interval which is separated from the volumes of the other slices. This alternating distribution of spatial volumes was indeed one of the first signals of the presence of the new phase [3,4].

A summary of the characterization of the phase diagram of CDT according to the (zero or non-zero) gap of the LB operator as an order parameter is reported in Table II. To conclude the discussion about the gap, it is interesting to consider how the distribution of λ_1 changes across the different phases. To this purpose, in Fig. 12 we show a scatter plot of λ_1 for different values of ∆ at fixed k_0 = 2.2: darker points correspond to more frequent values of λ_1. As ∆ increases, the gap in the B or B-type slices progressively reduces and approaches zero at the point where one enters the C_dS phase. A gap in the spectrum is a quantity which has mass dimension two (as the LB operator), i.e.
an inverse length squared: if future studies show that the drop to zero takes place in a continuous way, that will give evidence for a second order phase transition with a diverging correlation length.

B. Scaling and spectral dimension

As one expects, and as it emerges from some of the results that we have already shown, the typical values obtained for the n-th eigenvalue of the LB operator on spatial slices, λ_n, scale with the volume V_S of the slice, and in a different way for the different phases. As an example, in Fig. 13 we show the average values obtained for λ_n (for a few selected values of n) as a function of the volume, in the C_dS phase.

In order to better interpret this scaling, and inspired by the discussion reported in Section III B, in the following we will consider how λ_n depends on the variable n/V_S. To show that this may indeed be illuminating, in Fig. 14 we report λ_n as a function of n/V_S for four spatial slices, which have been randomly picked from an ensemble produced in the C_dS phase and have quite different volumes, ranging over almost one order of magnitude. Notice that n/V_S can take values in the range (0, 1) (recall that λ_0 = 0 is excluded from our discussion), while the maximum eigenvalue is always bounded by 2k = 8, that is twice the degree of the vertices of the k-regular graph. The collapse of the four curves onto each other is impressive and, in view of the discussion in Section III B, can be interpreted in this way: despite the fact that the slices have quite different extensions, they show the same kind of structures at intermediate common scales.

This kind of scaling is well visible in all phases, as one can appreciate by looking at Fig. 15. For convenience, we have divided all spatial slices into small volume bins, and then averaged λ_n for each n over the slices of each bin: such averages are reported in the figure against n/V_S (a sketch of this binning procedure is given below). Average eigenvalues are reported with error bars, which however are too small to be visible. Each phase has its own characteristic profile. The profiles of phases A and C_dS are quite similar and differ by tiny deviations: in particular, in both cases one has that λ_n → 0 as n/V_S → 0, which is an equivalent way to state the absence of a gap in the spectrum. Instead, the profile of phase B is significantly different, characterized by the fact that λ_n does not vanish as n/V_S → 0, in agreement with the presence of a gap. In Fig. 15 we do not report any data regarding the C_b phase, which is discussed separately because of the particular features that we have already illustrated above.

Following the discussion in Section III B, each scaling profile can be associated with a running effective dimensionality d_EFF of the spatial triangulations at a scale of the order (n/V_S)^{−1/3}: that can be done by taking the logarithmic derivative of λ_n with respect to n/V_S, see Eq. (17). For this reason, in Fig. 16 we report d_EFF = 2 d log(n/V_S)/d log λ_n, which has been computed numerically by taking the average derivative of the profile over small bins of the variable n/V_S.
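The binning and averaging step can be sketched as follows (Python; the per-slice spectra here are synthetic Weyl-like stand-ins of our own construction, not actual CDT data), producing profile points (n/V_S, ⟨λ_n⟩) per volume bin.

```python
import numpy as np

rng = np.random.default_rng(0)

def fake_slice_spectrum(V):
    # stand-in for the sorted nonzero spectrum of a slice of volume V
    n = np.arange(1, V)
    return (6 * np.pi**2 * n / V) ** (2 / 3) * (1 + 0.05 * rng.standard_normal(V - 1))

volumes = rng.integers(500, 3000, size=200)
edges = np.arange(500, 3250, 250)

for lo, hi in zip(edges[:-1], edges[1:]):
    Vs = [int(V) for V in volumes if lo <= V < hi]
    if not Vs:
        continue
    n_max = min(Vs) - 1                    # common number of modes in the bin
    avg = np.mean([fake_slice_spectrum(V)[:n_max] for V in Vs], axis=0)
    # profile points for this bin would be (n / mean(Vs), avg[n-1])
    print(f"V_S in [{lo},{hi}): {len(Vs):3d} slices,  <lambda_1> = {avg[0]:.5f}")
```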
At very small scales, both the A and the C_dS phases are effectively 3-dimensional. However, going to larger scales (smaller n/V_S), the effective dimension decreases, going down to values around d_EFF ∼ 1.5, which is approximately the same large scale dimensionality observed by diffusion processes [41]. The crossover between the two regimes takes place for n/V_S in the range 0.1-0.4, meaning that typical structures of lower dimensionality develop, with a transverse dimension of the order of just a few tetrahedra.

Actually, the plot of d_EFF shows a difference between phase A and phase C_dS which was not clearly visible before: contrary to phase A, in phase C_dS the effective dimensionality seems to slowly grow again as one approaches larger and larger scales. This slow growth can be interpreted as a progressive ramification of the lower dimensional structures, i.e. as a hint that they have a fractal-like nature.

The effective dimensionality has a completely different behavior in phase B: it is smaller than 3 (d_EFF ≃ 2.5) on small scales, then starts growing and diverges at large scales. This is due to the fact that d log λ_n / d log(n/V_S) → 0 as n/V_S → 0, because of the presence of the gap; on the other hand, the diverging dimensionality can be interpreted in terms of the fact that the diameter of the slice grows at most logarithmically with V_S. Also the low dimensionality observed at small scales can be interpreted in terms of the large connectivity of the associated graphs: each tetrahedron has 4 links to other tetrahedra, and some of these links are, in some sense, not "local", i.e. they are a shortcut to reach directly some otherwise "far" tetrahedron; then, the probability that a couple of neighbouring tetrahedra are adjacent to a common tetrahedron gets smaller, leading to a lower effective dimensionality at short scales.

Regarding the properties of the slices found in the bifurcation phase C_b, on the basis of what we have shown and discussed in Section IV A, we have decided to perform a separate analysis for the different classes of spatial slices. In Fig. 17 we report λ_n vs. n/V_S for slices grouped according to their relative position with respect to the central largest B-type slice (which corresponds to t_slice = 0). The differences between the two classes are clearly visible also from the scaling profiles, which resemble, especially at large scales, those found in the B and in the C_dS phase for B-type and dS-type slices, respectively.

However, one striking feature emerges: at small scales, in particular for n/V_S ≳ 0.1, the scaling profiles coincide almost perfectly. We conclude that, at such scales, the two classes of slices present strong similarities, despite the completely different large scale behavior. Hints of this fact were already discussed in Section IV A. Such similarities are likely induced by the causal structure connecting adjacent spatial slices in CDT triangulations.

C. Running scales and the search for a continuum limit

The analysis of the scaling profiles reported above permits to identify well defined scales, in terms of the parameter n/V_S, where something happens, like a change in the effective dimensionality of the system. Such scales are given in units of the elementary lattice spacing of the system, i.e.
the size of a tetrahedron. On the other hand, the possible presence of a second order critical point, where a continuum limit can be defined for Quantum Gravity, implies that the lattice spacing should run to zero as the bare parameters approach the critical point. This running of the lattice spacing should be visible through the corresponding growth of the value, determined in lattice units, of some physical scale. This is a standard approach in lattice field theories, where one usually considers correlation lengths given by the inverse mass of some physical state.

One of the major challenges in the CDT program is to identify and determine physical scales which could provide such kind of information and thus give evidence that the lattice spacing is indeed running. Promising steps in this direction have already been taken by means of diffusive processes, where the scale is fixed by the diffusion time, both in CDT [9] and in DT [21]. Here we propose that LB spectra and the observed scaling profiles may be helpful in this direction, and that a careful study of how such profiles change as a function of the bare parameters could provide useful information.

A possible second order point is believed to separate the C_b from the C_dS phase; therefore it makes sense to analyze how the profiles change in both phases when moving towards the supposed phase transition, and whether the observed changes can be associated with any running scale. A growth in the scale associated with some particular feature of the scaling profile means that its location moves to smaller values of n/V_S.

As an example, in Fig. 18 we report the scaling profiles obtained for slices in phase C_b at ∆ = 0.15, which is closer to the phase boundary than the case ∆ = 0.10 discussed previously and reported in Fig. 17. An appreciable difference between the two cases is that the region where the profiles of B-type and dS-type slices coincide is larger (i.e. extends to smaller n/V_S) for ∆ = 0.15. From a quantitative point of view, one finds that the approximate value of n/V_S where the profiles start differing by more than 5% is around 0.13 for ∆ = 0.10 and around 0.074 for ∆ = 0.15 (a sketch of this criterion is given below). In other words, there is a scale up to which B-type and dS-type slices are similar to each other, and such scale grows as one approaches the C_b-C_dS phase transition.

In a similar way, one can look at how the scaling profiles found in the C_dS phase change as one approaches the phase transition from the other side. Such scaling profiles are reported in Fig. 19. The short-scale region, and in particular the point where the effective dimension starts changing, seems not sensitive to the change of ∆. However the small n/V_S (large-scale) region changes, with the profile undergoing an overall bending towards the left: notice that this implies a change in the effective dimensionality observed at the largest scales, which indeed, for ∆ = 0.3, is d_EFF ≃ 2.

Finally, as we have already stressed above, the gap itself, which for B-type slices seems to approach zero as one gets closer to the C_b-C_dS phase transition (see Fig. 12), could be interpreted in terms of a diverging correlation length if the behavior is proved to be continuous.
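The 5% criterion used above is a simple threshold on the relative difference of two profiles; the Python sketch below (synthetic profiles standing in for the measured ones, defined on a common grid of n/V_S values) locates the scale below which two profiles diverge.

```python
import numpy as np

def divergence_scale(x, prof_a, prof_b, tol=0.05):
    # largest n/V_S at which the relative difference exceeds tol
    rel = np.abs(prof_a - prof_b) / np.abs(prof_b)
    bad = np.where(rel >= tol)[0]
    return x[bad.max()] if bad.size else None

x = np.linspace(1e-3, 0.5, 2000)
prof_dS = x ** (2 / 3)                     # gapless, Weyl-like stand-in
prof_B = np.maximum(x ** (2 / 3), 0.08)    # same at short scales, gapped below

print("profiles start differing by >5% around n/V_S ~",
      divergence_scale(x, prof_B, prof_dS))
```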
The reported examples are only illustrative of the fact that the LB spectrum can provide useful scales which could give information on the nature of a possible continuum limit. Such a program should be carried out more systematically by future studies, in particular by approaching the C_b-C_dS phase transition more closely.

D. Eigenvalue distributions

In this section we show some details regarding the full distribution of eigenvalues (i.e. over the whole spectrum) in the different phases. Figs. 20, 21 and 22 show the normalized distribution of eigenvalues for spatial slices with volumes in selected ranges, for simulations performed deep into the phases C_dS, A and B respectively.

The A and the C_dS phases present a detailed non-trivial fine structure which is very similar. Even if we are not interested, at least in the present context, in providing a detailed interpretation of the full spectrum, we notice that such fine structure mostly involves eigenvalues which are of order 1 or larger, hence associated to typically small scales; this is confirmed by the fact that, contrary to the low part of the spectrum, such fine structure is almost left invariant by changing the volume of the slice. For instance, it can be noticed that the distributions are sharply peaked around the integer values λ = 4, 5, 6; indeed, by inspecting the spectra of single configurations and the associated eigenvectors, we observed that these integer eigenvalues often occur with high multiplicity and can be associated to the presence of recurrent regular short-scale structures and to very localized eigenvectors.

The normalized distribution in the case of configurations in the B phase does not show particular features, other than the already discussed presence of a spectral gap. The distribution looks in general more regular in this phase, even if some of the peaks around integer values are still present, though much reduced in amplitude.

E. Visualization of spatial slices

We have seen how each spatial slice of the triangulations can be associated to a graph with non-trivial properties, i.e. what is usually called a complex network. There are different methods to visualize a complex network, some of them already considered in previous studies (see, e.g., Ref. [21]); here we briefly discuss only two of them: Laplace embedding [42] and spring embedding [43,44]. The former makes use of the eigenvectors associated to the smallest eigenvalues, which are already computed by solving the eigenvalue problem, while the latter is based on a mapping of the graph to a system of points connected by springs; as we are going to discuss, the two methods are strictly related, however spring embedding proves more useful to give an intuitive picture of the short-scale structures (a sketch of both methods is given at the end of this subsection). The underlying idea, common to both methods, is to represent any graph G = (V, E) in an m-dimensional Euclidean space by finding a set of m independent functions {φ_n(v_i)}_{n=1}^m which act as coordinates for each vertex v_i ∈ V, in such a way that vertices with smaller graph distance have coordinates as close as possible. The "closeness" can be defined in many ways, each corresponding to a different optimization problem, and that is what makes the two methods different. We will use the notation φ_n ≡ (φ_n(v_i))_{i=1}^{|V|} for each n = 1, . . ., m.

Laplace embedding

The optimization problem for Laplace embedding [42] consists in minimizing the following functional of the coordinate functions {φ_n}:

E_LB({φ_n}) = Σ_{n=1}^{m} φ_n · (L φ_n) = Σ_{n=1}^{m} Σ_{{v_i, v_j} ∈ E} [φ_n(v_i) − φ_n(v_j)]²,

subject to the constraints φ_n · φ_k = δ_{n,k} and φ_n · 1 = 0 for each n, k = 1, . . ., m, where 1 is the uniform vector with unit coordinates and L is the matrix representation of the LB operator.
It is straightforward to prove that a solution to this constrained optimization problem is given by the set of the first m eigenvectors {e_n}_{n=1}^m of the Laplace-Beltrami matrix, where we excluded the 0-th mode e_0 = (1/√|V|) 1_{|V|}, 1_{|V|} denoting the uniform vector in R^{|V|}. For example, the coordinates associated to each vertex v ∈ V in a 3-dimensional Laplace embedding are the values of the first 3 eigenvectors on that vertex, that is v → (e_1(v), e_2(v), e_3(v)) ∈ R³. Fig. 23 shows the 3-dimensional Laplace embedding of a typical slice in the bulk of a configuration deep in the C_dS phase (simulation point c); there, the color identifies the value taken by the first eigenvector e_1 on each vertex (blue is negative, green is zero and red is positive), and a projection of the 3-dimensional figure is shown on the xy plane. The geometry seems to be made up of filamentous structures, but that really means that the first 3 eigenvectors, describing the slowest modes of diffusion, are not capable of describing the short scale structures inside the filaments. However, they efficiently describe the largest scale geometry, which in the C_dS case is nontrivial and unexpected.

Spring embedding

The optimization problem that has to be solved for the spring embedding of an unweighted undirected graph G = (V, E) consists in the energy minimization of a system of ideal springs with fixed rest length l_0, embedded in R^m, with extrema connected in the same way as the links of the abstract graph G [44]. Having assigned coordinates {φ_n(v_i)}_{n=1}^m to each abstract vertex v_i ∈ V, the potential energy of the system is defined as:

E_S({φ_n}) = Σ_{{v_i, v_j} ∈ E} ( ‖φ(v_i) − φ(v_j)‖ − l_0 )²,

where φ(v) ≡ (φ_1(v), . . ., φ_m(v)). In the limit l_0 → 0, the functional E_S becomes equal to E_LB but with no constraint, so that in this limit the solution would collapse to the trivial one, bringing all vertices to the same point. On the other hand, for l_0 > 0, the springs push vertices apart from each other and help resolving even the shortest-scale structures, which are not visible with Laplace embedding. The simplest algorithm to find a (local) minimum is to initialize the coordinates of each vertex to random values, and then relax the system of springs by performing a gradient descent. Fig. 24 shows the spring embedding of the same slice represented by Laplace embedding in Fig. 23. The large scale structure is well represented by both methods, but spring embedding allows one to better discern short-scale structures at the finest level.

Such representations of the spatial slices are illuminating for understanding the properties of the LB spectrum of C_dS slices. The slices are extended objects, i.e. one finds vertices which are far apart from each other, implying the existence of slow diffusion modes and a continuum of quasi-zero eigenvalues at large V_S. On the other hand, the large scale structure is made of lower-dimensional substructures, which have a typical transverse size of the order of a few vertices, and which often branch, making the overall spectral dimension (i.e. the diffusion rate) fractional at large scales.

For comparison, Fig. 25 shows the spring embedding of a typical slice in the B phase. The high connectivity of the graph, which is clearly visible from the figure, does not permit the development of extended large scale structures, so that diffusion always remains fast and a finite gap remains even in the V_S → ∞ limit.
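A compact sketch of both embeddings is given below (Python/networkx; the graph is an illustrative stand-in for a CDT slice graph, and the gradient-descent spring relaxation follows the simple algorithm described above, with our own choice of step size and iteration count).

```python
import numpy as np
import networkx as nx

G = nx.random_regular_graph(4, 300, seed=2)
L = nx.laplacian_matrix(G).toarray().astype(float)
lam, vecs = np.linalg.eigh(L)
laplace_xyz = vecs[:, 1:4]                 # Laplace embedding: v -> (e1, e2, e3)

def spring_embedding(G, l0=0.02, steps=500, eta=0.01, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.standard_normal((G.number_of_nodes(), 3)) * 0.1
    edges = np.array(list(G.edges))
    for _ in range(steps):
        d = pos[edges[:, 0]] - pos[edges[:, 1]]
        r = np.linalg.norm(d, axis=1, keepdims=True)
        f = (r - l0) * d / np.maximum(r, 1e-12)   # Hooke force along each edge
        grad = np.zeros_like(pos)
        np.add.at(grad, edges[:, 0], f)
        np.add.at(grad, edges[:, 1], -f)
        pos -= eta * grad                          # gradient descent step
    return pos

spring_xyz = spring_embedding(G)
print(laplace_xyz[:3], spring_xyz[:3])
```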
V. DISCUSSION AND CONCLUSIONS

In this work we have investigated the properties of the different phases of CDT that can be inferred from an analysis of the spectrum of the Laplace-Beltrami operator computed on the triangulations. The present exploratory study has been limited to the properties of spatial slices: those can be associated to regular graphs where each vertex is linked to 4 other vertices. Let us summarize our main results and further discuss them:

i) We have shown that the different phases can be characterized according to the presence or absence of a gap in the spectrum, which can therefore be considered as a new order parameter for the phase diagram of CDT. In particular, a gap is found in the B phase, while for the A and the C_dS phases one finds a non-zero density of eigenvalues around λ = 0 in the thermodynamical (large spatial volume V_S) limit. The C_b phase, instead, shows the alternation of spatial slices of both types (gapped and non-gapped): that better characterizes the nature of the alternating structures already found in previous works [5,6], which for this reason we have called B-type and dS-type slices.

The presence or absence of a gap in the spectrum is a characteristic which distinguishes different phases in many different fields of physics: think for instance of Quantum Chromodynamics, where the absence/presence of a gap in the spectrum of the Dirac operator characterizes the phases with spontaneously broken/unbroken chiral symmetry.

In this context, the presence of a gap tells us that the spatial slices are associated to expander graphs, characterized by a high connectivity. That can be interpreted geometrically as a Universe with an infinite dimensionality at large scales, with a diameter which grows at most logarithmically in the thermodynamical limit; a small diameter in the phases with a gap is consistent with the findings of previous studies and is supported by a direct computation (see Fig. 7). On the contrary, the closing of the gap can be interpreted as the emergence of a Universe with a standard finite dimensionality at large scales. It is interesting to notice that the value of the gap seems to change continuously as one moves from the B to the C_b phase, and goes to zero as the C_dS phase is approached.

ii) We have shown that the spectrum can be characterized by a well defined scaling profile: the n-th eigenvalue, λ_n, is a function of just the scaling variable n/V_S. The profile is different for each phase and characterizes it; moreover, from the profile one can deduce information on the effective dimensionality d_EFF of the system at different scales, which generalizes the similar kind of information gained by diffusion processes.

The C_dS and the A phases share a similar profile, corresponding to d_EFF ≃ 3 at short scales, which then drops to d_EFF ≃ 1.5 for n/V_S ≲ 0.1. At larger scales, the two phases show a different behavior, with d_EFF which keeps decreasing as n/V_S decreases in the A case, while in the C_dS phase it starts growing again at large scales. Slices in the B phase, instead, show an effective dimensionality which, in agreement with their high connectivity, seems to diverge in the large scale limit.
An interesting feature has been found for the two different and alternating (in Euclidean time) classes of spatial slices in the C_b phase: despite the different overall structure, they share an identical profile at small length scales, which is likely induced by the causality condition imposed on triangulations and is therefore an essential property of CDT. The profiles remain identical up to a characteristic length scale, above which they start to diverge, as expected since one class presents a gap and the other does not.

iii) We have proposed that the scaling profiles might be used to identify particular length scales which change as a function of the bare parameters, and thus could serve as possible probes of the running to the continuum limit, if any. Among those, we have found of particular interest the characteristic length scale up to which the alternating slices found in the C_b phase share the same profile: we have seen that such a length grows as one approaches the boundary with the C_dS phase. On the other side of the boundary, also the profiles of the slices in the C_dS phase show a modification at large scales as the C_b phase is approached, leading in particular to a growing effective dimensionality.

Along these lines, one could conjecture that, if a second order critical point is really found between the two phases, at such a point the different profiles found in the C_b phase could merge at all scales and coincide with the profile from the C_dS phase. Such a critical point would also be characterized by the vanishing of the gap for the B-type slices of the C_b phase. Moreover, it would be interesting to test what the effective dimensionality found at large scales would be at the critical point: is it possible that, just at the critical point where a continuum limit can be defined, the effective dimensionality of spatial slices goes back to D = 3 at all physical scales?

The present work can be continued along many directions. First of all, the region around the transition between the C_b and the C_dS phase should be studied in much more detail than what has been done in the present exploratory work, to see if some of the conjectures that we have made above can be put on a more solid basis. In addition, a careful study of the critical behavior of the spectral gap around the transition, which is the new order parameter introduced in this study, could provide information about the universality class to which the continuum limit, if any, belongs. Of course, it could well be that one finds a first order transition, i.e. a sudden jump in the gap and in other properties, but then one should perform simulations along lines corresponding to different k_0, to see if the first order line terminates at some critical endpoint.

We have not yet considered the information which can be gained by inspecting the eigenvectors of the LB operator; that will be done in a forthcoming study. In particular, it will be interesting to consider and analyze their localization/delocalization properties, in a way similar to what has been done in studies of the spectrum of the Dirac operator in QCD [45,46].

It will be interesting to extend the study of the spectrum to the full triangulations, i.e.
not just to spatial slices. That will require some implementation effort: unlike spatial tetrahedra (which are all identical), pentachorons can have edges with different Euclidean lengths, and therefore a regular graph representation does not describe the geometry faithfully. Nevertheless, the Laplace-Beltrami operator for general triangulated manifolds has a well defined representation in the formalism of the Finite Element Method, as discussed and applied for example in Refs. [11,12].

Finally, it would be interesting to apply spectral methods also to other implementations of dynamical triangulations, like the standard Euclidean Dynamical Triangulations (DT), where no causality condition is imposed. The implementation in this case would be straightforward, as for the spatial slices of CDT, i.e. given in terms of regular undirected graphs. We plan to address the issues listed above in the near future.

Appendix: Diffusion processes and the spectral dimension

A diffusion process on a graph G can be described by the heat kernel K_{v,v_0}(t), i.e. the probability of reaching the vertex v at time t starting from the vertex v_0, which solves

∂_t K_{v,v_0}(t) = −(L K)_{v,v_0}(t), with K_{v,v_0}(0) = δ_{v,v_0}, (A1)

and can be decomposed as

K_{v,v_0}(t) = Σ_n e^{−λ_n t} e_n(v) e_n(v_0), (A2)

where {λ_n} and {e_n} are the eigenvalues and associated eigenvectors of the LB matrix of the graph G. Notice that the terms in Eq. (A2) corresponding to larger eigenvalues are more suppressed for increasing times than terms corresponding to smaller ones. In particular, for times t ≫ 1/λ_1, the only surviving term is given by the 0-th eigenvalue, and the probability distribution tends to be uniformly distributed amongst all vertices: lim_{t→+∞} K_{v,v_0}(t) = 1/|V| ∀ v, v_0 ∈ V (assuming a single connected component).

The return probability, obtained from the spectrum, then takes the form

Z(t) ≡ (1/|V|) Σ_{v∈V} K_{v,v}(t) = (1/|V|) Σ_n e^{−λ_n t}, (A3)

where we used the decomposition in Eq. (A2) and the orthonormality of the eigenvectors.

The return probability Z(t) can be nicely interpreted as a statistical partition function, given its formal analogy with that concept in statistical physics: the diffusion time takes here the role of the inverse temperature, while the eigenvectors and their associated eigenvalues take the role of microstates and their associated energies respectively.

In the case of a compact smooth manifold M, for which the Laplace-Beltrami spectrum {λ_n}_{n=0}^∞ is countable but unbounded, the averaged return probability density Z(t) has the following asymptotic expansion for t → 0⁺ [30]:

Z(t) ≃ (4πt)^{−dim(M)/2} Σ_{i = 0, 1/2, 1, ...} c_i t^i. (A4)

The return probability for one-dimensional random walks is 1/√(4πt), so it is reasonable for a smooth manifold to locally decompose the random motion along the dim(M) directions and get the return probability as a product of independent one-dimensional return probabilities. In the case of random walks on R^d the return probability density is equal to Z(t) = (4πt)^{−d/2}, so one can infer the value of the coefficients: c_0 = vol(M) and c_i = 0 ∀ i > 0. Corrections to the t^{−dim(M)/2} behavior must be due to the geometric properties characterizing the manifold under study. For example, the first three coefficients have a geometrical interpretation, as discussed by McKean and Singer [47]:

c_0 = vol(M), (A5)
c_{1/2} = −(√π/2) vol(∂M), (A6)
c_1 = (1/6) ∫_M R dV + (1/3) ∫_{∂M} J dA, (A7)

where ∂M is the (possibly empty) boundary of the manifold M, R is the scalar curvature of the manifold and J is the mean curvature of the boundary. We expect that similar results hold for graphs approximating manifolds, but a first difficulty can be easily detected by the following argument. At a time t only eigenvalues λ ≲ 1/t contribute to the sum in Eq. (A4), but for t → 0⁺ the full unbounded spectrum of the smooth manifold tends to contribute. The spectrum of a graph G, however, is bounded by the largest eigenvalue, so that here the expansion in Eq. (A4) is not numerically reliable for times t ≲ (λ_{|V|−1})^{−1}.
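The spectral estimate of the return probability in Eq. (A3), together with the running spectral dimension defined in Eq. (A8) below, can be sketched in a few lines of Python; the ring spectrum here is an illustrative stand-in for a slice spectrum, for which D_S ≃ 1 is expected at intermediate times.

```python
import numpy as np

N = 2000
lam = 4 * np.sin(np.pi * np.arange(N) / N) ** 2   # ring graph spectrum

t = np.logspace(0, 4, 200)
Z = np.exp(-np.outer(t, lam)).mean(axis=1)        # return probability, Eq. (A3)
D_S = -2 * np.gradient(np.log(Z), np.log(t))      # running spectral dimension

for tau in (10, 100, 1000):
    i = np.argmin(np.abs(t - tau))
    print(f"t = {t[i]:7.1f}:  D_S = {D_S[i]:.2f}")
```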
Nevertheless, one can plot the return probability as a function of time and get an estimate of the dimension d by extrapolation to τ → 0⁺, using the definition of what is called the spectral dimension [10]:

D_S(τ) ≡ −2 d log Z(τ) / d log τ. (A8)

Fig. 26 shows the comparison between the estimates of the spectral dimension obtained employing explicit diffusion processes (Eq. (A1) integrated with step size ∆t = 1) and the spectrum of the Laplace-Beltrami matrix on graphs associated to spatial slices in the C_dS phase: we applied Eq. (A8) using the average of the return probability Z(t) computed on each slice having volume in the range 2000-2200 and, for the definition via diffusion, averaging the return probability also over 200 iterations of diffusion processes starting from randomly selected vertices in the slice. Using the definition via diffusion, at small diffusion times the return probability, and therefore also the spectral dimension, is highly fluctuating due to the short scale regularity of the tetrahedral tiling of the space (a phenomenon already discussed in Refs. [1,10]); this is not present in the definition via the spectrum, where a bump is observed instead. For larger diffusion times (τ ≳ 100) the curves obtained using both methods agree even using only the lowest 5% part of the spectrum, which confirms that this regime indeed represents the large scale behavior. Here we observe a spectral dimension D_S ≃ 1.5 for the spatial slices of configurations in the C_dS phase. This fact, already observed in the literature using diffusion processes [41], seems compatible also with the observations obtained from the large scale scaling relations for the eigenvalues discussed in Section IV B.

Figure and table captions:

FIG. 1: Sketch of the phase diagram of CDT in 4d with spherical topology of spatial slices. The results shown in the present paper have been obtained from simulations running at the points marked by a star symbol. The circled and labeled points a, b, c and c refer to simulations running deeply inside the respective phases (see Table I). The position of the transition lines is only qualitative.
FIG. 2: Plot of λn against its volume-normalized order n/V, for a hypercubic lattice with periodic boundary conditions (i.e., toroidal) and different combinations of sizes Li for each direction. The straight continuous line is the exact Weyl scaling, see Eq. (14), predicted for d = 3; the dashed straight lines correspond to effective Weyl scalings for effective dimensions d = 1 and 2.
FIG. 3: Same as in Fig. 2, for different combinations of the spatial sizes Li of the toroidal lattice.
FIG. 4: Probability distribution of λ1 and λ3 for slices with VS ≃ 2300, taken from configurations sampled deep in the C_dS phase (simulation point c), with total spatial volume VS,tot = N41/2 = 40k.
FIG. 6: Density ρ(λ) computed from the first 100 eigenvalues for the maximal slices in the B phase (simulation point b) and for different spatial volumes VS.
FIG. 7: Scatter plot of the eccentricity of 200 randomly selected vertices for each slice of about 400 configurations in the C_dS phase (simulation point c) with total spatial volume VS,tot = 20k, and for the maximal slices of about 200 configurations in the B phase (simulation point b) with total volumes VS,tot = 8k, 16k, 32k, 40k. Results are reported against the slice volume VS.
FIG. 8: Density ρ(λ) computed from the first 30 eigenvalues for slices deep in the A phase (simulation point a) with total spatial volume VS,tot = 8k, and for two different ranges of the spatial slice volume VS.
FIG. 9: Spectral gap λ1 as a function of the slice-time for single configurations in the C_b and C_dS phases with total spatial volume VS,tot = 40k and with the slice-time of the maximal slice shifted to zero. Only slices in the bulk (with volume VS ≥ 200) are shown.
FIG. 10: Averages of λ1, λ20 and λ100 as a function of the slice-time for configurations in the C_b and C_dS phases, where the slice-time of the maximal slices has been shifted to zero. Only slices in the bulk (with volume VS ≥ 200) are shown.
FIG. 11: Scatter plot of the values of λ1, λ20 and λ100 versus the volume of the slice on which they are computed, for slices of configurations deep in the C_b phase (simulation point c) and with volume fixing VS = 40k.
FIG. 13: Averages of eigenvalues λn for selected orders n, computed in narrow bins of volumes (∆VS = 20), for slices of configurations sampled deep into the C_dS phase (simulation point c), with total spatial volume VS,tot = 40k.
FIG. 14: Plot of λn against its volume-normalized order n/VS, for four randomly selected slices with volumes VS ≃ 500, 1000, 2000, 3000, taken from configurations sampled deep into the C_dS phase (simulation point c) with total spatial volume VS,tot = 40k.
FIG. 16: Running dimension obtained from the logarithmic slope m of the curves shown in Fig. 15 as 2/m (see Section III B and Eq. (17)), computed over bins of different ranges of n/VS and for configurations sampled in phases C_dS, A and B. The curve associated to the B phase diverges for n/VS → 0 (it is around 30 for n/VS ∼ 10⁻⁴), but part of it has been omitted from the plot to improve the readability of the curves obtained for the other two phases.
FIG. 17: Averages of λn versus n/VS for slices taken from the bulk (VS > 1000) of configurations sampled in the C_b phase (k0 = 2.2, ∆ = 0.10). The total spatial volume is fixed to VS,tot = 40k, and the slice times have been relabeled so that the largest B-type slice has t_slice = 0.
FIG. 19: Averages of λn versus n/VS, computed in bins of n/VS with size 2/VS,max, for slices taken from configurations sampled in the C_dS phase, with k0 = 2.2 and different values of ∆. The total spatial volume of each configuration is VS,tot = 40k.
FIG. 20: Normalized distribution of all the eigenvalues for slices with volume in the range VS ∈ [2000, 2500] for configurations deep into the C_dS phase (simulation point c), with total spatial volume VS,tot = 40k.
FIG. 22: Normalized distribution of all the eigenvalues of the maximal slices for configurations deep into the B phase (simulation point b), with spatial volume about VS ≃ 8k.
FIG. 23: Laplace embedding in 3 dimensions for the graph associated to a typical slice in the bulk of a configuration deep in the C_dS phase (simulation point c). The color identifies the values that the first eigenvector e1 takes on each vertex: blue is negative, green is zero and red is positive. A projection of the 3-dimensional figure is shown on the xy plane.
FIG. 24: Spring embedding in 3 dimensions for the graph associated to a typical slice in the C_dS phase; the slice is the same as in Fig. 23. The rest length has been fixed to l0 = 0.02. Also in this case the color identifies the values that the first eigenvector e1 takes on each vertex, see Fig. 23. A projection of the 3-dimensional figure is shown on the xy plane.
FIG. 25: Spring embedding (l0 = 0.015) in 3 dimensions for the graph associated to a typical slice deep in the B phase (simulation point b) and with volume VS ≃ 4000. Also in this case the color identifies the values that the first eigenvector e1 takes on each vertex, see Fig. 23.
FIG. 26: Estimates of the running spectral dimension (see Eq. (A8)) obtained either via diffusion processes (continuous line), or using Eq. (A4) (dashed lines) with the full spectrum or only the lowest 5% part of it, for slices in the volume range 2000-2200 taken from configurations sampled in the C_dS phase (simulation point c) with total spatial volume VS,tot = 40k.
TABLE II: Characterization of the phase diagram of CDT according to the zero or non-zero gap of the LB operator as an order parameter.
Removal of zinc ions from aqueous solution using micellar-enhanced ultrafiltration at low surfactant concentrations

Micellar-enhanced ultrafiltration (MEUF) of zinc ions (Zn2+) from aqueous solutions using the single anionic surfactant sodium dodecyl sulphate (SDS) at concentrations around and below the critical micelle concentration (cmc) (0.2×cmc to 3×cmc) was investigated. When the initial SDS concentration was below the cmc, an unexpectedly high rejection (97.5%) was obtained owing to concentration polarisation occurring near the membrane-solution interface. Under this mechanism, the true rejection of the solute is no longer a function of the initial SDS concentration in the bulk solution but of the SDS concentration at the concentration polarisation layer. The removal of Zn2+ at low Zn2+ feed concentrations was very efficient. The characteristics of Zn2+ adsorption to the surfactant micelle were also studied, and the Langmuir model described the Zn2+ adsorption isotherm to the SDS micelle. The study demonstrates the potential practicality of the MEUF technique for the removal of heavy metal ion pollutants such as Zn2+ at low surfactant concentrations.

Nomenclature
R: percent rejection (%)
C: concentration of Zn2+ (mg/ℓ)
J: permeate flux (m3/m2·s)
Δp: trans-membrane pressure (Pa)
Rm: hydraulic resistance of the membrane (m-1)
Rf: secondary resistance of the membrane (m-1)
μ: viscosity coefficient (Pa·s)
α: volume concentrated ratio
β: concentration concentrated ratio
V: volume (ℓ)
K: adsorption equilibrium constant (ℓ/mmol)
qmax: maximum amount of adsorbed Zn2+ (mmol/g)
qe: amount of adsorbed Zn2+ at equilibrium (mmol/g)
Ce: concentration of Zn2+ in the bulk liquid phase at equilibrium (mmol/ℓ)

Introduction
Heavy metal water pollution is a serious environmental problem worldwide. The metal ions are non-biodegradable, highly toxic and potentially carcinogenic. If discharged directly into the sewage system, they may seriously disrupt the operation of biological treatment plants. Wastewater containing dissolved metal ions such as zinc, cadmium, nickel and copper originates from a variety of sources, such as metal mine-tailing leachate, refineries, semiconductor manufacturing, battery production, abandoned metal mines and metal plating industries. At present, the traditional techniques in practice for removing metal ions from wastewater include adsorption, extraction, precipitation, electrolysis, ion exchange and distillation. However, these techniques have their own disadvantages, such as inconvenient operation, secondary pollution from precipitated sludge, loss of expensive chemicals, difficulty in recovering the metal ions, strong pH sensitivity and an inability to reduce metal ion concentrations to the levels required by law. Micellar-enhanced ultrafiltration (MEUF), a surfactant-based separation process, is an effective technique to remove almost all toxic metal ions and/or soluble organic solutes from aqueous solutions (Baek et al., 2003; 2004; Gzara et al., 2000; 2001; Juang et al., 2003; Kim et al., 2003; Liu et al., 2004; Tung et al., 2002; Yurlova et al., 2002). In the MEUF process, the surfactant is added to the polluted aqueous solution containing metal ions and/or organic solutes. At concentrations above its critical micelle concentration (cmc) and above its Krafft point temperature, the surfactant forms micelles, which are charged spherical aggregates containing 50 to 150 surfactant molecules (Gzara and Dhahbi, 2001). The metal ions are adsorbed on the surface of the oppositely charged micelles by electrostatic attraction.
The organic solutes are solubilised in the micelle interior by ion-dipole interaction. The micellar solution then passes through an ultrafiltration membrane with pores small enough to reject the micelles containing the contaminants. As the micelles are rejected, the adsorbed metal ions and the solubilised organic solutes are rejected with them. The un-adsorbed metal ions, un-solubilised organic solutes and surfactant monomers pass through the ultrafiltration membrane to the permeate side. As a result, the permeate contains only very low concentrations of un-adsorbed metal ions, un-solubilised organic solutes and surfactant monomers, yielding a clean permeate that can be recycled or discarded. The retentate solution is much more concentrated and considerably lower in volume than the initial solution; the further treatment or disposal of this smaller volume of solution, such as recovery of the surfactant and metal ions, is therefore easier and less expensive. The principle is shown in Fig. 1 (Sadaoui et al., 1998). The method has the following advantages: simple operation; environmental safety; low energy requirement; high removal efficiency; easy recovery of metal ions; low cost; separation at room temperature; a modular membrane area that can easily be adjusted to the wastewater flow; and the ready availability of various industrial membranes.

Since the MEUF technique was proposed, there have been a number of studies in the wastewater treatment field. At present, however, the MEUF technique is still at a laboratory-scale stage. Many studies were carried out in batch stirred cells using lamellar membranes at surfactant concentrations much higher than the cmc (Baek et al., 2003; Gzara et al., 2000; 2001; Juang et al., 2003). In these studies, the permeate fluxes of the ultrafiltration membranes were very low because lamellar membranes and very high surfactant concentrations were used. Since the surfactant concentrations were much higher than the cmc, large quantities of surfactant had to be used for the separation, and the surfactant concentrations in the retentate were therefore very high. Consequently, the economic viability of the MEUF process strongly depends on the ability to recover a large portion of the surfactant from the retentate, which clearly increases the cost of the separation process. On the other hand, surfactant monomers inevitably leak into the permeate through the ultrafiltration membrane and produce secondary pollution. To overcome these deficiencies, some studies were conducted using mixed anionic-non-ionic surfactants (Aoudia et al., 2003; Fillipi et al., 1999).
Aoudia et al. (2003) reported that 99% Cr3+ rejection was obtained at a total surfactant mixture (SDS-nonylphenol ethoxylate) concentration of 3×cmc. But a total surfactant concentration of 3×cmc is still comparatively high, so the mixed anionic-non-ionic surfactant system is not very effective for reducing the surfactant dosage; using a non-ionic surfactant also makes the recovery of surfactant more difficult. Considering these factors, there is an apparent need to achieve efficient solute rejection using a single surfactant at relatively low concentration. This would markedly reduce the dosage of the surfactant and the surfactant concentration in the retentate, thereby reducing the expense of the process. It would also reduce the surfactant concentration in the permeate and improve the permeate flux of the ultrafiltration membrane. When the surfactant concentration is low, efficient solute rejection is not expected in principle, but the concentration polarisation effect can assist in achieving such rejection at low surfactant concentration. Some level of concentration polarisation may therefore have a beneficial effect in terms of permeate rejection.

On the other hand, previous studies on the removal of metal ions using the MEUF technique were mainly based on the rejection of metal ions and the permeate flux. The characteristics of metal ion adsorption to the surfactant micelle have scarcely been investigated (Ahmadi et al., 1995; Li et al., 2006). However, the adsorption characteristics are key to the successful application of MEUF.

In the present study, an attempt is made to remove Zn2+ ions from aqueous solutions by MEUF using the single anionic surfactant sodium dodecyl sulphate (SDS) at low concentrations, in order to reduce the expense of the process and the secondary pollution. A modified polysulphone hollow-core fibre ultrafiltration membrane is used. The hollow-core fibre UF device is operated in linear continuous cross-flow mode, which gives a much higher flux and a much more effective membrane area than the conventional batch-cell system with a lamellar membrane. The effects of the initial surfactant SDS concentration (0.2×cmc to 3×cmc) and the initial Zn2+ concentration (20 mg/ℓ to 300 mg/ℓ) on the efficiency of Zn2+ rejection and the permeate flux are investigated. The characteristics of Zn2+ adsorption to the surfactant micelle are also examined, and an adsorption isotherm model is established to investigate the mechanism of Zn2+ adsorption to the SDS micelle. These results should help bring this technique into practical application.

Experimental

Materials
The SDS used in this research was obtained from Tianjin Kermel Chemical Reagents Development Center. Its molecular formula is CH3(CH2)10CH2OSO3Na, with a molecular weight of 288.38 and a purity of 99%. Zinc nitrate hexahydrate was obtained from Shanghai Tinxin Chemical Reagent Plant. Its molecular formula is Zn(NO3)2·6H2O, with a molecular weight of 297.49 and a purity of 99%. Nitric acid, sodium hydroxide and sodium hypochlorite were purchased from Shanghai Chemical Reagent Limited Company, in AR grade. The feed solutions were prepared by dissolving different amounts of SDS and zinc nitrate hexahydrate in deionised water. The deionised water was produced by …
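As a worked illustration of feed preparation (a sketch, not part of the original protocol), the snippet below computes the mass of SDS to dissolve for a 10 ℓ feed at a chosen multiple of the cmc, using the molecular weight of 288.38 g/mol given above and the cmc of 7.8 mmol/ℓ reported later in this paper; the function name and printed values are illustrative.

```python
# Mass of SDS needed for a feed at a chosen multiple of the cmc.
# Constants taken from this study: cmc of SDS = 7.8 mmol/L
# (by conductivity), M(SDS) = 288.38 g/mol, feed volume = 10 L.

CMC_MMOL_PER_L = 7.8
MW_SDS_G_PER_MOL = 288.38
FEED_VOLUME_L = 10.0

def sds_mass_grams(cmc_multiple: float) -> float:
    """Grams of SDS to dissolve for a feed at cmc_multiple x cmc."""
    conc_mmol_per_l = cmc_multiple * CMC_MMOL_PER_L
    total_mmol = conc_mmol_per_l * FEED_VOLUME_L
    return total_mmol / 1000.0 * MW_SDS_G_PER_MOL

for x in (0.2, 0.8, 3.0):
    print(f"{x} x cmc: {sds_mass_grams(x):.1f} g SDS in 10 L")
# 0.2 x cmc: 4.5 g; 0.8 x cmc: 18.0 g; 3.0 x cmc: 67.5 g
```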
Procedure
Micellar-enhanced ultrafiltration experiments were performed at room temperature. The procedure is shown in Fig. 2. The feed tank was initially filled with 10 ℓ of feed solution. The solution temperature was held constant at 30°C using a thermostat to avoid any precipitation, because the Krafft point of SDS is 14°C. The solution pH was not adjusted. After being fully mixed, the solution was fed into the membrane module for linear continuous ultrafiltration by a peristaltic pump at a constant pressure of 0.07 MPa. At the desired time intervals, the permeate was sampled. The used membrane was immediately flushed at room temperature for 15 min at 0.03 MPa using, in sequence, deionised water, 0.01 M HNO3, 0.1 M NaOH and 1% NaClO. After each step in the cleaning procedure, deionised water was circulated at 0.03 MPa and room temperature until the pH of the permeate became neutral. When maintained as described above, the membrane exhibited a constant initial permeate flux after daily use.

The deionised water permeate flux and solution permeate flux of the ultrafiltration membrane were measured with a rotameter at constant trans-membrane pressure.

Effect of the SDS concentration on the rejection of Zn2+
To evaluate the filtration efficiency in removing Zn2+ from the feed solution, we used the rejection rate R, expressed as:

R = (1 − Cp/Ci) × 100    (1)

where Ci is the initial concentration of Zn2+ (mg/ℓ) in the feed solution and Cp is the concentration of Zn2+ (mg/ℓ) in the permeate.

Figure 3 shows the variation of the Zn2+ rejection with the initial SDS concentration, ranging from 0.2×cmc (1.56 mmol/ℓ) up to 3×cmc (23.4 mmol/ℓ), at a constant Zn2+ concentration of 50 mg/ℓ and a constant pressure of 0.07 MPa. The critical micelle concentration of SDS (7.8 mmol/ℓ) was obtained by conductivity measurement (not shown). The rejection of Zn2+ increased with the initial concentration of SDS: as observed from the figure, the rejection of Zn2+ increased from 38.6% to 97.5% when the initial SDS concentration grew from 0.2×cmc to 0.8×cmc. When the SDS concentration is below its cmc, no micelles are present in the bulk solution in theory and no rejection of Zn2+ is expected. The observed rejection can be attributed primarily to concentration polarisation. Concentration polarisation is an important characteristic of all ultrafiltration systems, caused by the accumulation of retained solutes or particles on the membrane surface. Some level of concentration polarisation may have a beneficial effect in terms of permeate rejection: the increased solute concentration in the vicinity of the membrane surface has been shown to act as a "secondary" membrane and aids in rejecting solutes. When the initial SDS concentration is below the cmc, all the surfactant molecules are in the form of free monomers, whose size is much smaller than the pore diameter of the membrane. Under these conditions monomers should easily cross the membrane, and yet the surfactant is partly retained. The surfactant monomer is impeded as it passes through the membrane into the permeate, since the permeate concentration is lower than the cmc; this retardation may be caused by charge or steric effects (Gzara and Dhahbi, 2001). The SDS concentration rejected by the membrane thus becomes higher in the region of the retentate solution adjacent to the membrane surface than in the bulk solution; this region is called the concentration polarisation layer. When the SDS concentration at the concentration polarisation layer reaches the cmc, the SDS monomers there begin to form large numbers of large micelles.
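A minimal sketch of Eq. (1) in code; the permeate concentrations below are back-calculated from the reported rejections for a 50 mg/ℓ feed, and are therefore illustrative rather than measured values.

```python
def rejection_percent(c_feed: float, c_permeate: float) -> float:
    """Eq. (1): R = (1 - Cp/Ci) * 100, concentrations in mg/L."""
    return (1.0 - c_permeate / c_feed) * 100.0

# For a 50 mg/L Zn2+ feed, the reported end points imply:
print(rejection_percent(50.0, 30.7))   # ~38.6% at 0.2 x cmc
print(rejection_percent(50.0, 1.25))   # 97.5% at 0.8 x cmc
```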
These micelles provide additional adsorption sites for the Zn2+ in the feed solution and reduce the fraction of Zn2+ passing through the membrane to the permeate side. Furthermore, an increase in the initial SDS concentration also results in stronger concentration polarisation (a much larger number of larger micelles) at the concentration polarisation layer. The rejection of Zn2+ therefore increased rapidly as the initial concentration of SDS grew from 0.2×cmc to 0.8×cmc.

Once the SDS concentration exceeded its cmc, the rejection varied little (98% to 99%). Aoudia et al. (2003) reported that Cr3+ rejections of 99% were obtained at total surfactant (SDS-nonylphenol ethoxylate) concentrations of 3×cmc and 30×cmc. Interestingly, this rejection (above the SDS cmc) is practically independent of the surfactant concentration at a constant metal ion concentration, strongly suggesting concentration polarisation as the operative mechanism. In terms of this mechanism, the true rejection of the solute is no longer a function of the initial SDS concentration in the bulk solution but of the SDS concentration at the concentration polarisation layer when the solute concentration remains constant. Indeed, at initial SDS concentrations of 0.8×cmc and 3×cmc the rejections were 97.5% and 99%, respectively (Fig. 3). Thus, some level of concentration polarisation is a valuable practical feature of the MEUF process, in terms of the low surfactant concentration required to achieve high solute rejection.

The economic viability of the MEUF process depends strongly on the ability to recover the surfactant, which is still a challenging task. A low-surfactant-concentration system is therefore highly desirable in order to reduce surfactant usage and surfactant loss, and the concentration polarisation effect may assist in achieving such aims. When the initial SDS concentration was equal to 0.8×cmc, not only was a high Zn2+ rejection (97.5%) obtained, but the permeate flux was also comparatively high (Fig. 3). The initial SDS concentration of 0.8×cmc is therefore an appropriate value for effective treatment at low surfactant concentration.

Effect of the SDS concentration on the permeate flux and the secondary resistance
In spite of the many advantages of the ultrafiltration process, flux decline remains the most serious inherent obstacle to the efficient application of MEUF. Therefore, not only the separation efficiency of metal ions and the optimisation of process variables but also the flux behaviour in micellar-enhanced ultrafiltration should be investigated systematically.
The resistance of the ultrafiltration membrane in micellar-enhanced ultrafiltration includes the hydraulic resistance of the membrane and the secondary resistance caused by fouling of the membrane. They are expressed as:

Rm = Δp / (μw Jw)    (2)

Rf = Δp / (μs Js) − Rm    (3)

where Rm is the hydraulic resistance of the membrane (m-1), Rf is the secondary resistance of the membrane (m-1), μw and μs are the viscosity coefficients of water and solution (Pa·s), and Jw and Js are the permeate fluxes of water and solution (m3/m2·s). The general relationship between the solution permeate flux and the total resistance is then:

Js = Δp / [μs (Rm + Rf)]    (4)

In this study a modified polysulphone hollow-core fibre ultrafiltration membrane was used. The deionised water permeate flux of the membrane (20 ℓ/m2·h) is much higher than that of the lamellar membrane (2.31 ℓ/m2·h) reported by Juang et al. (2003), indicating that the hollow-core fibre ultrafiltration membrane performs much better than the lamellar membrane.

The study of the permeate flux variation with the initial SDS concentration (0.2×cmc to 3×cmc) in the feed solution (Fig. 4) reveals that, as the ultrafiltration progressed, the permeate flux decreased and the secondary resistance increased with increasing initial SDS concentration. As shown in Fig. 4, the permeate flux decreased to 50% of the deionised water flux when the initial SDS concentration was equal to 3×cmc. The viscosity coefficient of the solution μs increased only very slightly with the initial SDS concentration (not shown), so this effect could be neglected in the experiment. The reduction in the permeate flux can be attributed to the concentration polarisation explained above. Although no micelles are present in the initial feed solution at initial SDS concentrations below the cmc, a larger fraction of surfactant is present in micellar form in the vicinity of the membrane surface. The micelles accumulate continually on the membrane surface, and some small micelles block the membrane pores. Further, an increase in the initial SDS concentration also results in stronger concentration polarisation at the polarisation layer. Consequently, the secondary resistance of the membrane increased and the permeate flux decreased synchronously. Similarly, when the initial SDS concentration was higher than the cmc, the permeate flux through the membrane decreased because of a large increase in the secondary resistance to flow caused by the concentration polarisation.

Although the permeate flux decreased with increasing initial SDS concentration owing to the concentration polarisation, the permeate flux of 13.2 ℓ/m2·h obtained at an initial SDS concentration of 0.8×cmc was comparatively high. This indicates good potential for practical application of the MEUF technique with the hollow-core fibre ultrafiltration membrane to remove metal ions from wastewater at low surfactant concentration.
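The resistance bookkeeping of Eqs. (2)-(4) can be sketched as follows, using the fluxes reported above (20 ℓ/m2·h for deionised water, 13.2 ℓ/m2·h at 0.8×cmc) and Δp = 0.07 MPa; the viscosity value is an assumed nominal figure for water near 30°C, in line with the observation that μs ≈ μw here.

```python
# Series-resistance model of Eqs. (2)-(4).
DP_PA = 0.07e6        # trans-membrane pressure, Pa
MU_PA_S = 1.0e-3      # assumed viscosity near 30 C; mu_s ~ mu_w here

def to_si_flux(flux_l_m2_h: float) -> float:
    """Convert a flux in L/(m2*h) to m3/(m2*s)."""
    return flux_l_m2_h / 1000.0 / 3600.0

j_w = to_si_flux(20.0)    # deionised-water flux of this membrane
j_s = to_si_flux(13.2)    # solution flux at 0.8 x cmc

r_m = DP_PA / (MU_PA_S * j_w)          # Eq. (2): hydraulic resistance
r_f = DP_PA / (MU_PA_S * j_s) - r_m    # Eq. (3): secondary resistance
print(f"R_m = {r_m:.2e} 1/m, R_f = {r_f:.2e} 1/m")

# Consistency check with Eq. (4): the total resistance reproduces J_s.
assert abs(DP_PA / (MU_PA_S * (r_m + r_f)) - j_s) < 1e-12
```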
Effect of the SDS concentration on the permeate SDS concentration
The permeate SDS concentration should be considered when evaluating the performance of the MEUF process, because surfactant in the permeate may cause secondary pollution. The variation in permeate SDS concentration with the initial SDS concentration (0.2×cmc to 3×cmc) in the feed solution is shown in Fig. 5; the experimental results were obtained at a constant pressure of 0.07 MPa. As observed from the figure, the permeate SDS concentration increased with the initial SDS concentration. When the initial SDS concentration was below the cmc (0.8×cmc), almost 53% surfactant rejection was reached. As the size of surfactant monomers is much smaller than the membrane pore size, the monomers should in principle pass easily through the membrane; the observed rejection can be attributed to the concentration polarisation explained above and to adsorption of surfactant at the membrane surface. When the initial SDS concentration was higher than the cmc, the permeate SDS concentration increased with the initial SDS concentration but did not exceed the cmc value (when the initial SDS concentration increased to 3×cmc, the permeate SDS concentration was 4.2 mmol/ℓ). Whatever the surfactant concentration in the feed, the surfactant concentration in the permeate remains lower than the cmc (Gzara and Dhahbi, 2001). Consequently, the loss of the surfactant SDS and the secondary pollution by SDS are weak.

Effect of the SDS concentration on the volume concentrated ratio and the concentration concentrated ratio
The volume concentrated ratio α and the concentration concentrated ratio β are also used in our experiments to evaluate the ultrafiltration efficiency. They are expressed as:

α = Vi / Vr    (5)

β = Cr / Ci    (6)

where Vi is the initial volume of the feed solution (ℓ), Vr is the volume of the retentate solution (ℓ), Ci is the initial concentration of Zn2+ (mg/ℓ) in the feed solution and Cr is the concentration of Zn2+ (mg/ℓ) in the retentate.

Figure 6 shows the variation of the volume concentrated ratio and the concentration concentrated ratio with the initial SDS concentration, ranging from 0.2×cmc up to 3×cmc, at an initial Zn2+ concentration of 50 mg/ℓ and a constant pressure of 0.07 MPa. With increasing initial SDS concentration, the volume concentrated ratio decreased gradually. The concentration concentrated ratio increased as the initial SDS concentration grew from 0.2×cmc to 0.8×cmc, reaching its maximum at 0.8×cmc; beyond this concentration it decreased gradually, probably owing to the increase in retentate volume. High volume and concentration concentrated ratios not only reflect better MEUF efficiency but are also favourable for recovering surfactant and metal ions from the retentate by methods such as chemical precipitation (Juang et al., 2003) and electrolysis (Liu and Li, 2004). Recovery of surfactant and metal ions for reuse makes the MEUF process more economical and safer.
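Eqs. (5) and (6) in code, with hypothetical feed and retentate values for illustration (the paper reports α and β only graphically in Fig. 6):

```python
def alpha(v_feed_l: float, v_retentate_l: float) -> float:
    """Eq. (5): volume concentrated ratio, alpha = V_i / V_r."""
    return v_feed_l / v_retentate_l

def beta(c_retentate: float, c_feed: float) -> float:
    """Eq. (6): concentration concentrated ratio, beta = C_r / C_i."""
    return c_retentate / c_feed

# Hypothetical run: 10 L feed at 50 mg/L Zn2+ concentrated to 2 L
# of retentate at 230 mg/L Zn2+.
print(alpha(10.0, 2.0))      # 5.0
print(beta(230.0, 50.0))     # 4.6
```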
Effect of the Zn2+ concentration on the rejection of Zn2+
The effect of varying the initial Zn2+ concentration on the Zn2+ rejection was investigated at a constant initial SDS concentration of 0.8×cmc and a constant pressure of 0.07 MPa. According to Fig. 7, as the initial Zn2+ concentration increased, the Zn2+ rejection decreased gradually and the permeate Zn2+ concentration increased synchronously. The Zn2+ rejection decreased from 98% to 69.8% as the initial Zn2+ concentration rose from 20 mg/ℓ to 300 mg/ℓ. This is because the initial SDS concentration was held constant at 0.8×cmc: with a constant initial SDS concentration, the number of micelles produced by concentration polarisation is approximately constant over the range of initial Zn2+ concentrations from 20 mg/ℓ to 300 mg/ℓ, so the number of adsorption sites afforded by the micelles is limited. As the initial Zn2+ concentration increases, a large fraction of the adsorption sites becomes occupied by Zn2+ ions and the number of free sites decreases synchronously; a large number of un-adsorbed Zn2+ ions therefore pass through the membrane into the permeate solution.

The efficient removal of Zn2+ at low Zn2+ feed concentrations is a very important feature of MEUF. As observed in Fig. 7, the Zn2+ rejection was 98% at an initial Zn2+ concentration of 20 mg/ℓ. Other metal clean-up methods, such as precipitation by pH adjustment, lose efficiency as the metal solution is diluted; MEUF, on the contrary, becomes more efficient upon dilution.

Effect of the Zn2+ concentration on the permeate SDS concentration and the permeate flux
Figure 8 shows the variation of the permeate SDS concentration and the permeate flux with the initial Zn2+ concentration, ranging from 20 mg/ℓ up to 300 mg/ℓ, at an initial SDS concentration of 0.8×cmc and a constant pressure of 0.07 MPa. Both the permeate flux and the permeate SDS concentration remained constant as the initial Zn2+ concentration varied. This is because the initial SDS concentration was held constant at 0.8×cmc; the permeate SDS concentration and the permeate flux are independent of the initial Zn2+ concentration.

Adsorption isotherm
The Zn2+ adsorption isotherm to the SDS micelle was established to investigate the characteristics of Zn2+ adsorption to the SDS micelle (Fig. 9). The isotherm revealed that Zn2+ adsorption increased with increasing Zn2+ concentration in the bulk liquid phase; at the plateau of the isotherm, however, the amount of adsorbed Zn2+ remained constant as the Zn2+ concentration increased further. This behaviour can be described by the Langmuir adsorption isotherm model, which is expressed by the following equation (Stumm and Morgan, 1996):

qe = qmax K Ce / (1 + K Ce)    (7)

where K is the equilibrium adsorption constant (ℓ/mmol), qmax is the maximum amount of adsorbed Zn2+ (mmol/g), qe is the amount of adsorbed Zn2+ at equilibrium (mmol/g) and Ce is the molar concentration of Zn2+ in the bulk liquid phase at equilibrium (mmol/ℓ).

The value of Ce can be determined from the permeate Zn2+ concentration, and the amount of Zn2+ adsorbed at equilibrium, qe, is calculated from a mass balance. Rearranging Eq. (7) gives:

1/qe = 1/qmax + 1/(qmax K) · (1/Ce)    (8)

Equation (8) shows a linear relationship between 1/qe and 1/Ce. A linear plot of 1/qe against 1/Ce was therefore employed to obtain the values of K and qmax from the slope and intercept of the plot (Fig. 10).
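The linearization in Eq. (8) lends itself to a least-squares fit; the sketch below generates synthetic (Ce, qe) pairs from the parameters ultimately reported in this paper (K = 17.2 ℓ/mmol, qmax = 2.326 mmol/g) and recovers them from the slope and intercept of the 1/qe versus 1/Ce line, mirroring the procedure behind Fig. 10. The data points are synthetic, not the study's measurements.

```python
import numpy as np

# Synthetic isotherm consistent with the reported parameters.
K_TRUE, QMAX_TRUE = 17.2, 2.326                     # L/mmol, mmol/g
ce = np.array([0.02, 0.05, 0.1, 0.3, 0.8, 2.0])     # mmol/L
qe = QMAX_TRUE * K_TRUE * ce / (1.0 + K_TRUE * ce)  # Eq. (7)

# Eq. (8): 1/qe = 1/qmax + (1/(qmax*K)) * (1/ce) is a straight line.
slope, intercept = np.polyfit(1.0 / ce, 1.0 / qe, 1)
q_max = 1.0 / intercept
K = intercept / slope
print(f"q_max = {q_max:.3f} mmol/g, K = {K:.1f} L/mmol")
```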
From the plot in Fig. 10, the Langmuir parameters and the correlation coefficient r2 were calculated. The equilibrium adsorption constant K and the maximum amount of adsorbed Zn2+, qmax, are equal to 17.2 ℓ/mmol and 2.326 mmol Zn2+ per g SDS (151 mg/g), respectively, with a correlation coefficient r2 of 0.999. These parameters show that the Langmuir equation fits the Zn2+ adsorption isotherm to the SDS micelle well. Substituting the fitted constants into Eq. (7) gives the Langmuir equation for Zn2+ adsorption to the SDS micelle:

qe = 2.326 × 17.2 Ce / (1 + 17.2 Ce)    (9)

Conclusions
The removal of Zn2+ ions from aqueous solutions by MEUF using the single anionic surfactant sodium dodecyl sulphate (SDS) at low concentrations was investigated. When the initial SDS concentration was below the cmc (0.8×cmc), an unexpectedly high Zn2+ rejection (97.5%) was obtained owing to concentration polarisation occurring near the membrane-solution interface. The true rejection of the solute is no longer a function of the initial SDS concentration in the bulk solution but of the SDS concentration at the concentration polarisation layer. Although the permeate flux decreased with increasing initial SDS concentration because of the concentration polarisation, the permeate flux of 13.2 ℓ/m2·h at an initial SDS concentration of 0.8×cmc was comparatively high. The permeate SDS concentration increased with the initial SDS concentration, but whatever the surfactant concentration in the feed, the surfactant concentration in the permeate remained below the cmc. To reduce surfactant dosage and surfactant loss, an initial SDS concentration of 0.8×cmc (6.24 mmol/ℓ) is a good choice. The removal of Zn2+ at low Zn2+ feed concentrations is very efficient.

The characteristics of Zn2+ adsorption to the surfactant micelle were represented by the Langmuir isotherm model. The equilibrium adsorption constant K and the maximum amount of adsorbed Zn2+, qmax, are equal to 17.2 ℓ/mmol and 2.326 mmol/g (151 mg/g), respectively. The Langmuir isotherm model is effective for understanding the mechanism of Zn2+ adsorption to the SDS micelle and provides a theoretical tool for the application and optimisation of the MEUF technique.

These results demonstrate the potential practicality of the MEUF technique for the removal of heavy metal ion pollutants such as Zn2+ at low surfactant concentrations and provide a scientific and technical basis for its practical application; MEUF may thus find wide use in treating wastewaters containing heavy metal ions. Further studies are needed on the mechanism by which concentration polarisation influences the rejection of metal ions and the permeate flux, on the characteristics of metal ion adsorption to surfactant micelles, and on the recovery of surfactant and metal ions.
GATA3 Transcription Factor Abrogates Smad4 Transcription Factor-mediated Fascin Overexpression, Invadopodium Formation, and Breast Cancer Cell Invasion

Background: Fascin is a pro-metastasis actin bundling protein overexpressed in basal-like breast cancer.
Results: GATA3 abrogates TGFβ- and Smad4-mediated fascin overexpression by abolishing the binding of Smad4 to the fascin promoter.
Conclusion: GATA3 is a novel suppressor of the canonical TGFβ-Smad signaling pathway.
Significance: These findings provide mechanistic insight into how TGFβ-mediated invasion and metastasis are differentially regulated in different subgroups of breast cancer.

Transforming growth factor β (TGFβ) is a potent and context-dependent regulator of tumor progression. TGFβ promotes the lung metastasis of basal-like (but not luminal-like) breast cancer. Here, we demonstrate that fascin, a pro-metastasis actin bundling protein, is a direct target of the canonical TGFβ-Smad4 signaling pathway in basal-like breast cancer cells. TGFβ and Smad4 induced fascin overexpression by binding directly to a Smad binding element on the fascin promoter. We identified GATA3, a transcription factor crucial for mammary gland morphogenesis and luminal differentiation, as a negative regulator of TGFβ- and Smad4-induced fascin overexpression. When ectopically expressed in basal-like breast cancer cells, GATA3 abrogated TGFβ- and Smad4-mediated overexpression of fascin and other TGFβ response genes, invadopodium formation, cell migration, and invasion, suggesting suppression of the canonical TGFβ-Smad signaling axis. Mechanistically, GATA3 abrogated canonical TGFβ-Smad signaling by abolishing interactions between Smad4 and its DNA binding elements, potentially through physical interactions between the N terminus of GATA3 and Smad3/4 proteins. Our findings provide mechanistic insight into how TGFβ-mediated cell motility and invasiveness are differentially regulated in breast cancer.

TGFβ signaling can either promote or suppress tumor progression depending on tumor type and stage (3). In estrogen receptor (ER)-negative breast cancer patients, overexpression of the type II TGFβ receptor is associated with worse overall survival, and up-regulation of TGFβ signature genes promotes lung metastasis (4, 5). On the other hand, TGFβ signaling has no effect on prognosis among ER-positive breast cancer patients (4, 5). It is not fully understood how TGFβ-mediated tumor metastasis is differentially regulated among breast cancer subtypes.

GATA3 is a member of the GATA family of zinc finger transcription factors and is required for the development and morphogenesis of the mammary gland (6-8). GATA3 expression levels are high in well differentiated, luminal breast cancer (ER- and/or progesterone receptor (PR)-positive; Her2 (human EGF receptor 2)-positive or -negative) but suppressed in the poorly differentiated basal-like subgroup (ER, PR, and Her2 triple negative) (6, 9-11). Targeted deletion of GATA3 in the mouse mammary gland results in expansion of luminal progenitor cells, and ectopic expression of GATA3 in mammary stem cells induces luminal differentiation (7), suggesting that GATA3 is critical to maintaining the differentiation of the luminal lineage. Ectopic expression of GATA3 in basal-like breast cancer cells caused reversal of the epithelial-to-mesenchymal transition and suppressed the metastasis of breast cancer to the lung (12-15).
There is increasing evidence suggesting that re-introduction of GATA3 in basal-like breast cancer cells induces differentiation to a luminal-like phenotype (6, 10, 12, 16); however, the regulation of TGFβ-mediated invasion and metastasis by GATA3-mediated differentiation is not clear.

Fascin is an actin bundling protein that plays a critical role in the lung metastasis of basal-like breast cancer (17, 18). Fascin promotes the metastasis of breast and other cancers by facilitating membrane protrusions such as filopodia and invadopodia during cancer cell migration and invasion (19-22). We recently reported that fascin expression is up-regulated by the canonical TGFβ-Smad3-Smad4 signaling pathway in poorly differentiated cancer cells but not in well differentiated, polygonal-shaped cancer cells (19). TGFβ-mediated filopodia formation and cancer cell invasion were almost abrogated when fascin was depleted with shRNA, suggesting that fascin is critical for TGFβ-mediated invasion and metastasis. However, it is not clear how the differentiation state of cancer cells affects TGFβ-induced fascin expression.

Here we demonstrate that Smad4 directly promotes fascin transcription by binding to a Smad binding site on the fascin promoter. The binding of Smad4 to the fascin promoter is abrogated by ectopic GATA3, potentially through direct interactions between the GATA3 N terminus and Smad3/4 proteins. Importantly, ectopic GATA3 abrogates Smad4-mediated invadopodium formation, Matrigel invasion, and the transcription of direct or indirect TGFβ response genes, suggesting that ectopic GATA3 inhibits the global response to the canonical TGFβ-Smad signaling axis. Our data imply that the high expression levels of GATA3 in ER-positive, luminal-like breast cancer might be responsible for the lack of TGFβ-mediated metastasis in this subtype of breast cancer.

Retrovirus and Stable Cell Line Preparation
Vesicular stomatitis virus-G (VSV-G) pseudotyped retroviruses were prepared and concentrated as described previously (23). Briefly, HEK293 cells (in 10-cm dishes) were co-transfected with a retrovirus vector encoding the desired cDNA (5 μg), retrovirus packaging plasmids encoding gag-pol (5 μg), and VSV-G (5 μg) using PEI reagent. The retroviruses in the supernatant were harvested and concentrated by centrifugation. To generate stable cell lines, MDA-MB-231 cells were infected with retrovirus and selected with appropriate antibiotics for 1-2 weeks before being used for experiments.

Luciferase Assay
The full-length human fascin promoter has been described previously (24). The luciferase reporter constructs were generated by inserting the full-length or truncated human fascin promoter into the pGL3 basic vector (Promega) between XhoI and HindIII. To perform the dual luciferase reporter assay, 12,000 MDA-MB-231 cells were seeded in 12-well plates and cultured overnight. Cells were transfected with 1 μg/well fascin promoter reporter together with 100 ng/well Renilla luciferase construct (pRL-TK) using Lipofectamine 2000. 24 h after transfection, the cells were treated with 5 ng/ml TGFβ for 12 h before lysis. Cell lysates were subjected to dual reporter luciferase assays according to the manufacturer's instructions (Promega).
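A minimal sketch of the dual-luciferase data reduction implied by this assay: firefly counts from the fascin promoter reporter are normalised to the Renilla (pRL-TK) counts of the same well to correct for transfection efficiency, and fold activation is taken against the untreated control. The readings below are hypothetical placeholders for luminometer output.

```python
def normalized_activity(firefly: float, renilla: float) -> float:
    """Firefly/Renilla ratio corrects for transfection efficiency."""
    return firefly / renilla

# Hypothetical triplicate readings (firefly, Renilla) per well.
control = [(1200, 800), (1100, 750), (1300, 820)]
tgfb = [(5200, 790), (4800, 760), (5500, 830)]

def mean(xs):
    return sum(xs) / len(xs)

fold = (mean([normalized_activity(f, r) for f, r in tgfb])
        / mean([normalized_activity(f, r) for f, r in control]))
print(f"TGF-beta fold activation of the reporter: {fold:.1f}x")
```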
Chromatin Immunoprecipitation (ChIP) Assays
ChIP assays were performed according to a previously reported protocol with minor modifications (25). 1 × 10^7 MDA-MB-231 cells were treated with control medium or medium containing 5 ng/ml TGFβ for 5 h and then fixed with 1% formaldehyde (Sigma F8775) for 10 min at room temperature. The cells were scraped, washed in ice-cold PBS, and centrifuged at 1500 × g at 4°C for 5 min. The pellet was resuspended in cell lysis buffer (44 mM Tris-HCl (pH 8.1), 1% SDS, and 1 mM EDTA (pH 8.0)). The cells were sonicated 3 times for 15 s each, and the lysates were centrifuged at 10,000 × g at 4°C for 15 min. An aliquot of the sheared chromatin was used as the input for the ChIP assay. The remainder of the chromatin was diluted with ChIP dilution buffer (16 mM Tris-HCl (pH 8.1), 250 mM NaCl, 0.1% SDS, 1% Triton X-100, and 1.2 mM EDTA) and rotated for at least 4 h at 4°C with primary anti-Smad4 antibody, with mouse IgG as control. 60 μl of 1:1 protein G-Sepharose was added to the immune complexes, and the mixture was rotated at 4°C for 2 h. The beads were washed 5 times with ChIP dilution buffer and eluted with ChIP elution buffer (0.1 M sodium bicarbonate, 1% SDS, 5 mM NaCl). The cross-links were reversed by incubation at 65°C for 4 h, and DNA was isolated by ethanol precipitation. Proteins associated with the DNA were digested with 50 μg of proteinase K at 37°C for 30 min. DNA was purified by phenol:chloroform extraction followed by ethanol precipitation, resuspended in 30 μl of water, and assayed by semi-quantitative PCR.

For the TGFβ-treated co-immunoprecipitation experiment, MDA-MB-231 cells overexpressing Smad4, or Smad4 and GATA3, were seeded on 60-mm dishes overnight. The cells were treated with 5 ng/ml TGFβ for 6 h and lysed in a lysis buffer containing 1 mM NaVO4 and 5 mM NaF. The lysates were incubated with anti-FLAG antibody (M2)-conjugated agarose beads for 2 h at 4°C. The beads were washed extensively, and the bound proteins were eluted by boiling in 1× SDS sample buffer for 5 min and then subjected to Western blotting.

Quantitative Real-time PCR (qPCR)
Total RNA was extracted from cultured cells using TRIzol reagent (Invitrogen), and the RNA was treated with DNase for 15 min at 37°C. Reverse transcription was performed using the iScript cDNA synthesis kit (Bio-Rad). The qRT-PCR assay was carried out on the Applied Biosystems 7900HT fast real-time PCR system using Applied Biosystems SYBR Green PCR master mix. Primers for qRT-PCR are shown in Table 1. All reactions were performed in triplicate, and the experiments were repeated at least three times.

Invadopodia Assay
The invadopodia activity assay protocol was adapted from Artym et al. (26) by plating cancer cells onto glass coverslips coated with a thin film of fluorescent gelatin. Immunofluorescence staining was performed as previously described (22, 27, 28). Briefly, 80,000 MDA-MB-231 cells were plated on Texas Red-labeled, gelatin-coated glass coverslips (18 mm). After a 24-h incubation, cells were fixed in 4% paraformaldehyde, permeabilized with antibody diluting buffer (2% BSA, 0.1% Triton X-100 in PBS), and incubated with Alexa Fluor 488 phalloidin for 30 min, with extensive washes between steps. The coverslips were then mounted onto slides and imaged using a Zeiss fluorescence microscope. To quantify gelatin degradation, fluorescence micrographs were taken of 3-5 random fields for each group. The total gelatin degradation area in each field was measured using ImageJ software by selecting regions of interest. The degradation area per cell for each field was derived by dividing the total degradation area by the total number of cells present in the field.
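The degradation quantification described above reduces to a per-field ratio; a sketch with hypothetical stand-ins for the ImageJ measurements:

```python
# Degradation area per cell: total degraded area in a field (dark
# spots measured in ImageJ) divided by the cell count of that field.
# The measurements below are hypothetical placeholders.

fields = [
    {"degraded_area_um2": 310.0, "n_cells": 24},
    {"degraded_area_um2": 275.5, "n_cells": 19},
    {"degraded_area_um2": 402.0, "n_cells": 28},
]

per_cell = [f["degraded_area_um2"] / f["n_cells"] for f in fields]
mean_per_cell = sum(per_cell) / len(per_cell)
print(f"Mean degradation area per cell: {mean_per_cell:.1f} um^2")
```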
To determine the effects on invadopodium formation, cells were plated on coverslips coated with unlabeled gelatin and stained for actin (using phalloidin) and cortactin (using anti-cortactin antibody, 1:1000 dilution). Invadopodia are defined as actin- and cortactin-rich dots on the ventral side of the cells. Cells with three or more invadopodia are defined as invadopodia-positive and otherwise as invadopodia-negative.

Cell Migration and Invasion Assay
Cells (1 × 10^5) suspended in starvation medium were added to the upper chamber of an insert (for migration assays) or a Matrigel-coated insert (for invasion assays), and the insert was placed in a 24-well dish containing medium with or without serum. Cell migration assays were carried out for 4 h, and invasion assays for 12 h. Cells were then fixed with 3.7% formaldehyde and stained with crystal violet staining solution; cells on the upper side of the insert were removed with a cotton swab. Three randomly selected fields (10× objective) on the lower side of the insert were photographed, and the cells on the lower surface of the insert were counted.

RESULTS

Fascin Is a Direct TGFβ-Smad Target Gene
When Smad3 or Smad4 was ectopically expressed in two basal-like breast cancer cell lines (MDA-MB-231 and MDA-MB-468), fascin protein levels increased from 2- to >30-fold, phenocopying the TGFβ-induced fascin overexpression in this poorly differentiated subtype of breast cancer (Fig. 1, A and B). In contrast, ectopic Smad4 had no detectable effect on fascin protein levels in three luminal-like breast cancer cell lines (MCF-7, BT-474, and T47D) (Fig. 1B). The lack of Smad4-induced fascin expression in luminal breast cancer cells suggested that Smad co-factors might play a role in this regulation.

There are two potential Smad binding sites (at −1211 and −370, respectively) on the fascin promoter (Fig. 1C). To determine whether Smad4 directly regulates fascin expression by binding to the fascin promoter, we constructed a series of luciferase reporters containing the full-length (P2900) or truncated fascin promoters (P1315, P402, and P210) (Fig. 1C). Truncation of the promoter region containing the −1211 Smad binding element had no noticeable effect on the activation of the fascin promoter by TGFβ (Fig. 1C). However, activation of the fascin promoter by TGFβ was abolished in the P210 reporter, which contains the core fascin promoter elements (29) but neither of the two CAGAC Smad binding elements (Fig. 1C). To investigate whether Smad3 and Smad4 were required for the transactivation of the fascin promoter by TGFβ, we knocked down Smad3 and Smad4 in MDA-MB-231 cells using shRNA. Smad3 and Smad4 knockdown abrogated the activation of the P402 luciferase reporter by TGFβ as well as TGFβ-induced overexpression of fascin protein (Fig. 1, D and E).

Our luciferase reporter experiments suggested that the Smad transcription complex might promote fascin expression by binding directly to the −370 Smad binding site. To determine whether this was the case, we performed ChIP experiments using anti-Smad4 antibody. The Smad4 antibody precipitated the fascin promoter in TGFβ-treated MDA-MB-231 cells but not in control cells, suggesting that Smad4 directly interacts with the −370 Smad binding element upon TGFβ activation (Fig. 1, F and G). To further confirm that the −370 site is required for TGFβ to activate fascin promoter activity, we mutated the −370 Smad binding element from CAGAC to TTAGT in the P402 reporter.
The mutation almost completely abrogated TGFβ-induced luciferase expression. Taken together, our data suggest that fascin is a novel direct target gene of the canonical TGFβ-Smad signaling pathway. The activation of fascin expression by TGFβ depends on the binding of the Smad transcription complex to the −370 CAGAC site on the fascin promoter.

GATA3 Negatively Correlates with Fascin Expression in Breast Cancer Patients
We noted that TGFβ and Smad4 induced fascin overexpression only in basal-like but not in luminal-like breast cancer cells (Fig. 1, A and B). It was also noted that most of the cancer cells that respond to TGFβ-induced fascin transcription have mesenchymal-like morphology, whereas the non-responsive cells mostly adopt epithelial-like polygonal shapes. Correlation analysis of breast cancer expression data (Fig. 2A) suggested that EMT-associated transcription factors might be involved in regulating fascin expression in breast cancer patients. We decided to focus on GATA3 because of its robust correlation with fascin. When the 99 breast cancer patients in the MSKCC cohort were sorted according to GATA3 expression levels, fascin expression levels were higher in patients with low GATA3 expression (Fig. 2B). Indeed, when patients were stratified into "GATA3 low" (GATA3 levels at or below the median level; n = 50) or "GATA3 high" (above median; n = 49) groups, the average fascin expression level in the GATA3 low patients was >2-fold higher than in the GATA3 high group (Fig. 2C; p < 0.0001, two-tailed Student's t test).
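The median stratification used here is straightforward to reproduce; the sketch below splits a cohort at the median of one gene and compares mean expression of another between the two groups. The expression values are simulated with an inverse GATA3-fascin trend purely for illustration; they are not the MSKCC data.

```python
import numpy as np

def stratify_by_median(expr: np.ndarray):
    """Boolean masks for 'low' (<= median) and 'high' (> median)."""
    low = expr <= np.median(expr)
    return low, ~low

# Simulated cohort of 99 patients with an inverse GATA3-fascin trend.
rng = np.random.default_rng(0)
gata3 = rng.normal(size=99)
fascin = 6.0 - 1.5 * gata3 + rng.normal(scale=0.8, size=99)

low, high = stratify_by_median(gata3)
ratio = fascin[low].mean() / fascin[high].mean()
print(f"mean fascin, GATA3-low vs GATA3-high: {ratio:.2f}-fold")
```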
GATA3 and Fascin Are Critical for Breast Cancer Lung Metastasis
We previously reported that the expression levels of fascin were about 2-fold higher in "TGFβ high" breast cancer patients than in "TGFβ low" patients (19). Breast cancer patients with high levels of fascin are more prone to developing lung metastasis (17, 18). Intriguingly, it was recently suggested that TGFβ also promotes lung metastasis in ER-negative breast cancer patients (5). To evaluate whether fascin and GATA3 play a role in TGFβ-mediated breast cancer lung metastasis, we stratified the 50 TGFβ high breast cancer patients in the MSKCC cohort (19) (TGFβ1 levels at or above the median) into subgroups based on fascin or GATA3 expression levels. As shown in Fig. 2, D and E, TGFβ high breast cancer patients with high fascin levels (or low GATA3 levels) were remarkably more susceptible to developing lung metastasis (p = 0.003, HR = 10.7 for fascin; p = 0.004, HR = 9.4 for GATA3; log-rank tests) than patients with low fascin (or high GATA3) levels. In contrast, fascin and GATA3 expression levels appeared to have no significant impact on bone metastasis in this group of breast cancer patients (Fig. 2, D and E). Taken together, our data indicate that both fascin and GATA3 are critical for breast cancer lung metastasis.

Ectopic Expression of GATA3 Abrogates TGFβ- and Smad4-mediated Gene Transcription
To determine the role of GATA3 in TGFβ- and Smad4-mediated fascin transcription, we stably expressed GATA3 in MDA-MB-231 and MDA-MB-468 cells. The ectopic expression of GATA3 in the spindle-shaped MDA-MB-231 cells induced a change to an epithelial-like polygonal morphology, consistent with the previous observation that ectopic GATA3 reverses the epithelial-to-mesenchymal transition in basal-like breast cancer cells (12). When ectopically expressed in the two basal-like breast cancer cell lines, GATA3 had only a very modest inhibitory effect on basal fascin protein expression (~10-30% reduction in protein levels by Western blotting) (Fig. 3, A and B). However, GATA3 almost abolished the TGFβ- and Smad4-mediated overexpression of fascin (Fig. 3, A and B). GATA3 also abrogated the TGFβ- and Smad4-mediated increase in fascin mRNA levels, suggesting that GATA3 inhibits TGFβ- and Smad4-mediated transcription (Fig. 3C). The robust inhibition of TGFβ- and Smad4-mediated fascin overexpression in basal-like breast cancer cells was not due to unphysiologically high levels of ectopic GATA3, as the levels of ectopically expressed GATA3 in MDA-MB-231 and MDA-MB-468 cells were <10% of the endogenous GATA3 protein levels in the luminal breast cancer cell lines (MCF-7 and T47D). It is also worth noting that even at such low levels, ectopic GATA3 was sufficient to exert a robust inhibitory effect on TGFβ- and Smad4-mediated responses.

We also sought to determine whether GATA3 knockdown in luminal breast cancer cells would make them responsive to TGFβ-mediated fascin overexpression. Despite successful reduction of endogenous GATA3 protein by >80% with shRNA, the residual GATA3 levels in MCF-7 and T47D cells were still two to three times higher than the ectopic GATA3 levels in basal-like cells with enforced expression (Fig. 3D). Consequently, TGFβ-induced fascin expression remained unremarkable in these luminal breast cancer cells even after GATA3 knockdown (Fig. 3E).

To determine whether the inhibition of TGFβ- and Smad4-mediated gene transcription is specific to fascin, we used quantitative PCR to assess the effects of GATA3 on the transcription of a panel of five additional TGFβ response genes, including three genes directly regulated by the Smad transcriptional complexes (ANGPTL4, vimentin, and p21) and two genes indirectly regulated by TGFβ-Smad signaling (E-cadherin and N-cadherin). GATA3 modestly increased the expression levels of E-cadherin and decreased the levels of N-cadherin and vimentin (Fig. 3, F and G), consistent with the luminal differentiation and reversal of EMT phenotypes induced by GATA3 (7, 31). Strikingly, ectopically expressed GATA3 abrogated the TGFβ- and Smad4-mediated transcription of all five TGFβ response genes, suggesting that GATA3 might globally inhibit signaling through the canonical TGFβ-Smad pathway (Fig. 3, F and G).

GATA3 Abrogates Smad4-mediated Invadopodium Formation and Invasion
Invadopodia are adhesive membrane protrusions that coordinate ECM degradation and invasion in cancer cells (32, 33).
Invadopodia share many protein components and similar regulatory mechanisms with filopodia and are considered "invasive filopodia" in metastatic cancer cells (20, 21, 34). It was recently reported that fascin promotes invadopodium formation by stabilizing the actin core of invadopodia (20). We sought to investigate the role of TGFβ and fascin in invadopodium regulation in basal-like breast cancer cells. When stained for F-actin and cortactin, ~30% of MDA-MB-231 cells contained round actin- and cortactin-positive dots on the ventral side of the cell (Fig. 4A). When plated on glass coverslips coated with fluorescently labeled gelatin, these actin protrusions degraded the gelatin, leaving dark spots on a bright background, indicating that they were invadopodia. Treatment with TGFβ or overexpression of Smad4 increased the percentage of invadopodia-positive cells from ~30% (47 of 159) to ~90% (139 of 151) and ~50% (79 of 159), respectively (Fig. 4, A and B). Smad3 and Smad4 knockdown almost abolished TGFβ-mediated invadopodium formation without significant effects on the basal level of invadopodia-positive cells (Fig. 4, C and D), suggesting that TGFβ promotes invadopodium formation through the canonical Smad-dependent pathway. To investigate the role of fascin in TGFβ-mediated invadopodium formation, we used shRNA to knock down fascin expression in MDA-MB-231 cells. Fascin knockdown decreased the proportion of invadopodia-positive cells from ~26% (39 of 151) to ~10% (15 of 152) (Fig. 4, C and D). Although TGFβ treatment still increased invadopodium formation in fascin knockdown cells (about 20% of the cells were invadopodia-positive after TGFβ treatment), the increase was remarkably smaller than in control shRNA-expressing cells, suggesting that fascin is critical for TGFβ-mediated invadopodium formation.

Because our data indicated that GATA3 abrogates the global response to TGFβ-Smad4 signaling, we further investigated the effects of GATA3 on Smad4-mediated invadopodium formation and ECM degradation. Ectopic expression of GATA3 remarkably decreased the proportion of invadopodium-positive MDA-MB-231 cells from ~30% to <10% (14 of 150) and inhibited the gelatin degradation activity of the breast cancer cells (Fig. 4, E-G). Importantly, unlike in control MDA-MB-231 cells, overexpression of Smad4 in MDA-MB-231-GATA3 cells failed to increase either the proportion of invadopodium-positive cells or the degradation of gelatin (Fig. 4, E-G).

Next, we investigated the effects of GATA3 on Smad4-mediated migration and invasion of MDA-MB-231 cells in Boyden chamber assays. Smad4 promoted the motility and invasiveness of MDA-MB-231 cells by >2.5-fold. In MDA-MB-231-GATA3 cells, the pro-migration and pro-invasion activity of Smad4 was dramatically diminished (Fig. 4, H and I) despite similar levels of Smad4 protein expression in the control and GATA3 cells (Fig. 3B). Fascin knockdown significantly inhibited Smad4-mediated invasion (Fig. 4J), and ectopic overexpression of fascin partially rescued the inhibition of Smad4-mediated invasion by GATA3 (Fig. 4K). Taken together, our data indicate that reintroduction of GATA3 into basal-like MDA-MB-231 breast cancer cells abrogates the ability of the TGFβ-Smad signaling pathway to promote invadopodium formation, ECM degradation, and Matrigel invasion, at least partially through abrogating TGFβ- and Smad4-mediated fascin overexpression.
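For orientation, the cell counts reported above can be arranged as a 2×2 table and tested for a difference in proportions; the Fisher's exact test here is our illustration, not a statistical procedure described in the paper.

```python
from scipy.stats import fisher_exact

# Invadopodia-positive / -negative counts reported above for
# MDA-MB-231 cells: baseline 47 of 159, TGF-beta treated 139 of 151.
baseline = (47, 159 - 47)
tgfb = (139, 151 - 139)

table = [list(tgfb), list(baseline)]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.3g}")
```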
GATA3 Abrogates the Binding of Smad4 to the Fascin Promoter
To understand the molecular mechanisms by which GATA3 regulates Smad4-mediated fascin transcription, we inspected the fascin promoter for GATA3 binding elements and identified three potential GATA3 sites at −1869, −1707, and −1114 (Fig. 5A). To determine whether these GATA3 binding elements were required for GATA3 to inhibit TGFβ- and Smad4-mediated fascin transcription, we used the P402 luciferase reporter to investigate the activation of this truncated reporter by TGFβ. Surprisingly, although the P402 reporter does not contain any of the three GATA3 binding elements, ectopic GATA3 still abolished the activation of the P402 truncated promoter by TGFβ. We next investigated whether GATA3 abolished TGFβ-mediated fascin expression by inhibiting Smad3 phosphorylation. Ectopically expressed GATA3 only modestly decreased basal and TGFβ-stimulated phospho-Smad3 levels in MDA-MB-231 cells (Fig. 5B), suggesting that this is not likely to be a major mechanism.

To determine whether GATA3 might directly regulate the activity of the Smad transcriptional complex by binding to Smad proteins, we expressed HA-GATA3 alone or together with FLAG-Smad3 or FLAG-Smad4 in HEK293 cells. The FLAG-tagged Smads were immunoprecipitated with M2 anti-FLAG beads, and GATA3 bound to the beads was assayed by anti-HA immunoblotting. As shown in Fig. 5C, M2 beads precipitated GATA3 when GATA3 was co-expressed with either FLAG-Smad3 or FLAG-Smad4, but not when expressed alone, suggesting that GATA3 directly interacts with Smad transcription factors. To investigate whether Smad3 might interact with endogenous GATA3 and whether such an interaction might be regulated by TGFβ, we expressed FLAG-Smad3 in MCF-7 cells. As shown in Fig. 5D, M2 anti-FLAG beads co-immunoprecipitated endogenous GATA3 with FLAG-Smad3 in MCF-7 cells with or without TGFβ treatment, suggesting that the Smad3-GATA3 complex is not affected by TGFβ treatment.

To determine the effects of GATA3 on the formation of the Smad3-Smad4 transcriptional complex, we examined the Smad3-Smad4 interaction in MDA-MB-231 control cells and the GATA3 stable line, using FLAG-Smad4 to immunoprecipitate endogenous Smad3. Very little Smad3-Smad4 interaction was detected in either line before TGFβ stimulation (Fig. 5E). Stimulation with 5 ng/ml TGFβ increased the amount of Smad3 precipitated with FLAG-Smad4 in both cell lines; however, the formation of the Smad3-Smad4 complex was inhibited by ~70% in the GATA3 cells compared with the control cells (Fig. 5E). We postulated that the direct interaction between GATA3 and Smad3/4 and the reduced formation of the Smad3-Smad4 complex might act synergistically to inhibit the binding of Smad4 to the promoters of TGFβ response genes and thus abrogate Smad4-mediated invasion in basal-like breast cancer cells. To examine this possibility, we investigated the interaction between Smad4 and the promoters of fascin and p21 by ChIP. Only very small amounts of the fascin or p21 promoter were immunoprecipitated by anti-Smad4 antibody in MDA-MB-231 control cells or GATA3 cells in the absence of TGFβ (Fig. 5, F and G). TGFβ increased the immunoprecipitation of the p21 and fascin promoters by anti-Smad4 antibody by 5-30-fold
(Fig. 5, F and G), consistent with the notion that these two genes are directly regulated by canonical TGFβ-Smad4 signaling. However, when GATA3 was ectopically expressed in MDA-MB-231 cells, TGFβ was unable to increase the binding of Smad4 to either the fascin promoter or the p21 promoter, suggesting that GATA3 abrogates the TGFβ-mediated activation of Smad4 transcriptional activity.

The GATA3 N Terminus Interacts with Smad3 and Smad4
GATA3 is a 443-residue protein containing two transactivation domains (TA1 and TA2) at the N terminus and two DNA binding zinc finger domains (ZnF1 and ZnF2) at the C terminus (Fig. 6A). To understand the structural determinants of the interactions between GATA3 and Smad3/4, we constructed a series of HA-tagged GATA3 fragments (Fig. 6A). These fragments were co-expressed with FLAG-Smad3 or FLAG-Smad4, and their interactions were determined by co-immunoprecipitation (Fig. 6, B-E). Smad3 and Smad4 interacted strongly with both the N1 (1-259) and the N2 (1-295) fragments and weakly with the C1 (259-443) fragment, suggesting that the interactions with Smad3/4 mainly involve the N-terminal region containing the two transactivation domains (Fig. 6, B and C). The N2 fragment (N1 plus ZnF1) appeared to interact with Smad3/4 more strongly than the N1 fragment, suggesting that zinc finger domain 1 might strengthen the interactions between Smad3/4 and the GATA3 N terminus and might account for the residual interaction activity of the C1 fragment. To test this possibility we constructed a C2 fragment containing residues 295-443 (C1 minus ZnF1) (Fig. 6A). The C2 fragment appeared to be prone to degradation, but we obtained high expression levels of this fragment after inhibiting the proteasome pathway with MG132 (2 μM for 12 h before cell lysis and immunoprecipitation; Fig. 6, D and E). The MG132 treatment did not affect the interaction between full-length GATA3 and Smad3/4; however, despite its high expression levels in MG132-treated cells, the C2 fragment failed to interact with either Smad3 or Smad4 (Fig. 6, D and E). Taken together, our data suggest that the interactions between GATA3 and Smad3/4 are mainly mediated by the GATA3 N terminus and further strengthened by the ZnF1 domain.

DISCUSSION

Fascin is a pro-metastasis actin bundling protein overexpressed in all of the carcinomas examined to date (35).
In breast cancer patients, fascin expression levels are significantly higher in the basal-like subgroup when compared with the luminal subgroup or with normal breast tissues (17, 36). There is emerging evidence suggesting that cytokines and growth factors in the tumor microenvironment, such as TGFβ, IL-6, and EGF, may promote fascin overexpression in cancer cells (19, 24, 37-39). We previously reported that TGFβ promoted fascin overexpression in breast and lung cancer cells through a Smad3- and Smad4-dependent but MAPK-independent pathway (19); however, Fu et al. (38) reported that TGFβ mediated fascin overexpression through a Smad-independent but MAPK-dependent pathway in gastric cancer. It is not immediately clear whether the discrepancy is due to the different types of cancer cells used in the two studies. Nonetheless, our data here further indicate that fascin is a direct target gene of the canonical TGFβ-Smad signaling pathway, at least in basal-like breast cancer. TGFβ activates the transcription of the fascin gene by promoting the binding of Smad4 to the −370 Smad binding sites on the fascin promoter.

[Figure 6 legend: The physical interactions between Smad3/4 and GATA3 were mediated by its N-terminal region and ZnF1 domain. A, schematic illustration of the structural organization of GATA3 domains and the truncated GATA3 fragments used in this study; TA1, TA2, ZnF1, and ZnF2 are transactivation domains 1 and 2 and zinc finger domains 1 and 2, respectively. The beginning and ending residue numbers for each domain/region/fragment are as indicated. B-E, physical interactions between full-length GATA3, GATA3 fragments, and Smad3 and Smad4 were determined by immunoprecipitation (IP). HEK293 cells were transiently transfected with HA-tagged GATA3 and GATA3 fragments alone or together with FLAG-Smad3 or FLAG-Smad4 as indicated in the respective panels. FLAG-tagged Smads were precipitated with M2 beads, and co-precipitated GATA3 fragments were detected by Western blotting (IB). Cells in D and E were treated with 2 μM MG132 for 12 h before cell lysis and immunoprecipitation. The two asterisks in D mark the IgG heavy chain from M2 beads that co-migrates with FLAG-Smad3.]

Our findings, together with previous reports on the regulation of fascin expression by Stat3 and NF-κB (37, 40), suggest that signaling pathways downstream of inflammatory cytokines (e.g., TGFβ, IL-6, TNFα) might be responsible for fascin overexpression in metastatic cancers. The effects of the inflammatory tumor microenvironment on fascin overexpression warrant further exploration in the future. Our earlier data suggested that the differentiation state of the cancer cells might affect the response to TGFβ-mediated fascin overexpression (19). Interestingly, the up-regulation of fascin by TGFβ and Smad4 was only observed in basal-like breast cancer cells but not in the luminal-like cells. By examining two breast cancer microarray datasets, we identified GATA3, a master regulator of mammary morphogenesis and luminal differentiation (6-9, 31), as a potential regulator of fascin expression. Indeed, ectopically expressed GATA3 abrogated the TGFβ- and Smad4-mediated transcription and overexpression of fascin in basal-like breast cancer cells. Our data further indicated that ectopically expressed GATA3 might globally suppress the transcriptional activity of canonical TGFβ-Smad signaling and abrogate the ability of Smad4 to promote invadopodium formation, cell migration, and invasion in MDA-MB-231 cells.
The abrogation of Smad4-mediated responses by GATA3 was probably due to the blockade of the interaction between Smad4 and its DNA binding sites. Intriguingly, ectopic expression of GATA3 in MDA-MB-231 cells, although at a relatively low level when compared with luminal cancer cells, remarkably inhibited invadopodium formation. Such inhibition is unlikely to act through fascin, as GATA3 only very modestly reduced basal expression levels of fascin in MDA-MB-231 cells. It was recently reported that transcription factor- or TGFβ-mediated EMT significantly promoted invadopodium formation in breast cancer cells (41). It is well established that ectopic expression of GATA3 in mesenchymal-like cancer cells induces epithelial-like phenotypes. It is possible that ectopic expression of GATA3 inhibits invadopodium formation by reversing EMT. It was previously reported that Smad3 and GATA3 physically interact with each other in T cells to enable the regulation of GATA3 target genes by TGFβ signaling (42). It was not clear, however, whether such interactions affect the canonical Smad-mediated signaling pathway. Our data indicate that, when ectopically expressed in basal-like breast cancer cells, GATA3 physically interacts with Smad3 and Smad4 and interferes with the formation of the Smad3-Smad4 transcription complex. It is tempting to hypothesize that GATA3 is a novel co-suppressor of Smad transcription factors; the physical interactions between GATA3 and Smad3/4 might block TGFβ-Smad signaling by abrogating the interactions between Smad transcription factors and their DNA binding elements. The interaction between Smad3/4 and GATA3 was mainly mediated by its N-terminal region and strengthened by the ZnF1 domain. It is interesting to note that the ZnF1 domain was previously implicated in the interactions between GATA3 and FOG1 and FOG2 (43). The effects of TGFβ on cancer progression are highly context-dependent (3, 44). Although the core components of the canonical TGFβ signaling pathway are preserved in most breast cancers, TGFβ signaling promotes lung metastasis only in ER-negative, but not ER-positive, breast cancer (4, 5, 44). Our data suggest that high expression levels of GATA3 in ER-positive, luminal-like breast cancer might contribute to the suppression of TGFβ-mediated metastasis in this subgroup of breast cancer by abrogating Smad4-mediated invadopodium formation, ECM degradation, cell migration, and invasion. Indeed, TGFβ-high breast cancer patients in the MSKCC cohort with low fascin expression or high GATA3 expression were much less likely to develop lung metastasis, lending further evidence to the notion that these two genes are critical players in TGFβ-mediated breast cancer metastasis.
7,757.4
2013-11-14T00:00:00.000
[ "Medicine", "Biology" ]
Progress in Multi-Disciplinary Data Life Cycle Management Modern science is most often driven by data. Improvements in state-of-the-art technologies and methods in many scientific disciplines lead not only to increasing data rates, but also to the need to improve or even completely overhaul their data life cycle management. Communities usually face two kinds of challenges: generic ones like federated authorization and authentication infrastructures and data preservation, and ones that are specific to their community and their respective data life cycle. In practice, the specific requirements often hinder the use of generic tools and methods. The German Helmholtz Association project "Large-Scale Data Management and Analysis" (LSDMA) addresses both challenges: its five Data Life Cycle Labs (DLCLs) closely collaborate with communities in joint research and development to optimize the communities' data life cycle management, while its Data Services Integration Team (DSIT) provides generic data tools and services. We present the most recent developments and results from the DLCLs, covering communities ranging from heavy-ion physics and photon science to high-throughput microscopy, and from DSIT. Introduction The central role of data in science has been boosted in the past few years by the advance of Big Data [1]. The sources of these data are experiments, observations and simulations. Policies like data privacy, data preservation and data curation directly affect researchers' handling of scientific data. The project 'Large-Scale Data Management and Analysis' [2] of the German Helmholtz Association covers both generic and community-specific research and development for scientific data life cycles. Data experts in the Data Life Cycle Labs (DLCLs) perform joint R&D with selected domain scientists, while data experts in the Data Services Integration Team (DSIT) are responsible for generic data tools and services. Selected results from Data Life Cycle Management In this central section of the paper, highlights of the actual R&D performed by the DLCLs and the DSIT are presented. They show in an exemplary way the breadth and depth of the challenges and solutions in data life cycle management. DLCL Key Technologies For this paper, we focus on a novel imaging method based on Localization Microscopy (LM, see Figure 1). LM is an imaging technique that focuses on the analysis of cellular nanostructures. For example, the chromatin nanostructures of eukaryotic cells have been difficult to analyze with light-optical microscopy techniques due to the physically limited resolution of 200 nm, the Abbe limit. For a deep understanding of these subcellular nanostructures, it is necessary to have resolutions down to 20 nm and below. Spectral Precision Distance Microscopy (SPDM), an embodiment of LM, allows the capture of high-resolution images in the 20 nm range. Presently, datasets produced during systematic research are in the range of several TB. Three different kinds of datasets are used: raw datasets, intermediate results and high-resolution images. In the near future, they will add up in size to 150-200 TB, which is about 100 times more than the data generated using conventional fluorescence microscopes. For managing these extremely large datasets, dealing with their descriptive metadata is very important. The metadata enable the comprehensive description of the data and their provenance, allowing the datasets to be referenced and reused.
The associated metadata of a dataset are partly embedded in the dataset itself and partly produced in an additional file during the experiment. For producing valuable research results, several aspects of handling the datasets need to be fulfilled: data sharing, referencing, long-term storage, curation and performant data transfer. These prerequisites can be fulfilled using an Open Reference Data Repository [3]. Within LSDMA, the 'KIT Data Manager' [4] repository system has been developed. It provides a generic repository architecture that can be fully customized to build community-specific data repositories. For sustainable and long-term data storage, many data back-ends, e.g. the Large Scale Data Facility (LSDF) [5], can be integrated seamlessly. The repository system provides comprehensive high-level services for • data management and staging, • metadata management, • authorization and sharing, • data discovery based on metadata. Currently, the available services are being extended to provide seamless integration of various analysis workflows and image data annotation technologies. These developments can be applied in all scientific fields in which novel measurement and imaging technologies are developed. Open Reference Data Repositories enable the results to be shared and discussed openly in the scientific communities. DLCL Energy The DLCL Energy has designed a concept for a user-oriented system for users' energy consumption data [6] and has started a prototype implementation. This modular system (see Figure 2) aims to tackle the technical challenges as well as the requirements posed by the privacy needs of the respective users. Energy data are often faulty and incomplete. Moreover, different data sources need to be incorporated into one system. Thus, specialized input modules for different kinds of data are aggregated in the input handler, which allows for error-tolerant import of those datasets. Imported data are then processed by a central module of the system, the so-called data custodian, before being sent to the database connector for storage in one or more systems. Requests for data must be issued to the request handler, as direct access to the data storage itself is not possible. The data custodian decides whether processed data are released to the requesting third party or not. Any request for data and the subsequent decisions are logged in the access log. Decisions are based on the requesting party, the requested data quality, and the user-defined rules. The user can decide to allow higher-quality data to be shared, to reduce quality before releasing data, or to deny the release of data. Data quality is defined by temporal and spatial resolution as well as artificial noise. Temporal restrictions lead to reduced sampling frequencies, whereas spatial resolution can be lowered by aggregating different data sources into one. The user can define boundaries of data quality for different third parties or decide on each request manually. The Data Custodian Service provides decision support in order to reduce the privacy-threatening impacts of data distribution. Client-side visualization of the stored energy data helps users to understand the implications a release of their data might have. Thus, the user and third parties can negotiate data qualities that allow third parties to conduct their analyses while at the same time protecting the users' privacy.
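The quality reduction performed by the data custodian lends itself to a compact illustration. The following is a minimal Python sketch of the idea, not taken from the LSDMA code base; the function names, the fixed aggregation factor, and the noise scale are illustrative assumptions.

import numpy as np

def reduce_temporal_resolution(readings, factor):
    """Lower temporal resolution by averaging non-overlapping windows."""
    n = len(readings) // factor * factor  # drop a trailing partial window
    return readings[:n].reshape(-1, factor).mean(axis=1)

def aggregate_spatial(sources):
    """Lower spatial resolution by summing several data sources into one."""
    return np.sum(sources, axis=0)

def add_artificial_noise(readings, rel_scale, rng):
    """Perturb released values with zero-mean Gaussian noise."""
    return readings + rng.normal(0.0, rel_scale * readings.std(), len(readings))

# Example: a third party is only granted 15-minute averages with noise,
# computed from one day of 1-minute smart-meter readings of two households.
rng = np.random.default_rng(42)
meter_a = rng.uniform(0.1, 2.0, 60 * 24)  # kW, synthetic stand-in data
meter_b = rng.uniform(0.1, 2.0, 60 * 24)
released = add_artificial_noise(
    reduce_temporal_resolution(aggregate_spatial([meter_a, meter_b]), 15),
    rel_scale=0.05, rng=rng)

In a real deployment, the data custodian would choose the window length, the set of aggregated sources, and the noise scale per requesting party according to the user-defined rules, and log the decision in the access log.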
The Data Custodian Service can be used not only on a local level but also as part of a larger system in a hierarchical architecture for the entire Smart Grid [7]. DLCL Earth and Environment In climatology, a particular task is the comparison and calibration of observational data from remote sensing instruments mounted on the ground, on aircraft, on balloons and on satellites. This requires matching the geolocations and times of a pair of devices within given ranges. In [8], the algorithm used is described, and the speedup of the geo-matching achieved by using parallel processes to query a MongoDB database [9] is explained. Meanwhile, we have imported the geolocations and times of 22 devices and further improved the geo-matcher. In Figure 3, an example of the performance improvement due to parallelization is illustrated. DLCL Health Research on the anatomical structure of the human brain at the level of single nerve fibers is one of the most challenging tasks in neuroscience today. In order to understand the connectivity of brain regions on the one hand and to study neurodegenerative diseases on the other, a detailed three-dimensional map of nerve fibers has to be created. One mapping technique is Three-Dimensional Polarized Light Imaging (3D-PLI) [12], which allows the study of brain regions with a resolution at sub-millimeter scale. For this purpose, about 1,500 slices of the post-mortem brain, each 70 microns thick, are imaged with a microscopic device using polarized light. The images of brain slices are processed with a chain of tools for calibration, independent component analysis, enhanced analysis, stitching and segmentation. These tools have been integrated into a UNICORE workflow [13], exploiting many of the workflow system's features, such as control structures and human interaction. Prior to the introduction of the UNICORE workflow system, the tools involved were run manually by their respective developers. This approach led to delays in the entire process. The introduction of a fully automated UNICORE approach reduced the makespan of the entire workflow from weeks to hours; at the same time, the results are now highly reproducible and scalable. Tailored solutions were worked out for some peculiarities of the workflow system. For example, in order to use the results of one workflow job as input in the next job, the workflow system usually copies these data to the common workflow storage before copying them into the working directory of the next job. The amount of data for a single brain slice is on the order of up to 1 TB, with intermediate results at the same scale. Thus, the total amount of data easily adds up to several TB of data movement within the workflow, which can be avoided by working directly on a central workflow storage that is available in the file system of the machine running each job. Additionally, configured storages can be used if there are shared file systems among multiple machines at a single site. Another task for processing large data sets in the 3D-PLI context is the workflow support for iterating over arbitrary file sets of image data. A brain slice in the workflow is composed of tiles. The number of tiles belonging to a single slice and their names are not known before the workflow execution. All tiles belonging to a slice are put in a directory, serving as input to the workflow.
Thus, the workflow engine is configured to iterate efficiently over the tiles, generating independent jobs and intermediate results. Figure 4 shows the final result of the workflow execution for one processed brain slice. DLCL Structure of Matter A photon science highlight of this DLCL was presented at this same conference [14]; for this paper, we focus on heavy-ion physics. The exact amount of computing, storage and archiving required for the Facility for Antiproton and Ion Research (FAIR) [15] depends on many factors but is certainly beyond the capacity of a single computing centre. The required resources are dominated by the experiments Compressed Baryonic Matter (CBM) [16] and Anti-Proton Annihilations at Darmstadt (PANDA) [17]. Current estimates for the sum of all experiments are 300,000 cores and 40 PB of disk space, plus the same amount of archive space, during the first year of data taking. Especially in order to meet peak demands for computing, it may be necessary to offload some of the computing tasks to public or community clouds, local HPC resources and supercomputers. In this contribution, an enabling technology is described that makes it possible to include local HPC resources in a distributed computing environment for FAIR. A prototype has been implemented which will be operated in production mode for the ALICE Tier-2 centre at GSI [18] within the global Worldwide LHC Computing Grid [19] environment. An xrootd-based [20] storage infrastructure has been developed and implemented which can also be used by Grid jobs in the firewall-protected environment of the GSI [21] HPC cluster. The main elements are the xrootd redirector and the xrootd forward proxy server. The redirector uses the split directive of xrootd and redirects external clients to the external interface of the GSI storage element and internal clients to the internal interface, which is directly connected to the local InfiniBand [22] cluster. The xrootd forward proxy server enables Grid jobs running inside the protected HPC environment to read input data from external data sources via the proxy interface. Writing to external storage elements is possible via the same technique. The setup is shown in Figure 5. DSIT Most of the institutions participating in DSIT have a strong background in X.509 certificate-based federated identity management, which is used within the High Energy Physics (HEP) community, among others. Experience shows that a relevant share of users are unable to use X.509 certificate-protected resources. At the same time, a significant increase in users with access to SAML-based [23] authentication is observed. This can especially be seen in the education sector, where SAML infrastructures are rolled out and provide user accounts to all students and employees by default. To allow widespread usage of sophisticated e-infrastructures, a better coexistence of SAML and X.509 has to be established. One approach would be to modify the infrastructure services to support SAML natively. However, this is impractical due to the complexity of, and the manifold operational processes established upon, X.509. DSIT is therefore working on concepts and methods to provide migration paths from X.509 certificates to other authentication infrastructures such as SAML or OpenID Connect [24]. The goal is to be transparent to the user, e.g. by translating credentials on the user's behalf. This can be applied at several levels, e.g. as described in [25, 26, 27].
It is important to note that information to be used for authorisation decisions should survive a translation process; furthermore, the establishment of trust relations between credential provider and consumer is a complex issue in its own right. In LSDMA's DSIT, we are working on two approaches to improving access to our resources via SAML while still supporting X.509 where feasible. One involves the use of an online certification authority (CA), e.g. DFN SLCS [27], while the other builds on the replacement or extension of core authentication components in the infrastructure. The first approach is technically feasible: a user visits the SLCS portal, to which the user authenticates via SAML at the home institution's identity provider (IdP). The DFN SLCS CA is accredited within the IGTF trust framework [28] and can issue certificates for use in LCG. However, organisational challenges have yet to be addressed. Firstly, every IdP has to sign a contract with SLCS to ensure that all users given the entitlement required to obtain a certificate have undergone existing identity vetting procedures. Secondly, most IdPs do not currently support different levels of assurance for their users, let alone adhere to one common scheme for this. Currently, command-line access is not supported, and to some extent certificates still have to be handled by hand. LSDMA is working on an improved web client to replace the current Java Web Start-based approach. LSDMA is also pushing forward a command-line client, which uses the ECP profile of SAML [29]. The second approach is to extend the Lightweight Directory Access Protocol (LDAP) service as one of the core components that handle local authentication of users. We are building on top of the existing work of [25, 30], which was started at KIT. The initial status is that non-web logins via SAML/ECP are supported for any service that can authenticate against either Pluggable Authentication Modules (PAM) or LDAP. Both ECP modes are supported: the less secure proxy mode as well as the more secure enhanced client mode. Both modes use the password provided by the user via LDAP to authenticate via SAML/ECP against a backend IdP. This approach is very versatile and can in principle be extended to support technologies like OpenID on the backend. We are currently working on an extension to support Kerberos [31] and gridFTP [32] on the client side. Kerberos, however, requires users to generate a service password once, which has to be used subsequently. The support for gridFTP is implemented via the Globus authorisation callout, in which the subject of the X.509 certificate is passed via LDAP. Trust is established by relying on the fact that gridFTP has verified the user's certificate. Additionally, we are conducting conceptual work to ensure that third-party authorisation can be supported. For this, we plan to follow the VOMS concept in that an external (web) service is offered for the administration of group membership. The membership information can then be retrieved by the Service Provider (SP) from the external (web) service and mapped to UNIX group IDs by the LDAP service. Lessons Learned From a distance, it might seem that scientific communities 'only' need large-scale storage, but practice shows that this is not the case. The challenges posed by the advance of scientific Big Data are diverse. Though the communities recognize these challenges, their main focus remains on the analysis of their scientific data.
Most of the challenges are community-specific, as can be seen from the joint R&D performed by the DLCLs and their respective communities. Tools and workflows for running experiments can be changed and replaced only gradually. Even if new technologies and approaches promise major advances, their implementation might not be feasible. This is what makes the collaboration of domain scientists and data experts in the planning phase of new experiments so valuable. When LSDMA started in 2012, all its subprojects started simultaneously. Syncing developments between the DLCLs and DSIT was a process that took time. The communities knew their immediate needs rather well, but carving out the mid- and long-term requirements that were common within these communities required much communication and reflection. As the needs of the LSDMA communities differ substantially, tools and workflows developed for one particular community are rarely taken up by another community; yet the ideas and concepts behind these tools are reused when designing solutions for other communities. Handling scientific data has become a very important topic and will become even more important.
4,026.4
2015-12-23T00:00:00.000
[ "Computer Science", "Environmental Science", "Engineering" ]
Feasibility of using respiration-averaged MR images for attenuation correction of cardiac PET/MR imaging Cardiac imaging is a promising application for combined PET/MR imaging. However, current MR imaging protocols for whole-body attenuation correction can produce spatial mismatch between PET and MR-derived attenuation data owing to a disparity between the two modalities' imaging speeds. We assessed the feasibility of using a respiration-averaged MR (AMR) method for attenuation correction of cardiac PET data in PET/MR images. First, to demonstrate the feasibility of motion imaging with MR, we used a 3T MR system and a two-dimensional fast spoiled gradient-recalled echo (SPGR) sequence to obtain AMR images of a moving phantom. Then, we used the same sequence to obtain AMR images of a patient's thorax under free-breathing conditions. MR images were converted into PET attenuation maps using a three-class tissue segmentation method with two sets of predetermined CT numbers, one calculated from the patient-specific (PS) CT images and the other from a reference group (RG) containing 54 patient CT datasets. The MR-derived attenuation images were then used for attenuation correction of the cardiac PET data, which were compared to the PET data corrected with average CT (ACT) images. In the myocardium, the voxel-by-voxel differences and the differences in mean slice activity between the AMR-corrected PET data and the ACT-corrected PET data were found to be small (less than 7%). The use of AMR-derived attenuation images in place of ACT images for attenuation correction did not affect the summed stress score. These results demonstrate the feasibility of using the proposed SPGR-based MR imaging protocol to obtain patient AMR images and using those images for cardiac PET attenuation correction. Additional studies with more clinical data are warranted to further evaluate the method. PACS number: 87.57.uk While PET data are acquired over several minutes, each CT slice is captured in less than 1 s. Similarly, in whole-body PET/MR imaging, MR images for attenuation correction, unlike PET data, are usually acquired using a breath-hold Dixon sequence, which takes about 18 s for each 21 cm bed position. (14) Examples of respiration-associated attenuation artifacts in clinical whole-body PET/MR have been reported by Keller et al. (15) The difference in image acquisition time suggests that artifacts caused by spatial mismatch can also occur in cardiac PET/MR imaging. For cardiac PET/CT attenuation correction, the use of respiration-averaged CT (ACT) images has been reported to reduce respiratory motion-induced misalignment of PET and CT images. (13,16,17) Similarly, we posit that using respiration-averaged MR (AMR) images for attenuation correction could reduce misalignment between cardiac PET and MR data and thus reduce myocardial perfusion artifacts in PET/MR images. As a proof of concept, in the present study, we: 1) proposed a spoiled gradient-recalled echo (SPGR)-based MR imaging protocol for obtaining cardiac AMR images under free-breathing conditions; 2) demonstrated the feasibility of deriving attenuation maps from AMR data; and 3) evaluated the proposed technique in a patient study.
A. Phantom study To assess the effect of respiratory motion on the proposed MR imaging protocol, we scanned a spherical phantom (diameter = 16.5 cm) containing 0.1% sodium azide under simulated respiratory motion using a 3T clinical MR imaging system (GE Discovery MR750; GE Healthcare, Waukesha, WI) integrated with a motion-enabled table (ROCKER system, GE). The spherical phantom was fixed to the top of the table. Because the table is able to generate one-dimensional periodic motion along the axial direction of the scanner, it can be used to simulate respiratory motion, which is usually modeled as one-dimensional motion along the superior-inferior direction of the patient (i.e., the axial direction of the scanner). The table moves with a prescribed velocity and range; it pauses briefly at either end of the motion, leading to a trapezoidal motion track (Fig. 1). In our experiment, we used a range of ± 1.5 cm and a velocity of 1.5 cm/s, which resulted in a motion period of 4.88 s. To obtain axial slices of the phantom under simulated respiratory motion, we performed a two-dimensional (2D) multislice, multiphase, fast SPGR sequence (field of view = 260 mm × 260 mm, slice thickness = 5 mm, frequency/phase encoding = 128 × 128, repetition time [TR]/echo time [TE] = 3.0 ms/1.4 ms, flip angle = 20°, bandwidth = ± 125 kHz) with a single-channel head coil. Fourteen temporal frames were acquired for each slice, and each frame's duration was 0.4 s, resulting in a temporal coverage of 5.6 s for each slice. A total of 30 slices were acquired, covering 150 mm along the axial direction. The scan duration was 169 s. B.1 Patient data acquisition For MR imaging, the patient was placed in an 8-channel torso coil and scanned using the GE 3T MR imaging system in a supine, arms-up position. Images of the patient's thorax and upper abdomen were obtained using the 2D multislice, multiphase SPGR sequence used for the phantom, with slightly modified parameters (TR/TE = 3.7 ms/2.2 ms, flip angle = 20°, frequency/phase encoding = 128 × 128, field of view = 400 mm × 400 mm, slice thickness = 5 mm, bandwidth = ± 125 kHz). In particular, TE was automatically determined by the console as the result of choosing the "in phase" setting, which ensures that the phase difference between water and fat signals in the MR image is minimized. TR was automatically adjusted to account for the change in TE. Axial slices were acquired for attenuation correction of the PET images. A total of 30 slices were acquired, covering 150 mm along the superior-inferior direction. The acquisition time for each 2D frame was 0.48 s, and 12 temporal frames (5.76 s) were obtained consecutively to ensure continuous and sufficient coverage of at least one respiratory cycle for each slice location. The temporal coverage was close to 5.9 s, the duration chosen in a previous ACT study, which was based on recorded breathing cycles of 600 patients. (13) The total scan duration was slightly less than 3 min, a typical PET acquisition time in oncology. B.2 Data processing Previously developed segmentation-based methods produce attenuation maps that assign discrete attenuation coefficients to each tissue class. As a result, these methods cannot directly convert AMR images into synthetic ACT images, ACT AMR, whose attenuation properties should reflect the motion blurring. Direct conversion from AMR to ACT AMR could potentially be achieved with a pattern recognition/machine learning algorithm combined with a dedicated ACT/AMR atlas, (18) or with fuzzy segmentation of the MR images.
In the present study, we circumvented this problem by processing each MR image frame acquired at a different temporal phase instead of processing the AMR images. After each frame of the MR image was converted into a synthetic CT image, ACT AMR was derived as the average of all the frames for each slice. A simple three-class (air, lung, and soft tissue) segmentation approach (19-21) was adopted to convert MR images into synthetic CT images. To overcome the low signal-to-noise ratio and spatial inhomogeneity in each frame of the MR images, we implemented the following steps to achieve better segmentation. First, anisotropic diffusion filtering (22) was applied to reduce noise while preserving edge information. Then, the soft tissue was threshold-segmented and refined with sequential morphological erosion/dilation algorithms, which aim to remove small, isolated noise clusters that were treated as "soft tissue" during the initial thresholding. Bone voxels could not be separately identified in the obtained MR image; instead, they were incorporated into the soft tissue class during segmentation. After the soft tissues were identified, the rest of the pixels in each 2D image were grouped into connected regions using a modified Moore-Neighbor tracing algorithm. (23) The region that contained pixels outside the body contour was identified as air, while all the regions inside the body were identified as lung. After segmentation, predetermined CT numbers were assigned to each segmented class to generate a corresponding synthetic CT image. Finally, the averaged attenuation images were derived as the arithmetic mean of the individual synthetic CT images of all phases. The CT numbers assigned to lung and tissue were determined by segmenting CT images obtained in cardiac PET/CT. Non-anatomical components (scanner table, blanket, etc.) were first removed from the CT images, and then the lung was segmented using a region growing algorithm with a fixed upper threshold (-350 HU). Tissue was segmented by applying a lower threshold (-500 HU) and then excluding the segmented lung. Both fat and bone were included in the soft tissue class. Class-specific mean CT numbers were then calculated from the segmented tissue classes and used to create attenuation images. We used two sets of CT numbers to generate attenuation images from MR data. In one set, the CT numbers were from the patient who underwent MR imaging (patient-specific [PS]). This set was created to ensure that the attenuation properties of the created image matched those of the patient. In the other set, the CT numbers were the means of the class-specific mean CT numbers from a 54-patient reference group (RG). This set was created to test whether population-average attenuation values could also be used for attenuation correction. The average attenuation images derived using these two sets of CT numbers, ACT AMR-PS and ACT AMR-RG, were used along with the original ACT data for attenuation correction of the PET data. Before performing attenuation correction, we removed the table from the ACT data so that the ACT data matched the AMR data. Both the ACT- and AMR-derived attenuation images were manually shifted to ensure good alignment with the emission images in the myocardium region. To reduce subjectivity, two independent observers verified the results of the manual registration.
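To make the per-frame conversion concrete, the following is a minimal Python sketch of the three-class segmentation step, assuming NumPy, SciPy, and scikit-image are available. The threshold value, structuring-element size, and function names are illustrative assumptions, and an edge-preserving total-variation filter stands in for the anisotropic diffusion filter used in the study.

import numpy as np
from scipy import ndimage
from skimage import morphology, restoration

# Illustrative class values in HU; the study derives them from segmented CT data.
HU_AIR, HU_LUNG, HU_TISSUE = -1000, -727, 4

def mr_frame_to_synthetic_ct(frame, tissue_thresh):
    """Convert one 2D MR frame into a three-class synthetic CT image."""
    # Edge-preserving denoising (stand-in for anisotropic diffusion filtering).
    smoothed = restoration.denoise_tv_chambolle(frame, weight=0.05)

    # Threshold-segment soft tissue, then remove small isolated noise clusters
    # (the morphological erosion/dilation refinement).
    tissue = smoothed > tissue_thresh
    tissue = morphology.binary_opening(tissue, morphology.disk(2))
    tissue = morphology.remove_small_objects(tissue, min_size=64)

    # Regions enclosed by the body contour but not tissue are lung;
    # everything outside the body contour is air.
    body = ndimage.binary_fill_holes(tissue)
    lung = body & ~tissue

    synthetic_ct = np.full(frame.shape, HU_AIR, dtype=np.float32)
    synthetic_ct[lung] = HU_LUNG
    synthetic_ct[tissue] = HU_TISSUE
    return synthetic_ct

def frames_to_act_amr(frames, tissue_thresh):
    """Average the per-phase synthetic CT images of one slice into ACT AMR."""
    return np.mean([mr_frame_to_synthetic_ct(f, tissue_thresh) for f in frames],
                   axis=0)

Averaging the discrete per-phase maps is what lets the final image carry intermediate attenuation values at moving tissue boundaries, reflecting the motion blurring that a single segmented map cannot represent.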
Attenuation correction of the PET data was then performed with the ACT- and AMR-derived attenuation images, the results of which are referred to as PET ACT, PET AMR-PS, and PET AMR-RG, respectively. B.3 Assessing differences in attenuation-corrected PET images Quantitative differences in the myocardium region were evaluated. The myocardium was segmented in PET ACT using a region growing algorithm with the lower threshold set at 50% of the maximal myocardium activity. We evaluated the myocardial quantification difference between MR-based and CT-based PET data by comparing voxel-by-voxel differences and mean slice activity (MSA). To assess the potential clinical impact of the quantification difference, we used a semiquantitative five-point scoring system (24) to evaluate the reformatted 17-segment perfusion map for each attenuation-corrected PET dataset. The definitions of these quantities are described below. For each voxel, the relative difference d1 and the absolute relative difference d2 were computed as d1 = (I - I_REF)/I_REF × 100% and d2 = |I - I_REF|/I_REF × 100%, where I and I_REF represent the measured uptake in the voxel in the evaluated and the reference dataset, respectively. For a slice z, the MSA was first computed as MSA_z = (1/N_z) Σ_{j in M_z} I_j, where j is the index for voxels, M_z is the set of voxels in slice z that were identified as myocardium, and N_z is the size of M_z. For comparison, the normalized mean slice activity (nMSA) was calculated as nMSA_z = MSA_z / MSA_max × 100%, where MSA_max is the maximal MSA of all slices in the attenuation-corrected PET datasets. The difference in MSA in slice z was calculated as the relative difference with respect to the reference dataset, (MSA_z - MSA_z,REF)/MSA_z,REF × 100%. The polar perfusion maps were created with the Emory Cardiac Toolbox (ECToolbox, Atlanta, GA) using the 17-segment model recommended by the American Heart Association. (25) Based on the amount of perfusion present in each segment, a score ranging from 0 to 4 was assigned automatically by the software as an indicator of cardiac perfusion function (0 = normal, 1 = equivocal, 2 = moderately reduced, 3 = severely reduced, 4 = absent). A. AMR images of the phantom and patient For both the phantom under simulated respiratory motion and the patient under free-breathing conditions, visual inspection of the acquired MR images revealed that the proposed MR protocol could generate AMR images without visible motion artifacts and with the averaged motion-blurring effect, which is crucial to the success of the proposed technique (Fig. 2). For the patient study, motion artifacts were not visible in the individual frames, even when the images were displayed at the signal intensity level of noise, indicating the effectiveness of the proposed MR protocol for free-breathing MR acquisition (Fig. 3). B. Class-specific mean CT numbers The mean CT numbers calculated for lung and tissue were -726 HU and 47 HU, respectively, for the patient and -727 ± 51 HU and 4 ± 18 HU, respectively, for the 54-patient reference group. The same segmentation parameters were used for all datasets. C. Quantification of PET AMR Representative slices of the sagittal, coronal, and axial views created from AMR, ACT AMR-PS, and ACT data are shown in Fig. 4. With PET ACT as the reference, the differences d1 and d2 of PET AMR-PS were -2.0% ± 5.1% and 4.3% ± 3.3%, respectively; for PET AMR-RG, the differences were -6.2% ± 5.0% and 6.3% ± 4.8%, respectively. The nMSA values of the PET AMR-PS, PET AMR-RG, and PET ACT slices are plotted in Fig. 5. The highest MSA value was that of slice 7 of PET AMR-PS, and this value was used to normalize all three datasets.
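Restated in code form, the voxel and slice metrics above amount to the following short NumPy sketch; the array names and the use of a boolean myocardium mask are assumptions for illustration.

import numpy as np

def voxel_differences(I, I_ref, mask):
    """Mean relative (d1) and mean absolute relative (d2) difference, in %."""
    rel = (I[mask] - I_ref[mask]) / I_ref[mask] * 100.0
    return rel.mean(), np.abs(rel).mean()

def mean_slice_activity(I, mask):
    """MSA per axial slice: mean uptake over the myocardium voxels of the slice.

    I and mask are 3D arrays indexed as [slice, row, col]; mask marks myocardium.
    """
    return np.array([I[z][mask[z]].mean() if mask[z].any() else np.nan
                     for z in range(I.shape[0])])

def normalized_msa(msa, msa_max):
    """nMSA: each slice's MSA as a percentage of the common maximum MSA."""
    return msa / msa_max * 100.0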
The absolute quantification difference in mean myocardial activity between PET AMR-PS and PET ACT at different slices was 2.0% ± 1.6%; the maximum difference was 5.0%. The absolute quantification difference between PET AMR-RG and PET ACT was 4.7% ± 2.5%, with a maximum difference of 8.8%. Reformatted PET images (PET ACT, PET AMR-PS, and PET AMR-RG) of the myocardium, along the short axis, horizontal long axis, and vertical long axis, are shown in Fig. 6. The PET images of the left ventricle in the attenuation-corrected PET images were reformatted into polar maps using a 17-segment model. In the original ACT-corrected PET image, the summed stress score was 0, indicating normal cardiac function. The scores in all segments were the same in both AMR-corrected PET datasets (Fig. 7).

[Figure 7 caption: The patient had normal cardiac function, with a summed stress score of 0 in the PET ACT image. Using MR-derived attenuation images for attenuation correction did not affect the stress scores.]

IV. DISCUSSION The different patient tables of the MR system and the PET/CT system caused a visible spatial mismatch between the AMR and ACT images in the dorsal area of the patient (Fig. 4). (The MR system's table differs in shape from the PET/CT system's curved table.) This table-induced mismatch affected PET AMR-PS and PET AMR-RG, and it would not be an issue in a combined PET/MR system. In the present study, we reduced the impact of this spatial mismatch on PET quantification by ensuring the alignment of the attenuation images and emission images in the myocardium region. This was achieved with manual registration, a typical approach to correcting misregistration between cardiac PET and CT images. (12) In a previous phantom-based study, Zhang et al. (26) found that reconstructed PET activity can be underestimated by 10%-20% if one does not correct for the attenuation contributed by the MR system's table. Our phantom experiment, using the PET/CT system with its curved table, produced similar results. Therefore, the imaging system's table must be accounted for to accurately quantify PET data. In the present study, we attempted to incorporate the PET/CT system's table into the AMR-derived attenuation images. Unfortunately, this proved to be difficult due to the mismatch of the patient's body contour resulting from the table difference. For the purpose of fair comparison, therefore, we removed the table from the ACT images before performing the attenuation correction. As a proof of concept, we used a three-class segmentation scheme to derive MR-based attenuation images. Despite its simplicity, this method achieved relatively accurate quantification in the reconstructed PET images. While creating the attenuation map from AMR images, we did not perform bone segmentation, which is difficult without the aid of a dedicated ultrashort echo time imaging sequence and, to date, has been mainly applied in PET/MR imaging of the brain. (27,28) In brain PET, ignoring bone has been suggested to cause quantification bias. (29) In whole-body PET/MR imaging, however, neglecting bone in segmented attenuation images has been suggested to cause large errors only in regions that are inside or near bones. (30-32) In one example demonstrated by Samarin et al., (32) classifying bone as soft tissue resulted in less than 6% difference for PET voxels in the heart region. Ouyang et al. (33) also concluded that three-class segmentation can be sufficient for PET quantification in the heart, as it yields less than 5% quantification difference after compensation. These studies indicate that bone segmentation may not be necessary for cardiac PET/MR.
Martinez-Moller et al. (14) proposed using a four-class segmentation scheme with the Dixon technique, in which fat is separated from nonfat soft tissue and assigned a different attenuation coefficient. Although evaluating different segmentation-based attenuation correction methods was beyond the scope of our study, it should be noted that the Dixon technique could be integrated into our proposed AMR protocol with a modification of the MR sequence, to separate fat and nonfat soft tissue while maintaining similar temporal resolution. Such an approach may improve PET quantification in patients with higher body fat composition. To investigate the impact of the assigned CT numbers on quantification, we created two sets of attenuation images from the MR data: ACT AMR-PS and ACT AMR-RG. As expected, ACT AMR-PS resulted in a smaller quantification difference, owing to the more accurately estimated attenuation coefficients for the patient. In clinical PET/MR applications, however, patient-specific CT images are usually unavailable, and general coefficients must be used for attenuation correction. In the present study, the mean lung CT number of the patient (-726 HU) was close to that of the reference group (-727 ± 51 HU); however, a Student's t-test revealed that the patient's mean tissue CT number was significantly higher than that of the reference group (47 HU vs. 4 ± 18 HU, p < 0.001). As a result, the quantification difference in the ACT AMR-RG-corrected PET data (6.3%) was higher than that in the ACT AMR-PS-corrected PET data (4.3%); however, the error was small. For patients whose mean attenuation coefficients or CT numbers deviate less from the population mean, a smaller quantification difference is expected. AMR-based attenuation correction did not affect the summed stress score, indicating that the quantification difference is not clinically significant in this one case. Further investigation is required to evaluate the clinical impact of the proposed method of MR-based attenuation correction. Several authors have proposed MR-based respiratory motion correction for thorax PET/MR, (34-37) and at least one phantom-based study tested a tagged MR imaging-based technique for cardiac motion correction. (38) Although such approaches aim to eliminate the impact of motion on the reconstructed PET image, they usually require nonstandard MR sequences that are not clinically available. In contrast, the approach we propose uses a scheme that has been proven effective in PET/CT for reducing the spatial mismatch between emission and attenuation data and the consequent artifacts in the cardiac perfusion PET image. The present study's findings suggest that a similar improvement can be achieved in cardiac PET/MR imaging without resorting to motion correction. In the present study, we tested the feasibility of using AMR images of the thorax to create attenuation maps for cardiac PET data. As a proof of concept, we designed a simple strategy to include the motion blurring effect by processing the images of the individual phases. This strategy does not fully capture the motion blurring effect and is a potential limitation of this study. However, the quantification errors were small, suggesting that this simple strategy is feasible. In future work, we will investigate methods that directly convert AMR images into motion-blurred attenuation images. V. CONCLUSIONS The present study's findings demonstrate the feasibility of using AMR images for attenuation correction of cardiac PET data.
Despite the fact that the different tables of the MR and PET/CT systems caused a geometrical mismatch of the AMR-based and ACT-based attenuation images, the PET data corrected with the MR images achieved accurate quantification and maintained the same summed stress score. Further study with more patients is warranted to determine the effectiveness of AMR-based attenuation correction in cardiac PET/MR imaging.
4,632.6
2015-07-01T00:00:00.000
[ "Medicine", "Physics" ]
PRESUPPOSITION AND ENTAILMENT USED IN GRETA THUNBERG'S SPEECH AT UN CLIMATE ACTION SUMMIT 2019 Climate change is one of the most discussed topics lately. Global emissions are reaching record levels and show no sign of peaking. UN Secretary-General Antonio Guterres invited all leaders to join the Climate Action Summit in New York on 23rd September 2019. The summit also featured the participation of business leaders, indigenous people, youth, and many others. The star of the show was Greta Thunberg, a Swedish teen activist who sailed from Sweden to New York for the event on a zero-emissions sailboat. This research aimed to reveal how presupposition and entailment were used in the speech and how they contributed to its context. The research used a descriptive qualitative method for analyzing her speech, involving document and material analysis to collect the data. The results showed that the most commonly used presupposition is the existential presupposition. Its function is to emphasize and to draw the listeners' attention and sympathy. The most frequently used entailment is one-way entailment. This type of entailment is commonly used to deliver ideas through the utterance by adding further details to the main idea. Existential presupposition and one-way entailment lead to the referential function of language, which aims to send information or the speaker's ideas to the audience. It can be concluded that the presuppositions of the speech must be entailed by the global context, which means that the global context or common-ground knowledge entails those presuppositions. In short, both presupposition and entailment thus become a strategy for keeping the audience focused on the context of the speech. INTRODUCTION Public speech is an activity usually performed by important figures such as leaders, politicians, motivators, activists, and many others. Public speech has many functions, which depend on the purpose the speaker is trying to achieve. For activists, speech has become one of the many ways to get people to listen to their message. Among the many activists around the world, one person stands out for being very young: Greta Thunberg. She is a 16-year-old Swedish environmental activist on climate change who became one of the most talked-about figures of the year. At the age of 15, she began spending her school days outside the Swedish Parliament, demanding stronger action against global warming by holding a sign saying "School Strike for the Climate". Her speech at the UN Climate Action Summit 2019 in New York brought her to fame. She addressed the world leaders to demand strong action against climate change. Thunberg made many statements, expressing how she felt about climate change, presenting facts about it, and more. Greta Thunberg's way of speaking is forceful, and she backed up her arguments with well-chosen scientific data points, which contrasted with the style of her peers. Her speech contains presuppositions and entailments. A presupposition is something presupposed to be true in a sentence that also contains other information. It also concerns how speakers organize what they want to say according to whom they are referring to, where and when they are talking, and under what circumstances they are talking. All types of presupposition convey more meaning than what is said.
The use of presupposition can be analyzed by using a theory that connects the production and comprehension of speech acts. Presupposition theory has been defined by many scholars and researchers, often in almost similar or identical terms. George Yule (1996:25) is one of the linguists who explained that presupposition is something the speaker assumes to be the case prior to making an utterance. He also divided presupposition into six types. The presuppositions that appear in Greta Thunberg's speech are of many different types. Meanwhile, entailment is a logical concept relating the meaning of one sentence to that of others. It is important to understand the relation between sentences: when a sentence is related to other sentences, the idea in the sentences becomes stronger. As stated by Griffiths (2006:25), entailment happens when the truth of one proposition depends on that of another, meaning that the truths of the two propositions correlate with each other. In order to find the connection between presupposition, entailment, and the context of the speech, the research was analyzed using the dynamic approach theory of Stalnaker (1970, 1973, 1974, 1978) and the language function theories of Roman Jakobson (1960) and Holmes (2001). Several studies in the past have analyzed speeches. One of them is Ariyanti & Nistiti (2019), Maintaining Confessional Discourse through Presupposition in Feminist Speech. In their research, they analyzed the types of utterance in a speech by Chimamanda Ngozi Adichie. The result of their research is that Chimamanda Ngozi Adichie used three confessional discourse functions, namely therapeutic, interrogatory, and didactic, through presupposition types. Another study was done by Ida Catur Wahyu (2016), with the title Revealing the Function of Reference in Presupposition of English Cigarette Taglines. In her research, Ida analyzed the taglines using presupposition and referring expression theory by Yule (1997). She found that existential presupposition is mostly used in advertisements to maintain the product's existence through the functions of conciseness and emphasis. Meanwhile, the use of referring expressions helps to limit the consumers' inference of the presupposed information. Three types of referring expression were found: proper nouns, noun phrases, and pronouns. This research attempts to analyze which types of presupposition and entailment are used in Greta Thunberg's speech and what presupposition and entailment contribute to the speech. The goal of this research is to define the relation between the presupposition and entailment types and to reveal their contribution to Greta Thunberg's speech. Presupposition The general definition of presupposition is the relation between sentences or propositions (with interpretations), belonging either to semantics or to pragmatics. As stated by Richardson (2007), presupposition refers to information triggered by certain linguistic constructions that is irrefutably credited as absolute truth by participants in an utterance in a specific context. Yule (1996:25) explained that presupposition is something the speaker assumes to be the case prior to making an utterance. In this case, the speaker has presuppositions in the form of utterances, not sentences.
According to Yule (1996:27), presupposition can be found in linguistic forms that act as indicators of potential presuppositions, which can only become actual presuppositions in contexts with speakers. Yule divided presupposition into six types. Existential presupposition is the assumption that commits the speaker to the existence of the entities named; it is present in possessive constructions such as 'my cat', which lead to a particularly strong presupposition of existence. Factive presupposition is the assumption that something is true, identified by the presence of verbs such as know, realize, and regret and expressions such as be aware and be glad. The use of these verbs triggers the presupposition that what follows is a fact. Lexical presupposition is the assumption that, in using one form, the speaker can act as if another meaning will be understood. Structural presupposition is the assumption arising from a certain sentence structure in which the information presented is already taken as true (Yule, 1996). The relevant part of the sentence structure contains words and phrases. The speaker can use such structures to treat information as presupposed and thus accepted as true by the listener. It can easily be found in the use of 'WH-question' constructions in English. Non-factive presupposition is an assumption that is assumed to be untrue. It can be identified by words like imagine, pretend, and dream. The use of those words triggers the presupposition that what follows is fiction. Counterfactual presupposition is the assumption that what is presupposed is not only untrue but also the opposite of what is true, in other words contrary to fact. It is identified by if-clauses. Entailment As stated by Griffiths (2006:25), an entailment can be described as a proposition that is definitely true given another proposition. Yule (1998:129) also stated that entailment is something that follows from what is mentioned before. In addition, Rambaud (2012:70) stated that entailments are related to knowledge of a particular language, not to knowledge of truth and falsity about the world in the ordinary sense. It was concluded that entailment is a sentential meaning relation (Fromkin, Rodman and Hyams, 2003:195). Entailment can be divided into several types. According to Griffiths (2006), there are two types of entailment: one-way entailment and two-way entailment. According to Brinton (2000:13), one-way entailment is different from paraphrase: in one-way entailment, a sentence does not paraphrase the other sentence; one of them is similar to a conclusion drawn from the other. It is an entailment that works only in one direction. Kreindler (1998:86) provides an illustration of this entailment: when the two propositions are labeled 'p' and 'q', if 'p' is true, 'q' must be true; but if 'q' is true, it does not necessarily follow that 'p' is also true, since it can be false. For example, if the sentence My shirt is navy is true, then the sentence My shirt is blue is true. But if the sentence My shirt is blue is true, the sentence My shirt is navy is not always true. As stated by Griffiths (2006:27), in contrast with one-way entailment, in two-way entailment the meanings correlate with each other, and sentences that stand in two-way entailment paraphrase each other. Fromkin, Rodman, and Hyams (2003:197) stated that two-way entailment, or paraphrase, is sometimes expressed in terms of active-passive pairs.
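In standard semantic notation (an illustration added here, not the cited authors' own formulation), the two types can be stated compactly:

One-way entailment: p ⊨ q and q ⊭ p (e.g., p = My shirt is navy, q = My shirt is blue).
Two-way entailment (paraphrase): p ⊨ q and q ⊨ p, i.e., p ≡ q.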
For example, the sentences She did not invite me to the party and I was not invited to the party stand in a relation of two-way entailment, or paraphrase. Dynamic Approaches Based on the theory of Stalnaker (1970, 1973, 1974, 1978), presuppositions are generally seen as imposing requirements on the possible context of utterance. In his theory, this is the fundamental pragmatic idea. It rests on a view of contexts of utterance in terms of the common ground: the set of worlds compatible with what is mutually supposed for the purposes of communication. Assertion is used to add information to the common ground. Presuppositions, on the other hand, fit with what is already entailed by the common ground. This shows that presuppositions are an aspect of meaning that is taken for granted by the participants. An utterance that comes with a presupposition requires that the common ground entail the presupposition in order to be appropriate. Language Function According to Roman Jakobson (1960), there are six functions of language (or communicative functions), each associated with a factor of communication. They are the referential, the emotive/expressive, the conative/directive, the poetic, the phatic, and the metalingual function. Based on Holmes (2001), the referential function serves to convey information; it can be seen in different forms of speech such as interrogative or declarative speech. This function is oriented towards the context of communication. The referential function aims to send information or to tell others about the speaker's idea. The poetic function, according to Jakobson (1960), is the function oriented towards the message as such, focusing on the message for its own sake; it involves the reversal of paradigmatic and syntagmatic categories and is aimed at foregrounding textual features, so that a particular form is the essence of the message. Based on Jakobson (1960), the emotive function is focused on the addresser and aims at a direct expression of the speaker's attitude toward what is being spoken about; it is oriented to the speaker. Its purpose is to communicate the emotions or feelings of the speaker. The conative function is oriented toward the addressee. It occurs earliest in a child's language acquisition. This function aims at influencing the behaviour of others through words: the speaker aims to get someone to do something related to the speaker's utterance. Based on Lanigan (2010), the phatic function is focused on physical and psychological engagement. This function also distinguishes the first- and second-person discourse functions. According to Jakobson (2007), the phatic function serves as an opening of the channel, checking whether it is working, as a representation of the social relationship. This function is oriented towards the contact between the speaker and the receiver. According to Jakobson (1960), the metalingual function arises whenever the speaker or the audience needs to check whether they use the same code; speech is then focused on the code, performing a metalingual function. This function concerns the natural interaction of focusing upon the code, whether to clarify it or to renegotiate it. METHOD The writer used a qualitative approach after considering the nature of the data and the objectives of this research. This method employs deep analysis via detailed description rather than numbers in analyzing data, as in the quantitative method.
Findings The findings cover the types of presupposition used in the speech, with further description of each type, the two types of entailment used in the speech, and the language functions found in the speech. Presupposition After analyzing the utterances in Thunberg's speech, this research found five types of presupposition and two types of entailment used by the speaker. The presuppositions are existential, counterfactual, factive, non-factive, and lexical presupposition; the researcher did not find any structural presupposition. The entailments are one-way entailment and two-way entailment. The speaker preferred to apply existential presupposition and one-way entailment in her speech: existential presupposition is used to emphasize and to draw attention and sympathy from the audience, while one-way entailment is the entailment most commonly used in delivering ideas. a. Existential Presupposition Utterance: "You have stolen my dreams and my childhood with your empty words and yet I'm one of the lucky ones". Based on Yule (1996), existential presupposition is present in possessive constructions. The words my and your are categorized as possessive constructions because they are possessive adjectives. The word "my" refers to the speaker herself, and the word "your" refers to the audience, the world leaders attending the summit. Utterance: "The popular idea of cutting our emissions in half in 10 years only gives us a 50 percent chance of staying below 1.5 degrees and the risk of setting off irreversible chain reactions beyond human control". This sentence is triggered by a possessive pronoun, as can be seen in the word "our", which refers to the speaker and the audience.
Also, the speaker uses the definite noun phrase "the popular idea", assuming that the popular idea exists, which makes this sentence an instance of existential presupposition. b. Counterfactual Presupposition Utterance: "This is all wrong. I shouldn't be up here. I should be back in school on the other side of the ocean." Based on Yule's theory, counterfactual presupposition means that what is presupposed is not only untrue but the opposite of what is true (contrary to facts). The sentences "I shouldn't be up here. I should be back in school" are contrary to the fact that she is attending the summit. Utterance: "You say you hear us and that you understand the urgency, but no matter how sad and angry I am, I do not want to believe that. Because if you really understood the situation and still kept on failing to act then you would be evil and that I refuse to believe". According to Yule, counterfactual presupposition is identified by if-clauses. This utterance presupposes that even though the world leaders understand the situation, they keep failing to solve the problem. c. Factive Presupposition Utterance: "To have a 67 percent chance of staying below 1.5-degree global temperature rise - the best odds given by the IPCC (Intergovernmental Panel on Climate Change) - the world had 420 gigatons of CO2 left to emit back on January 1st 2018. Today, that figure is already down to less than 350 gigatons". The sentence above is categorized as factive presupposition because it is triggered by the word "odds"; the use of this word indicates that what follows is a fact. Utterance: "There will not be any solutions or plans presented in line with these figures here today, because these numbers are too uncomfortable and you are still not mature enough to tell it like it is." Factive presupposition uses verbs or words that refer to reality or facts (something true); in this utterance, the use of the word "will" indicates that what follows is treated as fact. d. Non-factive Presupposition Utterance: "How dare you pretend that this can be solved with just business as usual and some technical solutions? With today's emissions levels, that remaining CO2 budget will be entirely gone within less than eight and a half years." According to Yule's theory, the presence of the verb "pretend" indicates that what follows is assumed to be untrue. This utterance implies that someone is falsely claiming that the problem has already been solved. e. Lexical Presupposition Utterance: "We are in the beginning of a mass extinction and all you can talk about is money and fairytales of eternal economic growth. How dare you!" This sentence is categorized as lexical presupposition, triggered by the word "beginning": when someone utters this sentence, it presupposes that the event has not happened before. Utterance: "You are failing us, but the young people are starting to understand your betrayal." In the datum above, based on Yule's theory, the word "starting" is a lexical item that triggers lexical presupposition. Entailment a. One-way Entailment Utterance: "The popular idea of cutting our emissions in half in 10 years only gives us a 50% chance of staying below 1.5 degrees [Celsius] and the risk of setting off irreversible chain reactions beyond human control." In the utterance above, Greta first states the popular idea of cutting emissions in half in 10 years.
In order to express her disapproval, she strengthens her statement by adding two further points: that this popular idea only gives a 50% chance of staying below 1.5 degrees [Celsius], and that the risk of chain reactions is beyond human control. One of the distinctive characteristics of one-way entailment is that the speaker gives more details to make the audience trust the argument. By providing the two sentences that explain why she does not agree with merely cutting emissions in half, she makes the audience believe her. In the utterance about the 67 percent chance quoted earlier, Thunberg argues that there is a 67% chance of staying below a 1.5-degree global temperature rise. She then gives more detailed facts from the Intergovernmental Panel on Climate Change: that 420 gigatons of CO2 were left to emit. She also strengthens her argument by giving a specific date, January 1st, 2018. The way she delivers this one-way entailment is by setting out facts about the idea she has mentioned: when she utters her statement, she explains it further by giving additional facts, because she wants the audience or listener to believe her. This strategy of giving facts or examples is characteristic of Greta Thunberg. b. Two-way Entailment Utterance: "You say you hear us and that you understand the urgency. But no matter how sad and angry I am, I do not want to believe that. Because if you really understood the situation and still kept on failing to act, then you would be evil. And that I refuse to believe" In the utterance above, the words used in the two sentences have the same meaning, but a word in the first sentence is replaced by another word: the phrase "I do not want to believe that" in the first sentence is replaced by "I refuse to believe" in the second sentence. The meanings of the two sentences are the same; thus, the first sentence is a paraphrase of the second. Utterance: "People are suffering. People are dying. Entire ecosystems are collapsing." In the utterance above, the word "suffering" in the first sentence is replaced by "dying" in the second sentence. The main ideas of the two sentences are similar: when the speaker says that people are suffering, she also means, in other words, that people are dying. In conclusion, two-way entailment can be seen in two or more sentences that have the same meaning; they express one idea uttered repeatedly with different expressions. The speaker's aim in using two-way entailment is to emphasize the idea of the sentence to the listener or audience. Language Function Utterance: "The popular idea of cutting our emissions in half in 10 years only gives us a 50 percent chance of staying below 1.5 degrees and the risk of setting off irreversible chain reactions beyond human control." The presupposition used in the datum above is existential presupposition: from the utterance, it can be inferred that there is a popular idea for solving the global warming problem. The entailment used is one-way entailment, in which the speaker adds more details about her argument. Based on Jakobson's theory of language functions, Greta's utterance can be categorized under the referential function, which relates to the factor of context and describes a situation, object, or mental state.
In this utterance, Greta persuades the audience that the popular idea of cutting emissions does not have the effect that world leaders proclaim, by stating the facts and mentioning the risk. Utterance: "We will not let you get away with this. Right here, right now is where we draw the line. The world is waking up and change is coming, whether you like it or not." The presupposition used in this datum is existential presupposition, and the entailment used is one-way entailment: from the utterance, it can be inferred that the world exists and that it is changing. Based on Jakobson, the emotive function focuses on the speaker and aims at a direct expression of the speaker's attitude toward what she is speaking about. At the end of her speech, Greta expresses her feelings in this utterance; she wants the audience to know that the world is changing. Presupposition From the findings, the dominant type of presupposition used in Greta Thunberg's speech is existential presupposition (56.25%). This happens because the speaker uses existential presupposition to emphasize her points and to draw attention and sympathy, both from her generation and from the audience. Meanwhile, the least frequent presupposition in the speech is non-factive presupposition, which appears only once: non-factive presupposition conveys the falsity of a case, while Greta Thunberg's speech mostly contains facts, making non-factive presupposition less appropriate to use. Each presupposition has a specific function in the speech. The first is existential presupposition, triggered by the use of definite noun phrases. Based on the data, the speaker mentioned seven definite noun phrases, such as young people, people, the science, the politics, the popular idea, the aspects of equity, the consequences and the eyes; the use of these definite noun phrases shows the listener that these referents exist. Besides definite noun phrases, existential presupposition is also triggered by possessive constructions in this speech, such as my, which refers to the speaker herself; our, which refers to the speaker and the audience; and your, which refers to the world leaders. The speaker mostly uses this type of presupposition to deliver her intentions and to convince the audience. The second is counterfactual presupposition, used to show the truth implicitly by uttering the contrary condition. Based on Yule's (1996) theory, counterfactual presupposition is triggered by if-clauses. In the data, this presupposition appears twice in the speech. The first sentence, "This is all wrong. I shouldn't be up here. I should be back in school on the other side of the ocean", is contrary to the fact that she is attending the summit. The second sentence, "If you really understood the situation and still kept on failing to act then you would be evil and that I refuse to believe", presupposes the contrary, namely that the world leaders do not really understand the situation. The third is factive presupposition, which aims at declaring facts.
In this speech, the speaker mostly applies factive presupposition to tell the audience facts, and when applying it she uses data from reliable sources. This type of presupposition appears twice in the speech. First, "To have a 67 percent chance of staying below 1.5-degree global temperature rise - the best odds given by the IPCC (Intergovernmental Panel on Climate Change) - the world had 420 gigatons of CO2 left to emit back on January 1st 2018. Today, that figure is already down to less than 350 gigatons" is triggered by the word odds, which indicates that what follows is fact; in this sentence the speaker cites the reliable source IPCC (Intergovernmental Panel on Climate Change) after the word odds. That is, according to the IPCC, the world had 420 gigatons of CO2 left to emit back on January 1st 2018. Second, "There will not be any solutions or plans presented in line with these figures here today, because these numbers are too uncomfortable and you are still not mature enough to tell it like it is" is triggered by the use of the word "will", which makes this sentence categorized as factive presupposition, meaning that there are no solutions or plans presented today. The fourth is non-factive presupposition, which is used to show the falsity of something; based on the data, the speaker applies this type of presupposition to criticize something or to express her disagreement. In the speech, this presupposition appears only once, in the sentence "How dare you pretend that this can be solved with just business as usual and some technical solutions? With today's emissions levels, that remaining CO2 budget will be entirely gone within less than eight and a half years", triggered by the word "pretend", which signals that what follows is not true. The sentence means that the world leaders are pretending, or lying, that the problem can be solved with business as usual and technical solutions, which is impossible at today's emissions levels. The fifth is lexical presupposition; based on the data, the speaker uses this type of presupposition to deliver reminders to the audience. It appears twice in the speech. The first sentence, "We are in the beginning of a mass extinction and all you can talk about is money and fairytales of eternal economic growth. How dare you!", is triggered by the word "beginning" and means that we are now at the beginning of a mass extinction, something that has never happened before, while the world leaders only care about money and economic growth. The second sentence, "You are failing us, but the young people are starting to understand your betrayal", is triggered by the word "starting" and means that young people are starting to realize the betrayal committed by the world leaders. In conclusion, of the six types of presupposition, five are used in Greta Thunberg's speech: existential, counterfactual, factive, non-factive, and lexical presupposition. Entailment The findings show that both types of entailment were employed by Greta Thunberg in her speech at the UN Climate Action Summit 2019: one-way entailment and two-way entailment. One-way entailment is the most used in Greta Thunberg's speech, appearing nine times throughout it; it is the entailment most commonly used in delivering ideas.
They usually give details of the main idea; the details can take the form of an example, an explanation, or a description. The use of one-way entailment helps the hearer understand the idea. An example of one-way entailment is the following datum. "To have a 67 percent chance of staying below 1.5-degree global temperature rise. The best odds given by the IPCC (Intergovernmental Panel on Climate Change). The world had 420 gigatons of CO2 left to emit back on January 1st 2018. Today, that figure is already down to less than 350 gigatons." In the utterance above, Greta states her argument about the best chance of staying below a 1.5-degree rise in global temperature. Then she gives details from a reliable source on how the 67 percent chance is obtained: according to the IPCC (Intergovernmental Panel on Climate Change), the world had 420 gigatons of CO2 left to emit on January 1st 2018, and that figure is already down to less than 350 gigatons today. Therefore, the sentence is categorized as one-way entailment. Another way to deliver one-way entailment, apart from giving details, is to give an explanation of the idea the speaker has mentioned. When a speaker utters a statement and then explains it further, it means that they want the audience or hearer to believe them. The strategy of giving examples is also employed by Greta. "Fifty percent may be acceptable to you. But those numbers do not include tipping points, most feedback loops, additional warming hidden by toxic air pollution or the aspects of equity and climate justice" In the utterance above, Greta first states that fifty percent may be acceptable to the world leaders. Then she strengthens her statement by explaining why that fifty percent is not acceptable: the figure does not include tipping points, most feedback loops, additional warming hidden by toxic air pollution, or the aspects of equity and climate justice. By providing further explanation, Greta makes the audience or hearer believe her statement. Another example of one-way entailment is presented below. "So a 50 percent risk is simply not acceptable to us. We who have to live with the consequences." In the datum above, Greta argues that a fifty percent risk is not acceptable to us; in the second sentence she then explains who "us" refers to: those who have to live with all the consequences. Secondly, two-way entailment appears twice in Greta Thunberg's speech. Two-way entailment can be identified when paraphrase occurs; the easiest way to paraphrase is rewording, that is, replacing a word with another word that has the same meaning. Two-way entailment is intended to emphasize the important point so that listeners understand and remember it. This phenomenon also occurs in Greta's utterances. The three sentences "People are suffering. People are dying. Entire ecosystems are collapsing." stand in a relation of two-way entailment: the words used in them are almost the same, with the word 'suffering' in the first sentence replaced by 'dying' in the second, and 'dying' replaced by 'collapsing' in the third, so the three sentences have similar meanings. When a speaker delivers an idea in this way, it means they want the audience or hearer to really grasp what they want to convey. "You say you hear us and that you understand the urgency, but no matter how sad and angry I am, I do not want to believe that. Because if you really understood the situation and still kept on failing to act, then you would be evil.
And that I refuse to believe." In the utterance above, the two sentences stand in a relation of paraphrase: the main idea of the two sentences is similar. When the speaker says that she does not want to believe that they understand the urgency, she is, in other words, also saying that if they really understood the situation she would refuse to believe it. In conclusion, in expressing one-way entailment a speaker has many methods to choose from, such as giving an example, a description, or a conclusion of the idea the speaker wants to deliver; one-way entailment is expressed by the speaker to strengthen their idea. Meanwhile, two-way entailment can be seen in two or more sentences that have similar or exactly the same meaning; the speaker's aim in using two-way entailment is to emphasize the idea of the sentence. The Contribution of Presupposition and Entailment to the Topic of the Speech This research provides knowledge about how presupposition and entailment contribute to the topic of the speech by analysing the types of presupposition and entailment found in it. Datum 16, "There will not be any solutions or plans presented in line with these figures here today, because these numbers are too uncomfortable and you are still not mature enough to tell it like it is", hangs on the presupposition that the solutions and plans are not presented today, and on what is entailed by this. The first line of the datum presents the assertion "There will not be any solutions and plans today"; "will not be" means that nothing happens. In other words, the speaker's explicit words constitute an assertion about the truth value of the proposition that the solutions and plans are not presented today; her words assert not that it is true or false, but that, according to the speaker, there are no solutions or plans presented today. So the propositions are: (a) The solutions and plans are not presented today. (b) The solutions and plans are not available. If proposition (b) is not true, then proposition (a) is also not true; that is, if the solutions and plans are presented today, it makes no sense to say that The solutions and plans are not presented today is true or false. So the speaker's assumption is that the solutions and plans are not available, because they are not presented today. The type of presupposition in this utterance is existential presupposition, because the noun used in Greta's utterance is the definite noun phrase "the solutions and plans", and according to Yule (1996) existential presuppositions generally arise with any definite noun phrase. Based on the theory of Stalnaker (1970, 1973, 1974, 1978), presuppositions are generally seen as imposing requirements on the possible context of utterance, resting on an analysis of contexts of utterance in terms of the common ground: the set of worlds compatible with what is mutually supposed for the purposes of communication. Assertion is used to add information to the common ground. The common ground for this utterance is "the solutions and plans". And what exactly is entailed by "the solutions and plans"? The specific details of these solutions and plans for solving the global warming problem are given by Greta Thunberg: one solution she mentions is that she urges everyone to use their right to vote, to pick candidates who will treat climate change as a main problem, and to continually press those in power to adapt their policies and adopt new legislation to solve the climate change problem and save the earth.
Thus, proposition (a), The solutions and plans are presented today, entails proposition (b), The solutions and plans involve adapting policies and adopting new legislation that treats climate change as a major problem, because the truth of (a) ensures the truth of (b) and the falsity of (b) ensures the falsity of (a). In other words, if it is true that the solutions and plans are presented today, it is necessarily true that the policymakers treat climate change as a major problem in their policies and legislation; and if it is not true that the policymakers treat climate change as a major problem in their policies and legislation, then it is necessarily not true that the solutions and plans are presented. Greta's presuppositions must be entailed by the global context or common-ground knowledge that precedes the conversation; that is, the global context or common-ground knowledge entails the presupposition. In the example above, the common ground for "the solutions and plans" is that she urges everyone to use their right to vote, to pick candidates who will treat climate change as a main problem, and to continually press those in power to adapt their policies and adopt new legislation to solve the climate change problem. The utterance is categorized under the referential function because it contains important information for the audience to know: Greta shows that she wants her audience to know about these solutions and plans. In this utterance, Greta Thunberg, as the speaker, wants to deliver her thoughts about the solution to global warming to the audience. The utterances categorized as one-way entailment are mostly also categorized under the referential function, because one of the characteristics of one-way entailment is that people commonly use it to deliver their ideas by giving more details of the main idea, in the form of explanations, examples, or descriptions, while the referential function is the function that carries an informational message about the speaker's thoughts to be delivered to the audience. This means that Greta Thunberg focuses on her audience through the message delivered in this speech: by adding further details to her arguments, she keeps the audience focused on the theme of the speech, which is global warming. Through this speech, she wants to persuade and convince the audience to believe her arguments about global warming. Conclusion Based on the data analysis and the discussion in the previous chapter, several points can be concluded. First, several presuppositions can be found in Greta Thunberg's speech at the UN Climate Action Summit 2019. Of the six types of presupposition in Yule (1996:26), five were found in Greta Thunberg's speech: existential, counterfactual, factive, non-factive, and lexical presupposition. She mostly used existential presupposition, which appears nine times (56.25%); these existential presuppositions function to emphasize and to draw attention and sympathy from the listeners. Second, she also used counterfactual presupposition to convey the idea of an ideal world and to deliver sarcasm toward the attendees; it appears twice (12.5%) throughout the speech. Third, she used factive presupposition to state facts that strengthen her argument.
The factive presupposition appears twice (12.5%) in the speech. Fourth, she used non-factive presupposition to emphasize her point; it appears once (6.25%) in the whole speech. Finally, she used lexical presupposition to emphasize and to confront the attendees; it appears twice (12.5%) in her speech (the frequency arithmetic is reconstructed below). These results support the ideas conveyed by Lisetyo (2020) that the meanings of these presuppositions follow the theory: the meaning of existential presupposition is to show that something exists; the meaning of lexical presupposition is based on the contextual meaning of the word used in the utterance; the meaning of structural presupposition is based on the use of WH-question words; the meaning of factive presupposition is based on words and verbs that denote facts; and the meaning of non-factive presupposition is the contrary of factive. Thus, these presuppositions contribute by making her speech more attractive, mesmerizing, interesting, persuasive, and effective for both the attendees and the listeners. To reach a deeper understanding of the speech, the research also analyzed it with entailment theory. Based on Yule (1998:129), entailment is something that follows from what is mentioned before. Entailment can be divided into several types, and different scholars propose their own typologies; the one used in this research is Griffiths (2006), who divides entailment into two types: one-way entailment and two-way entailment. The most common entailment used in this speech is one-way entailment, which appears 11 times (87.5%), covering almost the whole speech. This type of entailment is commonly used to deliver ideas through utterances: the speaker usually adds further details of the main idea, in the form of a description, an example, or an explanation, to help the listener understand the idea. Lastly, two-way entailment appears twice (12.5%) in the speech. Two-way entailment can be seen in two or more sentences that have the same meaning: they share the same idea, but the speaker uses a different expression to emphasize it. To find the connection between the presuppositions and entailments used in the speech, the research also employed the dynamic approach. Based on the theory of Stalnaker (1970, 1973, 1974, 1978), the presuppositions of the speech must be entailed by the global context or common-ground knowledge that precedes the conversation; that is, the global context or common-ground knowledge entails the presupposition. To find the contribution of presupposition and entailment to the context of the speech, the research also used the language function theory of Roman Jakobson (1960). Of the six types of language function, three were found in this speech: the referential function, the emotive function, and the conative function. Because the speaker mostly used one-way entailment, whose characteristic is the addition of further details to the main idea, the referential function dominates, as can be seen from its prevalence in this speech. The referential function is a function for conveying information; it can be seen in different forms of speech such as interrogative or declarative speech, and it aims to send information, the speaker's ideas, to the audience.
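Assuming the reported counts (nine existential, two counterfactual, two factive, one non-factive, two lexical), the presupposition percentages follow from a total of sixteen tokens:

```latex
9 + 2 + 2 + 1 + 2 = 16, \qquad
\tfrac{9}{16} = 56.25\%, \quad
\tfrac{2}{16} = 12.5\%, \quad
\tfrac{1}{16} = 6.25\%
```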
Suggestion In this section, the researcher suggests that future researchers who want to conduct this kind of research extend it by broadening the research focus: not only exploring the types of presupposition and entailment, but also using more varied pragmatic approaches, such as the language function and dynamic approaches used in this research. Research on the dynamics between presupposition, entailment, and language function is still lacking, and it is important to acknowledge the dynamic interplay of the three. It is therefore a good opportunity for other researchers to carry out further work to widen research in pragmatics. The research on presupposition, entailment, and language function is hoped to contribute to the writing, as well as a better understanding, of speeches.
Casein kinase TbCK1.2 regulates division of kinetoplast DNA, and movement of basal bodies in the African trypanosome The single mitochondrial nucleoid (kinetoplast) of Trypanosoma brucei is found proximal to a basal body (mature (mBB)/probasal body (pBB) pair). Kinetoplast inheritance requires synthesis of, and scission of, kinetoplast DNA (kDNA), generating two kinetoplasts that segregate with basal bodies into daughter cells. Molecular details of kinetoplast scission and the extent to which basal body separation influences the process are unavailable. To address this topic, we followed basal body movements in bloodstream trypanosomes following depletion of the protein kinase TbCK1.2, which promotes kinetoplast division. In control cells we found that pBBs are positioned 0.4 µm from mBBs in G1, and that they mature after separating from mBBs by at least 0.8 µm; mBB separation reaches ~2.2 µm. These data indicate that current models of basal body biogenesis, in which pBBs mature in close proximity to mBBs, may need to be revisited. Knockdown of TbCK1.2 produced trypanosomes containing one kinetoplast and two nuclei (1K2N), increased the percentage of cells with uncleaved kDNA 400%, decreased mBB spacing by 15%, and inhibited cytokinesis 300%. We conclude that (a) separation of mBBs beyond a threshold of 1.8 µm correlates with division of kDNA, and (b) TbCK1.2 regulates kDNA scission. We propose a Kinetoplast Division Factor hypothesis that integrates these data into a pathway for biogenesis of two daughter mitochondrial nucleoids. Introduction The single-cell eukaryote Trypanosoma brucei causes human African trypanosomiasis (HAT) in some regions of sub-Saharan Africa. The trypanosome mitochondrial genome, comprised of catenated double-stranded DNAs, is organized as a single nucleoid termed the "kinetoplast" [1-3]. Loss of kinetoplast DNA (kDNA) disrupts mitochondrial membrane potential in stumpy form bloodstream trypanosomes [4,5] and interferes with development of the parasite in the tsetse fly. Knockdown of TbCK1.2 inhibits kinetoplast division [24,25], while mitosis progresses normally. Consequently, a population of "mutant" trypanosomes with a single kinetoplast and two nuclei (1K2N) arises [24]. We hypothesized that loss of TbCK1.2 disrupted kinetoplast division by preventing one or more of six processes: (a) synthesis of kDNA, (b) scission of kinetoplasts, (c) separation of cleaved kDNAs, (d) basal body duplication, (e) movement of basal bodies, or (f) flagellum nucleation. We find that kDNA synthesis occurs in 1K2N trypanosomes. Compared to control cells, 1K2N cells separate basal bodies to a normal overall extent, although the distribution of inter-basal body distances contracted. There was a 4-fold increase in the fraction of uncleaved kDNA in the population, indicating that TbCK1.2 facilitates kDNA scission. These data document failure of kDNA scission even after separation of basal bodies, providing genetic evidence that separation of basal bodies is not sufficient to divide kinetoplasts. TbCK1.2 is a founding member of a group of proteins that are required for division of kinetoplasts (i.e., kinetoplast division factors) (discussed in [11]). We propose a "kinetoplast division factor" (KDF) hypothesis to (i) explain the uncoupling of basal body separation from division of kDNA, and (ii) integrate all available new data into a working hypothesis for division and inheritance of the mitochondrial genome in a trypanosome. TbCK1.2 regulates division of kDNA in T. brucei
The mitochondrial genome of the African trypanosome is organized as one nucleoid (kinetoplast) [2]. To ensure inheritance of this genome during cell division, kinetoplast DNA (kDNA) synthesis, division (i.e., scission and initial separation), and inheritance are coordinated with the cell cycle (S1 Fig). Division of kDNA is a poorly understood process, although many genes involved in post-division segregation have been identified (reviewed in [3,11]). Division of kDNA is hampered after knockdown of the casein kinase TbCK1.2 [24] (Fig 1A). To pinpoint the step at which TbCK1.2 contributes to division of the kinetoplast, we produced a tetracycline-inducible TbCK1.2 RNAi line [25] in which one allele of the protein was tagged endogenously with a V5 epitope at the N-terminus (V5-TbCK1 RNAi line). Knockdown of TbCK1.2 reduced the level of V5-TbCK1. During a normal division cycle, the kinetoplast (K) is divided before mitosis, producing cells with two kinetoplasts and one nucleus (2K1N trypanosomes). After knockdown of TbCK1.2 for 24 h, a new population of cells with one kinetoplast and two nuclei (1K2N) emerged (Fig 1A). Thus, TbCK1.2 is important for division of the kinetoplast but is not required for mitosis. The percentage of 1K1N cells was reduced after knockdown of TbCK1.2 (Fig 1A). Compared to the uninduced control, the difference in distribution of kinetoplasts and nuclei per trypanosome was statistically significant (p = 3.96 × 10⁻⁷; χ²) after knockdown of TbCK1.2. TbCK1.2 is important for cytokinesis Typically, about 10% of a bloodstream trypanosome population has two nuclei and two kinetoplasts (2K2N), the pre-cytokinesis stage in cell division. After knockdown of TbCK1.2, approximately 30% of cells have two nuclei (counting 1K2N and 2K2N trypanosomes) at 18 h (Fig 1D), and that fraction holds steady at 24 h post-RNAi (Fig 1A). We infer that knockdown of TbCK1.2 for 18 h and beyond leads to failure of cytokinesis (Fig 1A). kDNA is replicated after knockdown of TbCK1.2 We examined the hypothesis that inability to duplicate the kinetoplast was the result of a failure of kDNA synthesis, i.e., that there was not sufficient mitochondrial DNA to partition between two kinetoplasts. To this end, the kDNA content of 1K2N trypanosomes was compared to that of kinetoplasts in uninduced (i.e., for TbCK1.2 knockdown) control cells. Since kDNA synthesis normally occurs in 1K1N trypanosomes before division of the kinetoplast [23,26], 1K1N trypanosomes contain between one and two equivalents of kDNA. Division of replicated kDNA yields trypanosomes with two kinetoplasts (2K1N and 2K2N), in which each kinetoplast contains one equivalent of kDNA. We observed an increase in kDNA content in 1K1N and 1K2N trypanosomes after knockdown of TbCK1.2 for 24 h (Fig 1B). In control cells, the median DAPI fluorescence intensity of kDNA in 1K1N cells (1 × 10⁴ arbitrary units (A.U.)) is approximately twice that in 2K1N trypanosomes (6.5 × 10³ A.U.) (Fig 1B). These data are expected, since 2K1N cells are not synthesizing kDNA whereas a fraction of the 1K1N cells is in S-phase. For comparison, kDNA intensity above the 95th percentile of control 1K1N cells (see horizontal dotted line in Fig 1B) is considered "over-replicated". After a 12-h knockdown of TbCK1.2 there was no difference in the kinetoplast/nucleus profiles of control and experimental trypanosomes; neither group contained 1K2N cells (Fig 1C).
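A χ² comparison of K/N class distributions of this kind can be reproduced with a standard contingency-table test; a minimal sketch, in which the cell counts are placeholders rather than the study's data:

```python
# Sketch: chi-square test on kinetoplast/nucleus (K/N) class counts.
# Counts are illustrative placeholders, not the published data.
from scipy.stats import chi2_contingency

# Rows: -Tet (control) vs +Tet (TbCK1.2 RNAi); columns: 1K1N, 2K1N, 1K2N, 2K2N, other
counts = [
    [150, 30, 0, 20, 5],    # -Tet
    [110, 25, 35, 45, 10],  # +Tet
]

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.2e}")
```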
Fig 1 legend. Effects on kinetoplast duplication were assessed by enumeration of the number of kinetoplasts (K) and nuclei (N) per trypanosome in cells cultured in the absence or presence of tetracycline (1 µg/mL, 24 h) ("Other" indicates cells with >2 or <1 K/N). Error bars represent the standard deviation of four independent biological experiments (n = 110-268/experimental sample). A χ² test was used to determine whether the difference in distribution of kinetoplasts and nuclei was statistically significant after knockdown of TbCK1.2 (p = 3.96 × 10⁻⁷). Inset: SR-SIM example image of a 1K2N trypanosome following 24 h of RNAi against TbCK1.2. Cell membranes were labeled with mCLING and DNA was detected with DAPI. (B) Effect of knockdown of TbCK1.2 on kinetoplast DNA (kDNA) content. ImageJ was used to measure the fluorescence intensity of individual DAPI-stained kDNA in trypanosomes with one or two kinetoplasts in control (-Tet) or one kinetoplast in TbCK1.2 RNAi (+Tet) cells. The scatter dot plot relates kDNA fluorescence intensities measured in different trypanosome cell types. The Mann-Whitney U test was used to compare the distribution of fluorescence intensity of DAPI-stained kDNA between -Tet 1K1N and +Tet 1K1N or 1K2N trypanosomes (p = 5.6 × 10⁻⁵ and p < 10⁻¹⁵, respectively). The 95th percentile of the -Tet 1K1N kDNA content is indicated by the horizontal dotted line. Descriptive statistics corresponding to each sample are aligned beneath the graphs. The effect of TbCK1.2 RNAi on kinetoplast duplication was assessed by enumeration of the number of kinetoplasts (K) and nuclei (N) per trypanosome in cells cultured in the absence or presence of tetracycline (1 µg/mL) for 12 h (C) or 18 h (D) ("Other" indicates cells with >2 or <1 K/N). Error bars represent the standard deviation of three independent biological experiments (n > 100/experiment). A χ² test was used to determine whether the difference in distribution of kinetoplasts and nuclei was statistically significant after knockdown of TbCK1.2 for 12 h (p = 0.647) or 18 h (p = 7.94 × 10⁻⁵). (E) ImageJ was used to measure the DAPI fluorescence intensity of kDNA in trypanosomes with one kinetoplast following 12 h of RNAi in control (-Tet) or TbCK1.2 RNAi (+Tet) cells. The violin plot shows the distribution of kDNA fluorescence intensities. A Mann-Whitney U test was used to compare the median fluorescence intensity of DAPI-stained kinetoplasts between -Tet 1K1N (n = 183) and +Tet 1K1N (n = 198) trypanosomes (p = 0.0023). (F) Cartoon explaining the likely origin of 1K2N trypanosomes from 1K1N cells. Kinetoplast DNA is synthesized in S-phase, forming a cell with an undivided kinetoplast and one nucleus (1K^U1N).

This observation presented an opportunity to determine whether over-replication of kDNA occurred in 1K1N cells or was restricted to 1K2N trypanosomes, given that kinetoplasts normally divide in 1K1N trypanosomes prior to mitosis [23,27]. We hypothesized that cells with over-replicated kDNA were present in the 1K1N population prior to the emergence of 1K2N cells. To test this idea, we analyzed the kDNA content of 1K1N cells at 12 h in control and knockdown cells. Kinetoplasts in 1K1N cells after 12-h RNAi contained more kDNA than those of control cells (Fig 1E); median fluorescence increased from 8,618 to 10,470 A.U., and the difference in distribution of kDNA content was statistically significant (p = 0.0023, Mann-Whitney U test). Furthermore, fourteen percent of the kinetoplasts in knockdown cells contained more kDNA than the 95th percentile of the control population (Fig 1B). Thus, over-replication of kDNA was detectable in 1K1N trypanosomes before nuclear division.
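The distribution comparison and the "over-replicated" threshold described above reduce to a nonparametric test plus a percentile cutoff; a sketch, assuming per-kinetoplast intensities exported from ImageJ (the values are placeholders):

```python
# Sketch: compare kDNA fluorescence distributions (control vs knockdown)
# and flag "over-replicated" kinetoplasts above the control 95th percentile.
import numpy as np
from scipy.stats import mannwhitneyu

# Arbitrary-unit intensities exported from ImageJ; values are placeholders.
ctrl_1k1n = np.array([8200.0, 8618.0, 9100.0, 7800.0, 8900.0])
rnai_1k1n = np.array([9800.0, 10470.0, 11200.0, 9500.0, 12800.0])

u_stat, p_value = mannwhitneyu(ctrl_1k1n, rnai_1k1n, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.4f}")

threshold = np.percentile(ctrl_1k1n, 95)   # control 95th percentile
frac_over = np.mean(rnai_1k1n > threshold)
print(f"over-replicated fraction after RNAi: {frac_over:.0%}")
```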
These data are consistent with a model in which 1K1N trypanosomes with over-replicated kDNA (detected at 12 h) convert, after nuclear division, into the 1K2N cells observed 18 h after knockdown of TbCK1.2 (Fig 1F). At the 18-h timepoint there was a decrease in the proportion of 1K1N cells, and 1K2N trypanosomes appeared in the population. The difference in the distribution of cell types was statistically significant (p = 7.94 × 10⁻⁵) (Fig 1D). We conclude that it takes more than 12 h of TbCK1.2 knockdown to produce 1K2N cells. Scission of kDNA is inhibited after knockdown of TbCK1.2 Since DNA synthesis occurred in kinetoplasts of 1K2N trypanosomes (Fig 1B), we reasoned that the kDNA either was uncleaved or had divided but failed to separate by more than 250 nm after scission, resulting in its detection as one kinetoplast by fluorescence microscopy because of the resolution limit of light [28]. To determine which of these possibilities was correct, we used transmission electron microscopy (TEM) to measure the lengths of 632 randomly selected kinetoplasts in multiple fields from 20 ultrathin sections in three independent TEM experiments, a representative of which is presented in Fig 2A. For control trypanosomes (i.e., the uninduced RNAi line for TbCK1.2), the median length of kinetoplasts was 405 nm (the 5th-to-95th percentile range was 250-630 nm) (Fig 2A). After knockdown of TbCK1.2, the median kinetoplast length increased to 467 nm (5th-to-95th percentile range, 240-930 nm) (Fig 2A) (p = 3.1 × 10⁻⁴, Mann-Whitney U test). Using the 95th percentile length of controls as the limit of normal length (630 nm), we found that 19% of kDNA exceeded this length after knockdown of TbCK1.2 (Fig 2B), representing a four-fold increase in the proportion of uncleaved kDNA (p = 5 × 10⁻³, unpaired Student's t test). The widths of kinetoplasts were unchanged after knockdown of TbCK1.2 (S3B Fig). These data indicate that knockdown of TbCK1.2 prevents scission of kDNA. An alternative explanation for these data is that knockdown of TbCK1.2 causes elongation of all kinetoplasts; this possibility is not supported by our data, because the entire distribution of kinetoplast lengths did not shift upward after knockdown of TbCK1.2 (Fig 2A).
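The TEM analysis is, computationally, a percentile cutoff followed by a comparison of per-experiment proportions; a sketch under that reading (the lengths and fractions below are illustrative, not the measured data):

```python
# Sketch: fraction of kinetoplasts exceeding the control 95th-percentile
# length, compared across independent experiments with an unpaired t-test.
import numpy as np
from scipy.stats import ttest_ind

ctrl_lengths = np.array([350.0, 405.0, 460.0, 510.0, 620.0])   # nm, placeholders
rnai_lengths = np.array([380.0, 467.0, 640.0, 720.0, 930.0])

limit = np.percentile(ctrl_lengths, 95)   # "normal length" cutoff (~630 nm)
print(f"cutoff = {limit:.0f} nm, "
      f"uncleaved fraction = {np.mean(rnai_lengths > limit):.0%}")

# Per-experiment uncleaved fractions (three biological replicates each).
ctrl_frac = [0.05, 0.04, 0.06]
rnai_frac = [0.18, 0.20, 0.19]
t_stat, p = ttest_ind(ctrl_frac, rnai_frac)
print(f"t = {t_stat:.2f}, p = {p:.3f}")
```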
Duplication of pro-basal bodies, and kinetics of their separation from mature basal bodies Basal body (centriole) separation has been proposed as a mechanism for segregation of kinetoplasts [1,23,29]. As employed in the literature, "segregation" of kDNA includes the process of dividing kDNA in two (S1 Fig) as well as the post-division movements of kinetoplasts [10,12,30]. For this reason, we investigated the possibility that failed kinetoplast division in 1K2N trypanosomes was caused by an inability to duplicate or separate basal bodies. Control 1K1N trypanosomes had one or two basal bodies (Fig 3A) [22,23]. After knockdown of TbCK1.2, most 1K1N and 1K2N trypanosomes had two (or more) basal bodies (Fig 3A and 3C). Hence, impaired kinetoplast division is not the result of failed duplication of basal bodies. Interestingly, 30% of 1K1N cells (15% of the total cell population) (Fig 3B) and 50% of 1K2N trypanosomes (10% of the total population of trypanosomes) (Fig 3C) had more than two basal bodies. Thus TbCK1.2 regulates the copy number of basal bodies, in addition to separation of the organelle (see the next section, and also the Discussion). Centrioles (basal bodies) are typically found as a mother and daughter pair, each of which has a mature centriole (basal body) and a procentriole (pro-basal body) [31,32]. During cell proliferation, procentrioles disengage, separate from mature centrioles, and mature by acquiring other proteins and structures (e.g., appendages) [33,34]. In T. brucei, separation of pro-basal bodies (pBBs) from mature basal bodies (mBBs) has not been studied. Addressing this topic experimentally in T. brucei calls for a system in which duplication and maturation of pro-basal bodies can be controlled experimentally. The small molecule AEE788 [35] can be used to block biogenesis of mature basal bodies in bloodstream T. brucei [23]; washing off AEE788 allows maturation and duplication of pBBs after a lag of 2 h [23]. Mature basal bodies (mBBs) were stained with the YL1/2 antibody (which recognizes the TbRP2 protein in the transition zone) [17]. Both mBBs and pBBs were detected with an anti-SAS6 antibody [15], since both possess the cartwheel protein SAS6 [36,37] (Fig 4A and 4B). In cells with 1 mBB and 1 pBB (Fig 4C), the median distance between mBBs and pBBs was 421 nm (5th-to-95th percentile range, 321-to-597 nm) in control (i.e., DMSO-treated) trypanosomes (Fig 4C, S2A Table), and 401 nm (5th-to-95th percentile range, 277-to-535 nm) in AEE788-treated cells (Fig 4C). The difference between the distributions of distances was statistically significant (p = 0.024, Mann-Whitney U test). We tracked changes in the separation of basal bodies from 1.5 to 3 h after AEE788 was rinsed off, because S-phase entry begins 1 h after washing off the drug and pro-basal body maturation is detected between 2 and 3 h thereafter [23]. The median separation between mBBs and pBBs decreased to 377 nm at 1.5 h, and then increased to 443 nm at the 2-h point (Fig 4C, S2A Table). The difference in the median distances at 1.5 h and 2 h was statistically significant (p = 9.1 × 10⁻¹², Mann-Whitney U test). Despite these statistically significant differences in medians, we are reluctant to draw major biological inferences from the data, because of the extensive overlap of distances in the 5th-to-95th percentile ranges (S2A Table). We next determined distances between pro-basal bodies in cells with two mature basal bodies (Fig 4D, S2B Table). At the end of the AEE788 incubation, pBBs were separated by 895 nm (median; 5th-to-95th percentile range, 388-to-1706 nm), whereas in DMSO-treated controls pBBs were separated by 905 nm (5th-to-95th percentile range, 428-to-1425 nm); this difference was not statistically significant (p = 0.93, Mann-Whitney U test). After 1.5 h of recovery from AEE788 treatment, the median distance between two pro-basal bodies rose to 1122 nm, an increase that was statistically significant (p = 0.021, Mann-Whitney U test) compared to the distance at 0 h. At two hours post-drug release, the median distance between pro-basal bodies was 1015 nm (5th-to-95th percentile range, 312-to-1524 nm); the difference between the 1.5-h and 2-h recovery time points was not statistically significant. At 3 h, the median pBB distance was 844 nm (5th-to-95th percentile range, 445-to-1169 nm). The decrease in distances between pro-basal bodies from 1.5 h to 3 h was statistically significant (p = 2.4 × 10⁻⁵; Mann-Whitney U test) (S2B Table).
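Inter-basal body spacing of the kind reported here comes down to a centroid-to-centroid distance after pixel-to-length calibration; a minimal sketch (the pixel size and the synthetic crops are hypothetical):

```python
# Sketch: distance between the centers of two TbSAS6 puncta, in nm.
import numpy as np

PIXEL_SIZE_NM = 64.5  # nm per pixel; hypothetical calibration value

def punctum_centroid(img: np.ndarray) -> np.ndarray:
    """Intensity-weighted centroid (row, col) of a cropped punctum image."""
    rows, cols = np.indices(img.shape)
    total = img.sum()
    return np.array([(rows * img).sum() / total, (cols * img).sum() / total])

def inter_bb_distance(punctum_a, punctum_b, offset_a=(0, 0), offset_b=(0, 0)):
    """Distance in nm between two puncta cropped from the same field."""
    ca = punctum_centroid(punctum_a) + np.asarray(offset_a)
    cb = punctum_centroid(punctum_b) + np.asarray(offset_b)
    return float(np.linalg.norm(ca - cb) * PIXEL_SIZE_NM)

# Example with synthetic 5x5 crops placed 6 px apart horizontally.
a = np.zeros((5, 5)); a[2, 2] = 1.0
b = np.zeros((5, 5)); b[2, 2] = 1.0
print(f"{inter_bb_distance(a, b, offset_b=(0, 6)):.0f} nm")  # ~387 nm
```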
Finally, we determined distances between pairs of mBBs associated with single (undivided) kDNAs (Fig 4D, numbers in red, S2C Table). Following AEE788 treatment, mBBs were separated by 1336 nm (5th-to-95th percentile range, 611-1922 nm). In trypanosomes incubated with DMSO, mBBs were separated by 1375 nm (median; 5th-to-95th percentile range, 783-to-1949 nm). The difference in mBB separation between these two populations was not statistically significant (Mann-Whitney U test). After 1.5 h of release from AEE788 treatment, the median separation increased to 1535 nm (5th-to-95th percentile range, 683-2306 nm), a nonsignificant change compared to data from trypanosomes at the end of exposure to AEE788 (S2C Table). Between 1.5 and 2 h of recovery from AEE788 exposure, the median separation of mBBs was 1511 nm (5th-to-95th percentile range, 794-2033 nm). At 3 h post-AEE788 withdrawal, the median separation between mBBs was 1304 nm (5th-to-95th percentile range, 864-1700 nm), which was statistically significant when compared to the distances measured at both 1.5 h of recovery (p = 0.013, Mann-Whitney U test) and 2 h of recovery (p = 0.0074, Mann-Whitney U test) (S2C Table). In summary, we documented separation of pBBs from mBBs (Fig 4C), as well as their distance-dependent maturation (Fig 4D). The surprising results are: (i) nascent pBBs are found > 400 nm from mBBs (S2A Table), and (ii) maturation normally occurs after pBBs separate > 895 nm from mBBs (S2B Table). These data indicate that current models of basal body biogenesis, in which pBBs mature in close proximity to mBBs, may need to be revisited. Distances between mBBs are reduced after knockdown of TbCK1.2 Separation of basal bodies has been proposed as a mechanism for segregation (which encompasses division as well as partitioning of kDNA into daughter cells [10,30]) of kinetoplasts.

Fig 4 legend. Trypanosomes were treated with AEE788 (5 µM) or DMSO (control) for 4 hours, released from drug pressure, and allowed to recover for 1.5, 2, or 3 hours. Antibodies against TbRP2 (YL1/2) and TbSAS6 were used to identify basal bodies by immunofluorescence microscopy. ImageJ was used to measure inter-basal body distances by tracking the separation between the centers of TbSAS6 puncta. (B) Representative images of cells from the AEE788-treated group, and of cells allowed to recover from the drug for 1.5 h. Separation between basal bodies is highlighted in yellow. Scale bar = 5 µm. (C) The plot shows distances between pro-basal bodies (TbSAS6-positive) and mature basal bodies (TbRP2/TbSAS6-positive) in cells with one mature basal body (mBB). Bars on the graph indicate the median and inter-quartile range. Numbers to the right indicate median inter-basal body distances for each group. Trypanosomes were drawn from a single experiment. Cells analyzed = 41 (DMSO), 131 (AEE788), 99 (1.5 h recovery), 106 (2 h recovery), 62 (3 h recovery). Inter-basal body distances were compared between groups with a Mann-Whitney U test. The difference in the distribution of inter-basal body distances between the DMSO-treated and AEE788-treated groups was statistically significant (p = 2.4 × 10⁻²). The difference between the group harvested immediately after AEE788 treatment and the population given 1.5 h to recover was statistically significant (p = 4.8 × 10⁻²). The difference between the 1.5-h and 2-h recovery groups was highly statistically significant (p = 9.1 × 10⁻¹²).
The difference in inter-basal body distances between the 2-h and 3-h groups was statistically significant (p = 4.1 × 10⁻⁵). (D) Distances between pro-basal bodies (pairs of TbSAS6-positive foci) in cells with two mature basal bodies (mBBs) are plotted. Bars on the graph show the median and inter-quartile range. Numbers to the right in black indicate median distances between a mature basal body and a pro-basal body for each group. Numbers in red denote median distances between pairs of mature basal bodies for each group. Cells analyzed = 59 (DMSO), 29 (AEE788), 16 (1.5 h recovery), 32 (2 h recovery), 92 (3 h recovery). Inter-basal body distances were compared between groups with a Mann-Whitney U test. The difference in the distribution of inter-basal body distances between the DMSO-treated and AEE788-treated groups was not statistically significant. The difference between the AEE788 treatment group and the group at 1.5 h of recovery was statistically significant (p = 2.1 × 10⁻²). The difference between the 1.5-h and 2-h recovery groups was not statistically significant. The difference between the 2-h and 3-h groups was statistically significant (p = 2.4 × 10⁻⁵). Distances between mature basal bodies (pairs of TbRP2-positive foci) in the same cells are listed to the right in red.

Overall, these data are consistent with successful separation of mBBs in T. brucei after knockdown of TbCK1.2, despite the decreased median inter-basal body distances, since the ranges of inter-basal body distances (5th-to-95th percentile) are practically identical before and after knockdown of TbCK1.2. We conclude that separation of mBBs per se is not sufficient for division of kinetoplasts, since 1K2N cells fail at scission of kinetoplasts although they contain clearly separated mBBs. Nevertheless, the distance between separated mBBs decreased by 0.2 µm (median) after knockdown of TbCK1.2, suggesting that mBBs may need to separate beyond 1.2 µm before division of kDNA takes place in a trypanosome. Two hypotheses are proposed to reconcile these data (see Discussion). Our data also show that, unlike basal bodies in insect-stage (procyclic) T. brucei [14,29], mBBs in bloodstream trypanosomes do not migrate further apart in 2K2N (compared to 2K1N) cells (Fig 5). Basal bodies nucleate the axonemal microtubules of flagella/cilia [20]. For that reason, we evaluated the competence of basal bodies in 1K2N trypanosomes to form flagella (S4A Fig). The majority (75%) of 1K2N cells had two flagella (S4B Fig), indicating that the basal bodies retain competence for microtubule nucleation. A small percentage of trypanosomes had more than two flagella, indicating that some supernumerary basal bodies (Fig 3) produce flagella. TbCK1.2 is detected in the cytoplasm We considered the possibility that TbCK1.2's effect on kinetoplast division could be explained, at least in part, by its intracellular location. Using a V5-epitope-tagged TbCK1.2 RNAi line (S2A Fig) we localized TbCK1.2 to cytoplasmic puncta (S2E Fig and Fig 6). The TbCK1.2 protein sequence lacks a mitochondrial targeting signal at its N-terminus that could have been disrupted by a V5 tag. In control experiments, similar data were obtained when a C-terminal HA-tagged version of TbCK1.2 was used in immunofluorescence studies (S2F Fig).
Since TbCK1.2 was not detected predominantly in mitochondria (Fig 6A), these data suggest that the effect of the enzyme on kDNA scission is most likely transmitted by other factors (i.e., effectors) (see next section). Following knockdown of TbCK1.2, the abundance of 65 phospho-peptides (corresponding to 53 unique gene IDs [42]) decreased at least two-fold in each phospho-proteomic study, and 144 phospho-peptides (corresponding to 109 unique gene IDs) increased at least two-fold in each phospho-proteomic study, as compared to the uninduced controls (Fig 6B, S3 and S4 Tables). Phospho-peptides that changed in abundance after knockdown are considered "TbCK1.2-pathway proteins"; those that decreased in abundance are potential substrates of the enzyme. Some TbCK1.2-pathway proteins might localize to the mitochondrion TbCK1.2 regulates division of kinetoplasts, which are inside mitochondria (Fig 1). However, the enzyme is found predominantly in the cytoplasm (Fig 6A). Since the vast majority of mitochondrial proteins are produced in the cytoplasm before their import into the organelle [43,44], we hypothesized that TbCK1.2's modulation of kinetoplast division might involve "effector proteins" that are phosphorylated in the cytoplasm prior to their movement into the mitochondrion. This hypothesis has precedent: three cytoplasmic protein kinases in Saccharomyces cerevisiae have substrates that are imported into mitochondria [45-47]. Consequently, we asked whether any TbCK1.2-pathway proteins were potentially mitochondrial. TbCK1.2-pathway proteins (i.e., 65 de-phosphorylated peptides (S3 Table) (corresponding to 53 gene IDs) and 144 hyper-phosphorylated peptides (S4 Table) (corresponding to 109 gene IDs)) were combined into one data set and analyzed for possible mitochondrial association as follows. Gene identities (IDs) were compared to proteins that localize to mitochondria in trypanosomes (as reported by TrypTag, an in vivo protein-tagging and localization database [48]). Two bona fide mitochondrial proteins were found among TbCK1.2-pathway proteins (S5A Table). In a second approach, TbCK1.2-pathway proteins were compared to two mitochondrial proteomes containing 1730 proteins [49,50] in TriTrypDB (release 49 beta) [51]. Polypeptides found in both data sets were filtered by eliminating glycosomal or nuclear proteins [52,53], resulting in 13 proteins (S5B Table). (Proteins are imported post-translationally into nuclei and glycosomes [54,55].) Thus, there is a total of 15 mitochondrial proteins, two of which have been verified at the cellular level, among TbCK1.2-pathway proteins. Future work will address possible contributions of these proteins to kDNA division.
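The selection of TbCK1.2-pathway proteins and the mitochondrial cross-referencing amount to a set-intersection pipeline; a sketch of that logic, with invented placeholder gene IDs standing in for the real tables:

```python
# Sketch: select phospho-peptides changing >= 2-fold in BOTH studies,
# then cross-reference gene IDs against mitochondrial annotations while
# excluding glycosomal/nuclear proteins. All IDs are placeholders.

# (gene_id, fold_change_study1, fold_change_study2)
peptides = [
    ("Tb927.1.1000", 0.4, 0.45),  # decreased in both -> candidate substrate
    ("Tb927.2.2000", 2.5, 3.1),   # increased in both
    ("Tb927.3.3000", 2.2, 1.1),   # inconsistent -> dropped
]

down = {g for g, f1, f2 in peptides if f1 <= 0.5 and f2 <= 0.5}
up = {g for g, f1, f2 in peptides if f1 >= 2.0 and f2 >= 2.0}
pathway = down | up

mitochondrial = {"Tb927.1.1000", "Tb927.9.9000"}   # e.g., TrypTag/proteome hits
glycosomal_or_nuclear = {"Tb927.9.9000"}

candidates = (pathway & mitochondrial) - glycosomal_or_nuclear
print(sorted(candidates))  # ['Tb927.1.1000']
```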
Efforts to understand the contributions of protein kinases to biology have been most fruitful when single pathways are studied in detail to identify participants that are eventually ordered into signaling cascades, as shown for JAKs [57,58,63], EGFR [64][65][66], and CDKs [67][68][69]. In T. brucei, we find that TbCK1.2 affects cytokinesis (Fig 1), separation of mBBs (Fig 5), and scission of kDNA (Fig 2). Following the lead of investigators in other biological systems [57,58,63,[70][71][72] (discussed above), we focus this manuscript on a single pathway affected by TbCK1.2, kinetoplast division (Figs 1 and 2). This decision is not meant to minimize the importance of other pathways affected by TbCK1.2. Neither is it a suggestion that failure of kinetoplast scission after knockdown of TbCK1.2 causes the other effects mentioned above. Instead, the decision is an acknowledgement of the futility of attempting to provide a comprehensive account of all three physiological pathways affected by the enzyme in one publication. Our working hypothesis is that all pathways affected by TbCK1.2 are impacted concurrently, because the enzyme is found in multiple regions of the cell (Figs 6, S2E and S2F) where it engages different effector proteins (S3 Table) for each of the pathways affected by the enzyme.

Kinetoplast division factor hypothesis

A kinetoplast biogenesis cycle has, minimally, five steps: kDNA synthesis, selection of scission sites, cleavage/scission, separation of kinetoplasts, and sorting of cleaved kDNAs (S1 Fig) (see Introduction for definition of terms). Division (i.e., scission/cleavage and initial separation) of kDNA is poorly characterized; no protein that mediates the process has been identified to date (reviewed in [12]). In this report we show that mutant 1K2N trypanosomes obtained after knockdown of TbCK1.2 (Fig 1) have two well-separated basal bodies (Figs 3, 5 and S5) and yet fail to divide kDNA (Fig 1). Hence, knockdown of TbCK1.2 de-couples basal body separation from division of kinetoplasts, so that basal bodies move apart without scission of kDNA. Separation of basal bodies is not sufficient to divide a kinetoplast, although that event precedes segregation of kinetoplasts [1,13,14,20]. (The original use of the word "segregation" in kinetoplastid biology [10] referred to the process that we term "scission" (S1 Fig) [11]. However, "segregation" is now used for all events associated with kDNA inheritance [10][11][12].) Based on our data (summarized above) we propose that trypanosomes with one kinetoplast and one nucleus can have a post-basal body separation "kDNA intermediate" containing uncleaved kDNA (K_U). The intermediate (K_U) is short-lived under normal circumstances. Conversion of K_U into two cleaved kDNA networks (in 2K1N trypanosomes) is arrested after knockdown of TbCK1.2, making it possible to detect the normally elusive 1K_U1N intermediate (Fig 1E and 1F). In DAPI staining of kDNA, the intermediate may be detected as a 1K1N trypanosome with "over-replicated" kDNA (Fig 1E); quantitative electron microscopy studies document scission failure of K_U (Fig 2A). With time, a 1K_U1N cell produces 1K_U2N trypanosomes after mitosis (Fig 1D and 1F). In G1, trypanosomes have one mature basal body and one pro-basal body (Fig 7).
During S-phase, kDNA synthesis (Step 1) is accompanied by separation of the pro-basal body from the mature basal body (Step 2), and maturation of pro-basal bodies (Step 3) [23,78], producing trypanosomes with two basal bodies and a double-length kDNA (Figs 1E and 2A). Mature basal bodies are separated by 0.55 to 2.11 microns (Fig 5). KDFs [11] are recruited (or activated) when mBBs are separated by more than 1.2 microns (Step 4), leading to scission of kDNA (Step 5). Basal body separation beyond a threshold of 1.2 microns may be a "licensing step" for scission of kDNA when KDFs are either activated or recruited to kinetoplasts. In G2, cleaved and separated kDNAs are visible microscopically (Step 6), and are sorted into daughter trypanosomes during cytokinesis (Step 7).

Phenotypes accompanying knockdown of genes for KDFs or tripartite attachment complex (TAC)-associated proteins are distinguishable

Some properties of kinetoplasts in cells where TbCK1.2 (a KDF) was knocked down (Figs 1, 1E and 2) appeared to resemble those obtained after knockdown of TAC-associated proteins (TACAPs) [12]. A closer examination shows that mutants of KDFs and TAC-associated proteins have different properties. First, early phenotypes of kDNAs (i.e., observed within 24 h after knockdown of a gene in bloodstream T. brucei) are distinguishable between KDFs and TAC-associated proteins (discussed in [11]). KDF loss prevents scission of kDNA (Figs 1 and 2), whereas knockdown of TACAPs, best illustrated by RNAi of p166, the first reported TACAP [79], does not prevent cleavage of kDNA [11]. Second, we observed an increase in 2K2N (post-mitotic) trypanosomes after knockdown of TbCK1.2 (Fig 1, and see first paragraph of Discussion), pointing to defective cytokinesis, whereas knockdown of TACAPs does not inhibit cytokinesis [12]. Third, KDF knockdown reduces separation of mBBs, whereas RNAi of TAC genes does not shorten distances between basal bodies. Finally, TAC gene mutations lead to loss of kDNA from proliferating trypanosomes, whereas KDF knockdown is not associated with loss of kinetoplasts from T. brucei.

Candidate effector proteins for TbCK1.2 regulation of kinetoplast scission

Although TbCK1.2 is a KDF (Fig 1), the protein is not detected selectively at the kinetoplast (Fig 6A). Therefore, TbCK1.2's modulation of kinetoplast scission is likely to be mediated by "effector proteins" that localize to mitochondria. In T. brucei we found fifteen putative mitochondrial proteins among TbCK1.2-pathway proteins (S5 Table). This observation is not unlike that in Saccharomyces cerevisiae, where three cytoplasmic protein kinases have substrates that are imported into mitochondria [45][46][47], and a CK1 regulates activity of the protein import pore of mitochondria [80]. In future studies, we will test whether or not the trypanosome proteins localize to mitochondria, and whether their knockdown (or overexpression) affects scission of kDNA. Quantitation of fluorescence intensity from DAPI-stained kinetoplasts. After knockdown of TbCK1.2, images of DAPI-stained control and RNAi-treated trypanosomes were acquired under the same conditions using a DeltaVision II microscope system. Additionally, the brightness and contrast settings of display images during post-processing were kept identical. Using ImageJ [82], a box was drawn over each kinetoplast and the sum of the pixels in the selection was measured (raw integrated density).
To control for background fluorescence, a box with the same dimensions used for each kinetoplast was drawn at two areas near the organelle of interest, and the raw integrated density determined. The average of the two background fluorescence measurements was then subtracted from the integrated density of the respective kinetoplast. Cells were pooled from three independent experiments. For the 12-hour time point, 1K1N minus Tet n = 183, 1K1N plus Tet n = 198.
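The background-corrected measurement just described reduces to sums over equally sized pixel boxes. A minimal sketch, assuming a 2-D NumPy image array and user-chosen box coordinates (the function names and coordinates are hypothetical, not part of the published analysis):

    import numpy as np

    def raw_integrated_density(img, y, x, h, w):
        # Sum of pixel values in a box of height h and width w at (y, x),
        # mirroring ImageJ's raw integrated density for a rectangular selection.
        return float(img[y:y + h, x:x + w].sum())

    def kinetoplast_intensity(img, kbox, bg_box1, bg_box2):
        # kbox and the background boxes are (y, x, h, w) tuples of equal size;
        # the average of the two background boxes is subtracted, as in the text.
        kd = raw_integrated_density(img, *kbox)
        bg = 0.5 * (raw_integrated_density(img, *bg_box1) +
                    raw_integrated_density(img, *bg_box2))
        return kd - bg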
The cover slips were immersed in PBS for 5 min for re-hydration, briefly rinsed with deionized H 2 O, gently dried with a Kim-Wipe, and mounted on a slide with VectaShield mounting medium (Vector Laboratories, Burlingame, CA) containing 5 µM 4',6-diamidino-2-phenylindole (DAPI). Images were captured on an EVOS-FL microscope (ThermoFisher), and numbers of kinetoplasts and nuclei per cell (n = 150 per experiment) were counted. Three independent experiments were performed. Statistical analysis was performed as described below. Measurement of inter-basal body distances. The distance between basal bodies in trypanosomes with two mature basal bodies was determined by drawing a line between the center of YL1/2-positive basal bodies and measuring the distance using ImageJ. Trypanosomes from five independent experiments were analyzed (97 uninduced 1K1N cells, 94 induced 1K1N cells, 95 induced 1K2N cells, 81 uninduced 2K1N cells, 61 induced 2K1N cells, 84 uninduced 2K2N cells, 51 induced 2K2N cells). The distance between pro-basal body and mature basal body in trypanosomes with one mature basal body, and distance between two pro-basal bodies in cells that had two mature basal bodies was determined by drawing a line between centers of anti-SAS6-positive objects, and measuring the distance using ImageJ. All cells analyzed had one kinetoplast. For AEE788 experiments, the numbers of cells with one mature basal body were as follows: 41 from DMSO treated group, 131 from AEE788 treated group, 99 cells from population treated with AEE788 and allowed to recover from drug pressure for 1.5 h, 106 cells from 2 h recovery, and 62 cells from 3 h recovery. For cells with two mature basal bodies, numbers analyzed were respectively: 58 cells from DMSO treatment, 29 from AEE788 treatment, 16 from 1.5 h recovery, 32 from 2 h recovery, and 92 from 3 h recovery group. Western blotting. Total cell lysate from trypanosomes (8 x 10 5 per sample) was used for western blotting [85]. A Stain-Free labeled gel was activated [86,87] before transfer of proteins to a PVDF membrane for normalization of total protein (see Supplemental Material). SILAC and label-free preparation of trypanosome peptides for LC-MS/MS. Three mass spectrometry experiments were performed. Label-free phosphopeptides were isolated and analyzed in two biological replicates as described in Supplemental Materials. An inclusion list [88] was used during analysis of the second label-free experiment (see Supplemental Material). Additionally, a tetracycline-inducible TbCK1.2 RNAi line was cultured in HMI-9 medium modified for SILAC [89,90]. Induced (light medium) and uninduced (heavy medium) trypanosomes (3 x 10 7 cells per sample) were combined and processed as described [91]. Phospho-peptide enrichment and LC-MS/MS analysis is described in Supplemental Material. Spectral counts from the two label-free experiments were combined, and phospho-peptides that showed at least a 2-fold decrease (or increase) in SILAC and the label-free strategies were considered putative TbCK1.2-pathway proteins. Statistical analysis Unless otherwise stated, Excel (Microsoft) and Graphpad Prism were used for Student's t test. Chi-squared (x 2 ), Mann-Whitney, and one-way ANOVA tests were executed using GraphPad Prism. For all statistical analysis, α = 0.05. Exact p-values for most statistical tests are calculated in Prism to 15 significant digits. In some cases, exact p-values were unavailable due to being smaller than 1x10 -15 . 
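The line-tool measurement above is, in effect, a Euclidean distance between focus centroids scaled by the pixel size, and groups were compared with the Mann-Whitney U test reported in the figure legends. A short sketch under those assumptions (the coordinates, pixel size, and example distances are hypothetical placeholders):

    import math
    from scipy.stats import mannwhitneyu

    def inter_bb_distance(c1, c2, microns_per_pixel):
        # c1, c2: (x, y) centroids of YL1/2-positive foci, in pixel units.
        return math.dist(c1, c2) * microns_per_pixel

    d = inter_bb_distance((120.0, 85.0), (135.0, 95.0), 0.064)  # ~1.15 microns

    # Two-sided Mann-Whitney U test on two groups of distances (placeholder data):
    group_2h = [1.10, 1.25, 0.98, 1.40, 1.05]
    group_3h = [1.55, 1.72, 1.48, 1.90, 1.61]
    stat, p = mannwhitneyu(group_2h, group_3h, alternative="two-sided")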
Western blotting. Total cell lysate from trypanosomes (8 × 10^5 per sample) was used for western blotting [85]. A Stain-Free labeled gel was activated [86,87] before transfer of proteins to a PVDF membrane for normalization of total protein (see Supplemental Material). SILAC and label-free preparation of trypanosome peptides for LC-MS/MS. Three mass spectrometry experiments were performed. Label-free phospho-peptides were isolated and analyzed in two biological replicates as described in Supplemental Materials. An inclusion list [88] was used during analysis of the second label-free experiment (see Supplemental Material). Additionally, a tetracycline-inducible TbCK1.2 RNAi line was cultured in HMI-9 medium modified for SILAC [89,90]. Induced (light medium) and uninduced (heavy medium) trypanosomes (3 × 10^7 cells per sample) were combined and processed as described [91]. Phospho-peptide enrichment and LC-MS/MS analysis are described in Supplemental Material. Spectral counts from the two label-free experiments were combined, and phospho-peptides that showed at least a 2-fold decrease (or increase) in both the SILAC and the label-free strategies were considered putative TbCK1.2-pathway proteins. Statistical analysis. Unless otherwise stated, Excel (Microsoft) and GraphPad Prism were used for Student's t test. Chi-squared (χ2), Mann-Whitney, and one-way ANOVA tests were executed using GraphPad Prism. For all statistical analyses, α = 0.05. Exact p-values for most statistical tests are calculated in Prism to 15 significant digits. In some cases, exact p-values were unavailable due to being smaller than 1 × 10^−15. Exact p-values for Dunnett's multiple comparisons test are calculated to 4 significant digits. Exact p-values smaller than 1 × 10^−3 are not calculated. (E) Representative images from immunofluorescence assays performed with anti-V5 antibody. TbRP2, a basal body transition zone protein, was visualized using YL1/2 antibody. Bottom row shows tagged cells not exposed to primary antibodies. Scale bar = 5 µm. (F) One allele of TbCK1.2 was tagged with an HA epitope (C-terminal) and used for immunofluorescence assays with anti-HA antibody following paraformaldehyde fixation. TbRP2 protein was visualized using YL1/2 antibody. Scale bar = 5 µm. 1K2N trypanosome. Membranes were stained with mCLING [5], basal bodies were labeled with YL1/2 antibody, and DNA was stained with DAPI. Maximum-intensity projections of z-stacks of images were acquired with an SR-SIM microscope. (TIF) S1 Table. Descriptive statistics for kDNA intensity measurements before and after knockdown of TbCK1.2. Experimental details are presented in the legend to Fig 1B. (DOCX) S2 Table. Inter-basal body distances following cell cycle synchronization. RUMP528 cells were treated with 5 µM of AEE788 or 0.1% (v/v) DMSO. After this, cells were released from drug treatment. Median distances and 95% confidence intervals of the median between the mature basal body and pro-basal body (for cells with a single mature basal body), and between two pro-basal bodies (for cells in which development of a second mature basal body has occurred), are displayed. (DOCX) S3 Table. TbCK1.2 pathway proteins with decreased phospho-peptide abundance after knockdown of TbCK1.2. Following a 24-h knockdown of TbCK1.2, phospho-peptides were harvested from uninduced and induced cells and enriched over an IMAC column (see Materials and methods). Phospho-peptide abundance was calculated in each sample using a labeled (SILAC) approach (n = 1) and a label-free approach (spectral counting, SC) (n = 2). Phospho-peptides identified with decreased abundance (at least 2-fold) in each phospho-proteomics strategy are listed. Phosphorylation sites are indicated in red (PhosphoRS [6] value >79%). * indicates the number of phospho-sites which could not be accurately assigned. The fold change in phospho-peptide abundance, as compared to the uninduced control, is shown. ~99 indicates that the phospho-peptide was present in only the control or induced population, preventing calculation of an abundance ratio. All listed peptides had a PEP value (probability that the spectrum-peptide match was incorrect) of 6% or less. N/A indicates that the exact phospho-isoform of the indicated peptide was not identified. A control experiment comparing the abundance ratio of phospho-peptides from uninduced cells grown in heavy or light SILAC medium was performed. Peptides that showed a 2-fold change in abundance in both the control and experimental group are not reported as TbCK1.2 pathway proteins. (DOCX) S4 Table. Putative TbCK1.2 effectors with increased phospho-peptide abundance after knockdown of TbCK1.2. Following a 24-h knockdown of TbCK1.2, phospho-peptides were harvested from uninduced and induced cells and enriched over an IMAC column (see Materials and methods). Phospho-peptide abundance was calculated in each sample using a labeled (SILAC) approach (n = 1) and a label-free approach (spectral counting, SC) (n = 2). Phospho-peptides identified with increased abundance (at least 2-fold) in each phospho-proteomics strategy are listed.
Phosphorylation sites are indicated in red (PhosphoRS [6] value >79%). * indicates the number of phospho-sites which could not be accurately assigned. The fold change in phospho-peptide abundance, as compared to the uninduced control, is shown. ~99 indicates that the phospho-peptide was present in only the control or induced population, preventing calculation of an abundance ratio. All listed peptides had a PEP value (probability that the spectrum-peptide match was incorrect) of 6% or less. N/A indicates that the exact phospho-isoform of the indicated peptide was not identified. A control experiment comparing the abundance ratio of phospho-peptides from uninduced cells grown in heavy or light SILAC medium was performed. Peptides that showed a 2-fold change in abundance in both the control and experimental group are not reported as putative TbCK1.2 effectors. (DOCX) S5 Table. Putative mitochondrial TbCK1.2 pathway proteins. "TbCK1.2-pathway proteins" (S3 and S4 Tables) were re-analyzed in search of mitochondrial proteins as follows. (A) Gene IDs for TbCK1.2-pathway proteins were compared to proteins that localize to the mitochondrion in the TrypTag database. (B) Gene IDs for TbCK1.2-pathway proteins were compared to two mitochondrial proteomes (combined) containing 1730 proteins [7,8] available in TriTrypDB (release 41) [9]. Polypeptides found in both data sets were filtered by eliminating proteins in glycosome or nucleus proteomes [10,11], resulting in 21 proteins. (Proteins are imported post-translationally into both nuclei and glycosomes [12,13].)
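The two-step screen in S5 Table is, in essence, a pair of set operations over gene ID lists. A hedged sketch of that logic follows; the function and variable names and the example IDs are placeholders, with the real lists coming from TrypTag and the TriTrypDB proteomes cited above:

    # Hypothetical sketch of the S5 Table screen: intersect pathway-protein
    # gene IDs with mitochondrial evidence, then subtract glycosomal and
    # nuclear proteome IDs to reduce false positives.
    def mitochondrial_candidates(pathway_ids, tryptag_mito, mito_proteomes,
                                 glycosome_ids, nucleus_ids):
        verified = pathway_ids & tryptag_mito            # part (A): in vivo tagging
        predicted = (pathway_ids & mito_proteomes) - glycosome_ids - nucleus_ids
        return verified, predicted                       # part (B): proteome overlap

    v, p = mitochondrial_candidates({"Tb1", "Tb2", "Tb3"}, {"Tb1"},
                                    {"Tb2", "Tb3"}, {"Tb3"}, set())
    # v == {"Tb1"}, p == {"Tb2"}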
Will colleges survive the storm of declining enrollments? A computational model The approaching decline in the U.S. college-age population, sometimes referred to as a “demographic storm,” has been identified as an existential threat to the future of American colleges and universities. This article conducts a model-driven analysis of three plausible college-level responses to declining applications. It draws on systems theory to conceptualize a tuition-dependent college as a complex service system and to develop a system dynamics model that captures key causal interrelationships and multiple feedback effects between faculty, facilities, tuition revenue, financials, reputation, and outcomes. Simulations with the college model suggest that common solutions such as reducing faculty or adding campus facilities may improve the college’s short-term financial position, but they are insufficient to ensure its long-term viability. This model contributes to the research literature on the economics of higher education, and model-driven academic management and strategy. It also provides useful implications and insights that can inform policy-makers and college leaders. Introduction There are nearly four thousand degree-granting institutions in the United States [1]. They range from highly selective global research universities with tens of thousands of students to small community colleges with open admission. These institutions of higher education face many challenges. One existential threat is the approaching decline in the U.S. college-age population, sometimes referred to as a "demographic storm" [2]. The prospects are especially dire for tuition-dependent private colleges [3,4], and some observers have predicted that half of American colleges and universities will soon perish [5]. In this context, college leaders seek to understand how to adapt to declining student applications [4,[6][7][8][9]. State governments and policy-makers would also benefit from insight into this problem to prevent escalating college closures [10] and the associated negative impact on the U.S. economy. We describe the problem of declining enrollments and frame it using a computational model. The article approaches colleges as complex systems that provide educational services [11][12][13][14][15][16]. The system view of academic institutions studies the dynamics of education provision by understanding how the elements of the institution interact in response to changes in the operating environment. The theory developed here is implemented as a system dynamics computational model [17,18] that includes causal feedback mechanisms between students, faculty, facilities, and college financials. Thus, the article contributes to the literature on the economics of higher education [2,4,12], and model-driven academic management and strategy [19][20][21]. The proposed model allows us to conduct a model-driven analysis of three plausible college-level responses to declining applications. The "do-nothing" scenario serves as the base case. The remaining two scenarios investigate strategies aimed at cutting costs and increasing revenue. A common cost-cutting strategy is to reduce the number of teaching faculty [3,7,22,23]. The third scenario examines a revenue increase strategy, according to which a college attempts to attract more students by offering better facilities [3,6,7,22,24]. Overall, this analysis provides useful implications and insights that can inform policy-makers and college leaders.
The next section describes the problem of declining student applications and how colleges have tried to overcome it. Then, we review the study method. Next, we build a system dynamics model of a representative tuition-dependent college. Lastly, we use the computational college model to analyze three scenarios, discuss results and insights, and propose extensions for future research. The problem of declining enrollments Nationwide trends indicate that the college-age population in the U.S. will drop between 13 and 29 percent depending on the state in the next ten years [2,8]. For example, in Massachusetts, the number of high school graduates is projected to drop by about 15 percent within a decade [7]. The demographic decline is likely to translate into lower enrollments and operating deficits at tuition-dependent colleges [4]. Moreover, operating expenditures per student will increase because the costs of running a college will spread across fewer students [4]. Declining enrollments are terrible news for many private colleges that are often teetering on the brink of closure [22]. Historically, institutions attempted to resolve operational deficits by increasing revenue and cutting expenses, as the following examples demonstrate. Looking back to the 1990s, Townsley [22] recounts stories of several colleges that struggled with declining enrollments and failed. For example, Bradford College in Massachusetts tried maintaining enrollments through the 1980s and 1990s by adding new majors and offering generous financial aid. Before permanently closing, it offered 40 majors while having only 35 faculty members. Between 1988 and 1998, the share of the revenue from tuition and fees given back as financial aid, called the discount rate, increased from 19 to 48 percent. Operating deficits continued through the 1990s. In 1999, the deficit was $6.1 million on an annual budget of $14 million. In 1998, the college took an $18 million loan to refinance old debt and to build a new dormitory, hoping that the new building would attract students and increase enrollment. However, enrollments did not improve, and the college closed in 2000. Another failed college reviewed by Townsley [22] is Trinity College, a women's college, which operated in Vermont. From 1990 to 1999, enrollment in continuing education and undergraduate programs dropped by about 30 percent, despite a discount rate as high as 45 percent. When the operating deficit reached $2.7 million, the college cut 20 of 30 majors and kept only ten faculty members. In academic year 1999-2000, about 60 students enrolled, which forced the college to close in 2000. More recently, Rivard [23] provides an account of 15 small private colleges that responded to financial troubles due to lackluster student recruitment with dramatic cuts in faculty and staff accompanied by program closures. For example, Midway College in Kentucky laid off about 30 percent of its 54 faculty. Holy Family University in Philadelphia let go of 20 percent of its 100 faculty members in addition to cutting staff positions. Similarly, Anderson University in Indiana reduced its faculty by four percent. Wittenberg University in Ohio reduced faculty positions by 21 percent. Hampshire College in Amherst, Massachusetts, provides a striking recent example of the challenges that tuition-dependent colleges face. It admitted its first students in 1970. The college is known as an experiment in self-directed education because it has no grades, majors, or traditional departments [25][26][27].
Nearly 90 percent of the revenue comes from tuition and fees [28]. As student enrollments declined by 20 percent between 2014 and 2019, revenue dropped from $60 million to $49 million (Fig 1). Due to decreasing enrollments, the college has been experiencing operating deficits since 2016. The college used major gifts and emergency endowment withdrawals to address these deficits. Moreover, it responded by reducing faculty and staff and by cutting operating expenses [28]. It still expects a deficit of $20 million by 2022, which might lead to a closure or a merger with another institution. Method Higher education management and resource planning is a complex task that involves balancing the wishes of multiple stakeholders [12,14,22]. Despite its complexity, academic planning is still often performed with minimal analytical backing. Model-driven academic planning is an improvement over traditional methods because it allows academic stakeholders to consider alternatives and review the dynamics of plausible scenarios before making a decision [19]. The first models for academic planning were developed in the 1960s [30,31]. Early planning tools relied on spreadsheet models [32]. Eliman [33] combined a statistical regression model and a linear programming model to estimate the supply of school graduates and the demand for university spots, and to determine the allocation of students. Strategic planning tools also used Markov chain models to simulate student performance [34] and economic input-output models for resource planning on campus [35]. Researchers have been advocating for using the systems approach for academic planning because it is well-suited for modeling the complex and dynamic nature of higher education [12,13,20,[36][37][38]. Therefore, this article adopts system dynamics to model the operations of a college. System dynamics is a modeling methodology that recognizes circular chains of causality that form feedback loops and introduce delays [17,39]. Besides quantitative variables, system dynamics models can include qualitative measures, such as the reputation of an institution. The system dynamics approach [17,18,53] involves building a computational model in several iterative steps. First, the problem is clearly stated, which means that the simulated time range and the behaviors that need to be examined are identified. The time range and the behaviors determine the level of analysis and the model boundary. During the second step, the modeler lists variables to be included in the model. For example, this article performs analysis at the college level, and therefore environmental factors that are beyond college control are assumed to be external to the model. Third, based on the research literature, field work, or interviews with domain experts, causal relationships between variables are documented using a pictorial notation [54] similar to that used for signed directed graphs. In the fourth step, the causal structure developed in the previous step is implemented as a computational model. System dynamics models are usually built and simulated in specialized modeling software such as Stella Architect (sold by isee systems), Vensim (offered by Ventana Systems), or Powersim (sold by Powersim Software). This study uses Stella Architect. Mathematically, a system dynamics model is a set of nonlinear, non-stochastic integral equations that are solved numerically by the modeling software.
The computational model is used to simulate scenarios that test public policies or management strategies. System dynamics has been used for high-level policy planning as well as for studies at the college level [55]. For example, Galbraith [21] analyzed the effects of national educational policies in Australia. Strauss and Borenstein [56] built a system dynamics model to explore difficulties in achieving national educational goals in Brazil. Bergland et al. [57] forewarned the administration of a college of an upcoming budgetary collapse due to its student admission policies. Zaini et al. [58] modeled the strategic resource allocation at a new university in Russia. Barlas and Diker [59] used an interactive system dynamics model to analyze long-term management of enrollment, number of faculty, teaching quality, research output, and outside consulting. Oyo et al. [60] studied the impact of government funding schemas on university capacity and productivity in a developing country. Sahay and Kumar [61] used system dynamics to investigate "what-if" scenarios for teaching quality improvement at a university. This article adapts and extends an earlier model [62], which was built, validated, and calibrated in consultation with key stakeholders at a private university. We generalize the previous model in order to study the financial viability of tuition-dependent colleges. We add financial details, which are informed by the relevant literature on college economics and management. The following section develops the college model that describes the operation of a typical tuition-based college. The college model The college model consists of four interconnected sectors: Students, Faculty, Facilities, and Financials. This section describes the causal structure of each sector, while the mathematical equations and parameter values are in Appendix B. Students. The students sector (Fig 2) represents student enrollments and several factors that affect them. Arrows indicate causal directionality [54]. An arrow is positive when the cause and effect variables change in the same direction. When the cause and effect move in opposite directions, the link is negative. Rectangles indicate variables that accumulate, called stocks. Stocks describe the state of the system. Mathematically, stocks are defined by integral equations, which introduce inertia and delays into the system. Circular causal connections form feedback loops. The letter B indicates a balancing (negative) feedback loop, while the letter R indicates a reinforcing (positive) loop. Fig 2 shows two balancing loops. Balancing loops add stability to the system. The model assumes that there is an exogenous number of applications every year. Of the admitted applicants, only a fraction, called the yield, eventually enrolls at the college [63]. The new students join the existing stock of students. The model includes two factors that affect the enrollment decision: the academic reputation of the college and the adequacy of campus facilities, expressed as facilities shortage. Facilities shortage is reduced when the stock of facilities increases. Colleges compete for students by investing in facilities [6,7,22]. Academic reputation depends on the faculty [4]. Assuming that this is an undergraduate college, we exclude research. The number of faculty and students determines the faculty teaching load, which increases as more students arrive on campus and decreases as the college hires more faculty. High faculty teaching loads lower the academic experience of faculty, student satisfaction, and the college reputation.
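To make the stock-and-flow idea concrete, here is a minimal numerical illustration in Python of how a stock integrates its flows. This is not the authors' Stella Architect model, and all parameter values are hypothetical:

    # A single stock (students) with one inflow (enrollment) and one outflow
    # (graduation/attrition), integrated with Euler's method.
    def simulate_students(years=15, dt=0.25, students0=2000.0,
                          new_per_year=550.0, avg_stay=4.0):
        students = students0
        for _ in range(int(years / dt)):
            inflow = new_per_year            # students enrolling per year
            outflow = students / avg_stay    # students leaving per year
            students += (inflow - outflow) * dt
        return students

    # The stock settles toward inflow * avg_stay = 2,200 students,
    # illustrating the stabilizing behavior of a balancing loop.
    print(round(simulate_students()))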
Faculty. The faculty teaching load increases when there are more students (Fig 3). If the college does not address the high faculty load issue, then the faculty academic experience deteriorates, which affects morale, leading to faculty attrition. As professors leave the university, the stock of faculty decreases, and therefore the teaching load of the remaining faculty increases even further, which again degrades the academic experience of the faculty. This circular causation forms a reinforcing loop marked by the letter R, a vicious cycle, which can drag down the academic experience on campus. To lower the teaching load, the college can hire more professors; this is the balancing loop in Fig 3. Facilities. Facilities planning is one of the primary strategic responsibilities of academic leadership [24]. Facilities include dorm rooms for students, classrooms and laboratories for teaching, and office space for faculty. More faculty and students may lead to a facilities shortage, a problem that the college can address through new construction (Fig 4). However, because capital projects are complex undertakings that involve many stakeholders and take many years of planning, fundraising, and construction, available space often lags behind the desired space, especially in times of growing or declining enrollment [22,24]. Most of the capital funding comes in the form of loans [24]. Therefore, this model assumes that the college constructs facilities with borrowed funds. Maintenance and operation of the facilities add to the operating cost. Facilities shortage negatively impacts faculty academic experience. Financials. While college finances are complex and intertwined [12,22,64], for simplicity, this model includes only three financial stocks: the emergency reserve of cash, endowment, and debt (Fig 5). Tuition, room, board, and fees are the main contributors to the revenue and are assumed to be constant. Future versions of the model can relax this assumption. The discount rate is the fraction of tuition, room, board, and fees given back to some students in the form of financial aid. Providing financial aid reduces the amount of money that the college has for operational expenses [4,22]. Some unrestricted gifts can be used for operations. Expenses include faculty salaries, the cost of operating facilities, and debt payments. The difference between revenue and expenses constitutes the net revenue. Maintaining a cash reserve for a "rainy day" is one of the approaches to strengthening the financial health of a college [3]. The model assumes that the college maintains a stock of cash, which is replenished when the college has an operating surplus and depleted when the college needs to use the cash for operations. When the college borrows for operations or new construction, new loans add to the stock of the existing debt. In the example in Fig 6A, the revenue of $60 million is spent on salaries, operating facilities, and paying off debt; the remainder is the surplus. If operating expenses are higher than the revenue, then the net revenue is negative, and it is called the operating deficit. It can be covered by drawing from the cash reserve, unrestricted gifts, endowment withdrawals, and borrowing for operations (Fig 6B). We assume that internal university rules limit the percentage of the endowment that can be withdrawn every year. The endowment can be increased with new gifts.
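The deficit-funding order just described (cash reserve first, then a capped endowment draw, then borrowing) can be sketched as follows. The 5 percent draw cap and the dollar figures are illustrative assumptions, not parameters from the paper:

    # Hypothetical sketch of covering an operating deficit, in $ millions.
    def cover_deficit(deficit, cash, endowment, debt, max_draw=0.05):
        from_cash = min(deficit, cash)
        remaining = deficit - from_cash
        from_endowment = min(remaining, endowment * max_draw)  # annual draw cap
        borrowed = remaining - from_endowment                  # rest is new debt
        return cash - from_cash, endowment - from_endowment, debt + borrowed

    print(cover_deficit(6.0, 2.0, 40.0, 10.0))  # -> (0.0, 38.0, 12.0)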
Complete model. Fig 7 shows the causal structure of the entire college model, which consists of the four sectors detailed above. This model captures the many causal and feedback effects between elements of a representative tuition-based college, which include faculty, facilities, tuition revenue, endowment, debt, reputation, and educational outcomes. The system model tracks numerous simultaneous effects triggered by the external operating environment and by management decisions. Scenario analysis This section uses the college model to study three possible responses to declining student applications and operating deficits. The first simulated scenario examines the "do nothing" strategy, which serves as the base case. The second strategy aims to lower operating costs by reducing the number of faculty. The goal of the third strategy is to increase revenue by attracting more students when the college improves its facilities. These cost and revenue strategies are popular with stressed colleges, as has been discussed earlier in this article. Colleges may pursue additional strategies, but they are not considered in this article and will be studied in future research. We model the demographic decline as an external variable using the function in Fig 8. Here, we consider a 15 percent application drop, which is the situation predicted for Massachusetts [7]. Massachusetts has recently seen a slew of college closures, which warranted serious concerns at the regulatory level [10]. Appendix A shows simulation results for different decline rates. All simulations start in equilibrium in 2010, when the college receives A0 = 6,500 applications per year. Simulations run for 15 periods through 2025. We assume that applications start to decline in year t1 = 2015 and that they decline over the following ten years, that is, Δ = 10 years. Since this section assumes a decline rate of 15 percent, i.e., β = 0.15, after 10 years applications drop to (1−β)A0 = 0.85 × 6,500 = 5,525 applications per year.
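A linear ramp is consistent with the endpoint arithmetic above; a sketch of such an input function follows (the linear shape is an assumption about Fig 8, which is not reproduced here):

    # Applications input: constant at A0 until t1, then a linear decline
    # over delta years down to (1 - beta) * A0.
    def applications(t, A0=6500.0, t1=2015.0, delta=10.0, beta=0.15):
        frac = min(max((t - t1) / delta, 0.0), 1.0)  # progress through decline
        return A0 * (1.0 - beta * frac)

    print(applications(2010), applications(2025))  # 6500.0 5525.0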
To isolate the effects of the three strategies, we assume away gifts, interest on debt, and market returns on endowment. In all scenario simulations, we introduce interventions in 2015, ceteris paribus. We compare the short-term (two years) and long-term (10 years) outcomes. Scenario 1: Do nothing. The first run simulates the situation when the college does not actively mitigate the declining applications. The "do nothing" scenario demonstrates the adverse effects of the demographic decline. As applications drop (solid line in Fig 9), so does the number of enrolled students (dashed curve in Fig 9). In this scenario, the stocks of faculty and facilities remain constant. Lower student enrollments hurt the revenue (dashed curve in Fig 10) and cause yearly deficits after 2018 (see the gap between revenue and expenses in Fig 10). The college resolves annual deficits by withdrawing from the endowment and, when the endowment draw is not sufficient beyond 2022, by borrowing (see Fig 11). A bigger debt requires higher interest payments that add to the operating expenses. The annual spending per student increases (dashed curve in Fig 12) as expenses are distributed over fewer students. Note that this model assumes that tuition, room, board, and fees (solid line in Fig 12) stay constant. At colleges that do not have significant endowments, full-paying students subsidize students who receive scholarships [4]. When spending per student approaches the sticker price, the college's ability to offer financial aid is reduced. Scenario 2: Reduce faculty (cost strategy). The following experiments implement the cost strategy by allowing only a fraction of the faculty searches that the college would otherwise conduct:

• Curve 1 in all figures is the base run from the "do nothing" scenario, during which all desired searches are permitted. The run starts in the steady state with zero net revenue (Fig 13). After 2019, the college experiences budget deficits (negative net revenue). It never recovers, as curve 1 stays below zero.
• Curve 2 is for the experiment when 75 percent of all desired searches are allowed. Despite the decline in revenue due to lower enrollments, the expenditure cuts are sufficient for the college to experience a temporary surplus surge until 2023 (Fig 13). After 2023, the college runs an operating deficit. The few remaining faculty experience higher teaching loads (curve 2 in Fig 14). Student enrollments are not significantly different from the base run (Fig 15).
• Curve 3 is a trajectory for the case that allows for 50 percent of searches. Net revenue stays positive through the simulation (Fig 13). At this level of hiring, the faculty teaching load is higher than during the base run (Fig 14), which encourages more faculty attrition. Student enrollments are not significantly different from the base run (Fig 15).
• Curve 4 is for the case when only 25 percent of faculty searches are allowed by the administration. The college experiences a spike in surplus (Fig 13). The faculty teaching load increases dramatically (Fig 14). Student enrollments are the lowest (Fig 15) of all the cases due to the low reputation of the college.

This set of experiments suggests that carefully tuned faculty reductions may offer a short-lived financial reprieve. However, reducing faculty may have long-term adverse effects, and therefore the college may need to find other long-term solutions. Note that this analysis understates the adverse effects of faculty cuts. The model does not consider several secondary effects of the cost strategy such as fewer courses, a modest selection of majors and minors, scarce academic support, and negative morale on campus and across alumni. Moreover, reporting by the media and public discussions on social media are likely to heighten the harmful effects of faculty cuts. Scenario 3: Invest in facilities (revenue strategy). Colleges invest in facilities to improve their competitive standing, which they hope will attract more students and improve their revenue [6,22]. As the president of one troubled liberal arts college stated, the college "...could invest in new facilities to improve its application and retention rates, ultimately reversing the vicious cycle of underwhelming enrollment trends and tuition dependence into a virtuous one of growing demand and a diversified financial portfolio" [7]. The following set of experiments explores this scenario. These simulations assume that, to increase its competitiveness and improve enrollments, the college leadership commits to expanding the classroom space per student by 10 percent. In the simulation, we model this decision as a step function in 2015 (dotted line in Fig 16). This decision leads to new construction (dashed curve in Fig 16) that increases the stock of campus facilities over time (solid curve in Fig 16). Following the common practice [24], the model assumes that the college borrows funds for new construction. In Figs 17-20, curve 1 is the base run, the "do nothing" scenario.
Curve 2 shows a case when, despite better facilities, enrollments do not increase over the base run. Curves 3, 4, and 5 show cases in which, due to new facilities, the incoming classes are 5, 10, and 20 percent larger than in the base case. Curve 5 is the most optimistic case for the college. Fig 17 depicts changes in student enrollments. Curve 1 of the base run coincides with curve 2, as expected. Curves 3, 4, and 5 show increased enrollments. Fig 18 shows corresponding revenue changes. When there are no changes in enrollments, revenue does not change (curve 2) from the base run (curve 1). When enrollments increase (curves 3, 4, and 5), tuition revenue increases. The college earns the highest revenue when enrollments increase by 20 percent (curve 5). Fig 19 compares operating expenses for the base run (curve 1) to four additional cases (curves 2-5). The operating expenses increase due to the interest on the new debt that financed construction and the cost of operating the new facilities. In addition, more students imply that the college needs to maintain a larger teaching staff, which also adds salaries to the operating cost. Fig 20 shows the net revenue for the five simulations, including the base run (curve 1). In the short term, the college experiences surpluses if enrollments increase by at least 10 percent (curves 4 and 5). However, in the long term, expenses wipe out the new revenue from additional student enrollments. Summary of results. Table 1 summarizes results for the three scenarios discussed above over the short and long term. Enrollments begin to decline in 2015. This is also the year when the model implements the policies aimed at mitigating the decline. The next column shows indicator values for 2017, two years into the policies. Ten-year values are in the last column; this is the long term. For the cost strategy, we show the performance when 75 percent of faculty searches were allowed. The revenue strategy shows the case when the incoming class jumps by 20 percent. All three scenarios start in the same state in 2015. However, the short-term and long-term outcomes are different for the three strategies. The "do nothing" scenario demonstrates the negative consequences feared by colleges. In the long run, it shows a significantly lower incoming class, lower student enrollments, negative net revenue (i.e., operating deficit), higher expenditure per student, a lower endowment, and a substantial debt. In the base case, neither faculty nor facilities change over the ten years. While the college can function for a few more years if it funds operations from the endowment and gifts, the trend is not sustainable in the long run. The two other scenarios are attempts to improve the situation. In the second scenario, the faculty levels are the lowest of the three scenarios. In the short term, by having fewer faculty, the college improves its financial situation, as there is a surplus, and spending per student is lower. Even after ten years, no withdrawals from the endowment are necessary, and the college does not borrow for operations. The deficit, which is the smallest of the three scenarios, is paid from the cash reserve accumulated over the prior years. The long-term expenditure per student increases as compared to 2015; however, it is the lowest of the three scenarios. In the third scenario, the college increases facilities per student by 10 percent, which it hopes will improve its competitiveness and lead to more enrollments and revenue.
In the short term, the college attracts an incoming class of 900 students, which is the largest class in the three scenarios. To accommodate more students, the college hires more faculty. In the short run, due to the surge in tuition revenue, the college experiences a significant surplus. As the college borrows funds for construction, in the long term, the expansion results in a substantial debt. Due to the surplus in preceding years, the college amasses a significant cash reserve that it uses to cover the operational deficit. The college manages to preserve its endowment intact. Discussion We now discuss implications for practice and insights from our results in a form that is most relevant to academic leaders and policy-makers. It is important to note that our simulations do not provide precise forecasts for any given college. However, the model explains the general effects and consequences of the two strategies aimed at mitigating declining college enrollments. Table 2 highlights the pros and cons of the three scenarios. Simulations suggest that the "do nothing" strategy allows maintaining the status quo in the short term; however, it is not sustainable in the long run. Because the college does not have any surplus, it cannot accumulate cash reserves. Hence, it must draw from the endowment and borrow for operations when it experiences a deficit. The cost strategy reduces the number of faculty, creates a surplus, and, in the short term, lowers the spending per student. The number of students in the long run is the same as in the "do nothing" scenario. The college manages to preserve the endowment and accumulates no debt. However, the college still runs an operating deficit, even though the deficit is the smallest of the three scenarios. In the third scenario, the college reverses declining enrollments. It attracts more students, but it incurs debt as the college borrows for construction. Within ten years, the college has more faculty, more students, and more facilities. However, it also runs a deficit, which is not sustainable in the long run unless tuition increases sufficiently to cover the operating deficit. While the cost strategy leads to the least damaging financial situation for the college, neither of the strategies is sustainable in the long term because each of them results in an operating deficit. Moreover, after 10 years, average spending per student is higher under each strategy, which would add to the pressure for the college to increase tuition. While continuous tuition escalation has sustained colleges in the past [4,22], economic theory [65] suggests that increasing tuition might be a counterproductive approach at a time when demand for college is shrinking. Possible better solutions include encouraging higher college attendance rates [2], reengineering universities as data-driven institutions [12,66], encouraging campus innovation for additional revenue streams [5], or completely redesigning the higher education business model [67]. Conclusion There is an approaching "storm" in the U.S. undergraduate student market. As the college-age population declines, tuition-dependent colleges need to adapt to the demographic change. Motivated by this problem, this article conducts a model-driven analysis of three plausible scenarios.
It draws on systems theory to conceptualize a college as a complex service system and to develop a system dynamics computational model that captures core causal interrelationships and multiple feedback effects between faculty, facilities, tuition revenue, financials, reputation, and outcomes. The resulting college model makes it possible to perform simulations that test the short-term and long-term financial viability of a college. The analysis suggests that common solutions such as cutting costs by reducing faculty or improving campus facilities to attract students and increase revenue may improve the college's short-term financial position. However, these strategies are insufficient to ensure the long-term viability of the college without continuous tuition hikes. The main contribution of this article is a computational model that adds to our understanding of higher education economics, management, and strategy. It can be used for model-driven academic management that supplements traditional planning at colleges. The analysis of this feedback-rich model provides insights that can inform college leaders and policy-makers. The computational model and model-driven analysis can be used together with such strategy tools as SWOT (Strengths, Weaknesses, Opportunities and Threats) [3,68] and PESTLE (Political, Economic, Social, Technical, Legal and Environmental) [69] that examine the impact of external environmental factors on colleges. The computational model can also provide value as part of an interactive learning environment that can help with dynamic decision making [70]. System-based learning and planning environments can improve performance and decision-making on several scales, including decision heuristics, structural knowledge, decision time, and decision strategy [71,72], especially when combined with prior exploration [73] and debriefing [74][75][76][77]. Limitations of this model can provide fruitful topics for future research. This version of the model does not analyze the effects of interest rates, market returns, marketing, yield determination, and discount rates. These variables might be critical for marginally viable colleges. Therefore, we plan to consider these variables in future extensions of the model that would allow new strategies in addition to the ones studied in this article. The student sector can be expanded to include academic advising and co-curricular activities that influence retention rates. We could also examine combined strategies. Future research could also consider how the proliferation and improvement of digital technologies [78] may alter demand for on-campus education. Appendix A: Alternative application decline rates To examine how the results vary if the decline rate is lower than 15 percent, we have performed additional simulations. Table 3 provides performance outcomes for a college when applications decline by five percent, and Table 4 shows results when applications drop by 10 percent. The simulations show that the college can easily weather a five percent decline. When applications drop by five percent (Table 3), the college can still maintain its status quo in the short and long term without any strategic adjustments. This is because the college still receives enough applications to recruit a sufficiently large incoming class. Considering that the student population stays constant, the cost strategy leads to a smaller faculty size, which implies a greater-than-normal faculty workload, not a desirable outcome.
Because the college earns a surplus, there is no need to borrow for operations. Under the revenue strategy, the institution grows: there are more students, faculty, and facilities. In the short term, the college earns a significant operating surplus, which, however, comes at the cost of significant debt due to additional construction. In the long run, continued construction adds to the debt, and, without a tuition hike, the college would face an operating deficit (negative net revenue). The endowment can be preserved in the short and long term. To sum up, if applications decline by five percent, the "do nothing" strategy is sustainable. A 10 percent decline in applications worsens the situation so that a "do nothing" strategy is no longer acceptable (Table 4). In the long term, the college experiences an operating deficit that it covers by drawing from the endowment and borrowing. The cost strategy allows the college to have a positive net revenue (a surplus) for the next 10 years, which eliminates the need to draw from the endowment and borrow for operations. The revenue strategy always leads to the largest student body and the most faculty among the three scenarios. The college earns a significant surplus in the short run, but the surplus turns into a large operating deficit by 2025. The college accumulates a significant debt. In summary, if faced with a 10 percent drop in applications, the "do nothing" strategy is not sustainable and the college would be well advised to pursue the cost strategy.
Components of iron–sulfur cluster assembly machineries are robust phylogenetic markers to trace the origin of mitochondria and plastids Establishing the origin of mitochondria and plastids is key to understanding 2 founding events in the origin and early evolution of eukaryotes. Recent advances in the exploration of microbial diversity and in phylogenomics approaches have indicated a deep origin of mitochondria and plastids during the diversification of Alphaproteobacteria and Cyanobacteria, respectively. Here, we strongly support these placements by analyzing the machineries for assembly of iron–sulfur ([Fe–S]) clusters, an essential function in eukaryotic cells that is carried out in mitochondria by the ISC machinery and in plastids by the SUF machinery. We assessed the taxonomic distribution of ISC and SUF in representatives of major eukaryotic supergroups and analyzed the phylogenetic relationships with their prokaryotic homologues. Concatenation datasets of core ISC proteins show an early branching of mitochondria within Alphaproteobacteria, right after the emergence of Magnetococcales. Similar analyses with the SUF machinery place primary plastids as sister to Gloeomargarita within Cyanobacteria. Our results add to the growing evidence of an early emergence of primary organelles and show that the analysis of essential machineries of endosymbiotic origin provides a robust signal to resolve ancient and fundamental steps in eukaryotic evolution. We recently studied the taxonomic distribution and evolution of the characterized machineries, SUF and ISC, in Bacteria [16]. In eukaryotes, SUF and ISC operate in plastids and mitochondria, respectively [17], and catalyze similar biochemical steps but involve different proteins (Figs 1A and 2A). The components of the ISC machinery are encoded by nuclear genes and are translocated to mitochondria to synthesize [Fe-S] clusters [18]. The SUF components are also mostly encoded by nuclear genes, with the exception of SufB and SufC, which can be encoded in plastid genomes [19,20]. From previous phylogenetic analyses of some components, it is generally accepted that these machineries originated from the 2 endosymbionts [17,[21][22][23][24]. However, the phylogenetic signal brought by these systems as a whole has not been used to investigate the placement of mitochondria and plastids. We used our recent datasets [16] to analyze in detail the taxonomic distribution of the ISC and SUF machineries in 1,191 genomes covering the current diversity of Alphaproteobacteria (for a list of taxa, see S1 Table). The presence of ISC and SUF homologues in Alphaproteobacteria is patchy, ISC being identified mostly in the basal lineages (Magnetococcales, Marine Proteobacteria 1, MC1) and in Rickettsiales, whereas it is absent in most of the other Alphaproteobacteria, which have the SUF system (Fig 1B and S2 Table). The pattern of mutual exclusion between ISC and SUF has been observed in other bacterial groups, especially Gammaproteobacteria [16]. A phylogeny based on concatenation of 5 core ISC proteins (IscS, HscA, Fdx, CyaY, and IscA; 1,306 amino acid positions) is largely consistent with the reference Alphaproteobacteria phylogeny, showing the monophyly of all major orders (S1 Fig). These results indicate that the ISC system was present in the ancestor of Alphaproteobacteria and was inherited mainly vertically, while it was subsequently lost in many lineages during the diversification of this phylum. This distribution de facto excludes an origin of the mitochondrial ISC from many lineages.
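Building the concatenated supermatrix used for such a tree is mechanically simple. A hedged sketch follows; it assumes per-marker alignments stored as taxon-to-sequence dictionaries and pads taxa missing a marker with gaps, which may differ from the exact pipeline used in the study:

    # Concatenate per-marker alignments (e.g., IscS, HscA, Fdx, CyaY, IscA)
    # into one supermatrix; taxa absent from a marker receive gap characters.
    def concatenate(markers):
        taxa = set().union(*(m.keys() for m in markers))
        supermatrix = {t: "" for t in taxa}
        for m in markers:
            length = len(next(iter(m.values())))  # aligned length of this marker
            for t in taxa:
                supermatrix[t] += m.get(t, "-" * length)
        return supermatrix

    # usage sketch: matrix = concatenate([iscS, hscA, fdx, cyaY, iscA])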
We then investigated the presence of the SUF and ISC systems in 66 genomes covering the main eukaryotic phyla [25] (for a list of taxa, see S3 Table). Homologues of the 8 ISC machinery components were identified in most eukaryotic taxa (except for IscX) and are encoded in nuclear genomes (S4 Table). Preliminary single-gene tree analyses allowed us to clearly identify the eukaryotic orthologues of mitochondrial origin by their branching with Alphaproteobacteria (S1 Data). We therefore added these ISC components to the Alphaproteobacteria dataset to investigate their placement. Bayesian analysis using the CAT+GTR model supports the monophyly of eukaryotic sequences (PP = 1) and their deep branching, just after Magnetococcales (PP = 0.82). These results strengthen the Alphaproteobacteria-deep hypothesis, although the position of eukaryotes is more basal than previously observed [4,6], branching before the Marine Proteobacteria 1 group (Fig 1C and 1D). Moreover, the internal topology of Alphaproteobacteria agrees with the reference phylogeny of this phylum, notably with Rickettsiales branching after Magnetococcales and sister to all remaining Alphaproteobacteria (Fig 1D).

We investigated the robustness of the Alphaproteobacteria-deep placement by analyzing the ISC dataset with a panel of alternative models and methodologies (Fig 1C). An ML phylogeny with the LG+R8+C60+PMSF model indeed shows Eukaryotes as sister group of Rickettsiales, although with low support (bootstrap value (BV) = 48%) (Fig 1C, position #2 in Fig 1D, S2 Fig). Moreover, the internal phylogeny of Alphaproteobacteria shows incongruencies with both the reference phylogeny of the phylum and the alphaproteobacterial ISC concatenation tree, notably with the split of the Rickettsiales into 2 groups (S2 Fig), which strongly suggests a tree reconstruction artefact under this model when eukaryotic sequences are included. Given that ISC components are all nuclear encoded, the "Rickettsiales-sister" placement is unlikely to be due to a convergent compositional bias toward AT-rich genomes between Rickettsiales and mitochondria. An AminoGC plot (S3A Fig) shows that the GC bias of eukaryotic sequences is not particularly similar to either Rickettsiales or Magnetococcales. The removal of compositionally heterogeneous sites by a stationary-based method resulted in a tree consistent with the Bayesian tree, again placing Eukaryotes at the base of Alphaproteobacteria, after the Magnetococcales, although with low support (BV = 23%), while recovering a correct internal phylogeny of Alphaproteobacteria (Figs 1C and S4). An AminoGC plot of the trimmed dataset shows a reduced difference between the groups, suggesting that lowering the GC bias between clades may remove the incorrect placement of Eukaryotes with Rickettsiales (S3B Fig). Surprisingly, although nucleic acid sequences are usually more prone to compositional bias, an ML tree obtained from the nucleic acid version of the original alignment (GTR model) also strongly supports the "Alphaproteobacteria-deep" placement (BV = 100%), although with a poorly resolved internal topology of Alphaproteobacteria (S5 Fig). A %GC plot of this dataset shows a pattern similar to that of the full protein dataset (S3C Fig). Altogether, these results suggest that sequence compositional bias is at least partially responsible for the placement of Eukaryotes with the Rickettsiales, in agreement with recent analyses [4,6,7]. This tree reconstruction artefact can be counterbalanced either by removing compositionally heterogeneous positions or by using the GTR model, whereas ML site-heterogeneous models such as LG+C60+PMSF fail to tackle this issue.
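The compositional-bias diagnostics above (AminoGC and %GC plots) can be approximated with a simple per-taxon screen. The sketch below is an illustrative stand-in, not the authors' implementation: the input file name is hypothetical, and the FYMINK/GARP amino acid ratio is used as a commonly cited proxy for AT/GC-driven compositional bias in protein sequences.

```python
# A minimal sketch (not the authors' AminoGC implementation) for flagging
# compositional bias in an alignment. The file name is hypothetical.
from collections import Counter

def read_fasta(path):
    """Parse a FASTA file into {header: sequence}."""
    seqs, name = {}, None
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                name = line[1:].split()[0]
                seqs[name] = []
            elif name:
                seqs[name].append(line)
    return {n: "".join(parts).upper() for n, parts in seqs.items()}

def fymink_garp_ratio(seq):
    """FYMINK (AT-rich codons) vs GARP (GC-rich codons) amino acid usage,
    a common proxy for nucleotide compositional bias at the protein level."""
    counts = Counter(a for a in seq if a.isalpha())
    fymink = sum(counts[a] for a in "FYMINK")
    garp = sum(counts[a] for a in "GARP")
    return fymink / garp if garp else float("inf")

for name, seq in read_fasta("isc_concat.faa").items():  # hypothetical file
    print(name, round(fymink_garp_ratio(seq), 3))
```

Taxa with outlying ratios would be candidates for the kind of compositional attraction suspected between Rickettsiales and mitochondrial sequences.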
We then used the same approach to investigate the origin of plastids by analyzing the SUF system (Fig 2). We used the dataset from our previous study [16], originating from 95 genomes covering the current diversity of Cyanobacteria (for a list of taxa, see S5 Table). Whereas the SUF system is largely present in most genomes, the ISC system is absent (Fig 2B and S6 Table). An ML tree obtained from concatenation of the 4 most conserved SUF markers (SufB, SufC, SufD, and SufS; 1,561 amino acid positions) is largely consistent with the reference Cyanobacteria phylogeny, except for a few misplacements likely due to specific intra-phylum HGTs (S6 Fig). We did not include Gloeobacter in the dataset, as we have recently shown that its original SUF was replaced by one laterally acquired from other bacteria [16].

Among Eukaryotes, SUF homologues are present only in photosynthetic lineages, either nuclear or plastid encoded (S4 Table). Preliminary single-gene tree analyses allowed us to identify the eukaryotic orthologues of plastid origin by their branching with Cyanobacteria (S1 Data). When included in the cyanobacterial dataset, Bayesian analysis (CAT+GTR) (Fig 2D) and ML analysis (LG+C60+PMSF) without and with stationary-based trimming (S7 and S8 Figs, respectively) support a deep placement of plastids within Cyanobacteria (position #1 in Fig 2C and 2D), in agreement with previous analyses [9,10]. Interestingly, these trees favor the placement of plastids as sister to Gloeomargarita lithophora (PP = 0.65, BV = 70%, BV = 54%), strengthening recent data [11][12][13]. Puzzlingly, ML analysis with nucleic acids (S9 Fig) infers a highly incongruent tree where Eukaryotes are not monophyletic and branch with different cyanobacterial clades (positions #2 and #2' in Fig 2C and 2D), which strongly indicates that this tree is not reliable. Finally, our data nicely confirm the independent acquisition of a primary plastid in the amoeba Paulinella micropora from a member of Prochlorococcales [26,27] (Figs 2D and S7-S9).

The origins of mitochondria and plastids are difficult questions to address phylogenetically, owing to the antiquity of these events and to the potential biases in composition and evolutionary patterns arising from the profound adaptations that occurred during the endosymbiosis process. Recent advances in the genomic coverage of Alphaproteobacteria and Cyanobacteria, together with improvements in phylogenomics approaches and evolutionary models, have helped clarify the timing and origin of these endosymbioses. We show here that the [Fe-S] cluster biosynthesis ISC and SUF machineries provide a robust additional dataset to infer the origin of organelles. Four criteria support the pertinence of using these machineries: (i) they were inherited from Alphaproteobacteria and Cyanobacteria at the origin of Eukaryotes and primary photosynthetic lineages, respectively; (ii) they carry out an essential cellular function; (iii) most of their components are encoded in the nucleus, reducing the problem of compositional biases; and (iv) being part of a highly integrated process, they were likely subjected to similar evolutionary constraints.
The complexity of the large protein families, including the ISC and SUF components, may have prevented their selection in previous large-scale automated analyses searching for organelle orthologues. Therefore, a similar approach focusing on the detailed analysis of other fundamental eukaryotic systems of mitochondrial and plastid origin, coupled with an increase in genomic coverage from deep branches of the Alphaproteobacteria and Cyanobacteria, will surely provide further key information on our most ancient past.

Assembly of datasets

Datasets of prokaryotic homologues of IscA, IscS, IscU, IscX, CyaY, HscA, HscB, Fdx, SufB, SufC, SufD, SufE, SufS, SufT, and SufU were already assembled as described in [16]. Here, for eukaryote sequences, we used the same procedure. Briefly, we used HMM profiles of each component from [16] to perform an HMM search using HMMER v3.2.1 [28] on the eukaryotic database, selecting hits with an e-value < 0.01. These hits were then added to the prokaryotic homologue datasets. Sequences were aligned using MAFFT v7.419 [29] (auto option), the alignments were manually curated to eliminate nonhomologous sequences, and preliminary phylogenies were inferred using FASTTREE v2.1.10 [30] (LG+G4), with and without trimming using BMGE v1.12 [31] (BLOSUM30). For each component, eukaryotic orthologue subfamilies were delineated manually based on the branching of sequences within Cyanobacteria or Alphaproteobacteria, taxonomic distribution, domain composition, and sequence length. We did not find any homologues of the archaeal SMS system [16], with the exception of SmsB and SmsC, which are fused in the same ORF in the genome of Blastocystis sp. ATCC 50177, as previously reported [32]. The SufB and SufC of the 4 Glaucophyta plastid genomes were identified using tBLASTn [33] and added manually. All preliminary trees used for the delineation of eukaryote orthologue groups are available in S1 Data.

For the concatenation, we selected the markers based on different criteria. Markers that were not broadly distributed (IscX in eukaryotes; SufU and SufT in Cyanobacteria) were eliminated. We also discarded HscB and IscU, as they did not form clear monophyletic groups in preliminary trees. Finally, in eukaryotes, 2 homologues each of IscA (ISA1 and ISA2, belonging to the large ATC-II and ATC-I protein subfamilies, respectively [22]), SufE, and SufS were identified. For IscA (ISA) and SufS, we selected the paralogues distributed similarly to the other ISC/SUF components and branching with Alphaproteobacteria and Cyanobacteria, respectively, in the preliminary phylogenies (ISA1 and SufS1 in S4 Table). We discarded the whole SufE family, as it included 2 clades (SufE1 and SufE2 in S4 Table) that either contain multiple paralogues or are not widely conserved in Eukaryotes. Although the ATC-II family is shared by the ISC and SUF systems (IscA and SufA) in Alphaproteobacteria, we selected IscA for the ISC concatenation, as we observed that it lies in the vicinity of the rest of the ISC system [16] and follows the reference tree of the Alphaproteobacteria.
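As an outline of the homologue-search procedure just described (HMMER search, MAFFT alignment, BMGE trimming, FastTree preliminary trees), here is a minimal sketch using subprocess calls. File names are hypothetical, and only widely documented tool options are shown; the authors' exact command lines are not given in the text, so the flags should be read as assumptions.

```python
# A sketch of the homologue-search pipeline: hmmsearch -> mafft -> BMGE ->
# FastTree. Tool paths and file names are hypothetical.
import subprocess

def run(cmd, **kw):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True, **kw)

profile, database = "IscS.hmm", "eukaryote_proteins.faa"  # hypothetical files

# 1. HMM search with the e-value cutoff used in the paper (< 0.01).
run(["hmmsearch", "-E", "0.01", "--tblout", "IscS_hits.tbl", profile, database])

# 2. Align the retained hits (assumed already extracted to IscS_hits.faa).
with open("IscS_aln.faa", "w") as out:
    run(["mafft", "--auto", "IscS_hits.faa"], stdout=out)

# 3. Trim the alignment with BMGE (BLOSUM30, as in the paper).
run(["java", "-jar", "BMGE.jar", "-i", "IscS_aln.faa", "-t", "AA",
     "-m", "BLOSUM30", "-of", "IscS_trim.faa"])

# 4. Quick preliminary tree with FastTree under the LG model.
with open("IscS_fasttree.nwk", "w") as out:
    run(["FastTree", "-lg", "-gamma", "IscS_trim.faa"], stdout=out)
```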
Finally, each protein family dataset was aligned using MAFFT v7.419 [29] (LINSI option) and trimmed using BMGE v1.12 [31] (entropy threshold = 0.95, minimum length = 1, matrix = BLOSUM30), and individual duplicated sequences (paralogues, isoforms, assembly artifacts) were removed after visual inspection of trees and alignments. The nucleic acid sequences were back-aligned on the amino acid sequences by converting each amino acid into its respective codon with a custom script (S2 Data). For trimming of the nucleic acid alignments, we used the -t CODON option of BMGE. For the concatenations, we kept a taxon if it possessed n(markers) ≥ 3 for both ISC and SUF, except for Rhodophyta and Glaucophyta (SUF), for which we retained the only 2 detected markers. The highly divergent sequences from amitochondriate eukaryotes (Metamonada and Entamoeba) were removed to avoid tree reconstruction artefacts.

For the reference trees of Alphaproteobacteria and Cyanobacteria, we assembled supermatrices using IF2, RpoB, and RpoC as markers with the same procedure as described above.

Phylogenetic inference

The ML phylogenies of the ISC and SUF systems based on protein sequences were inferred using IQ-TREE v1.6.10 [34], with the best model according to the BIC criterion and with the PMSF (posterior mean site frequency) method with 60 mixture categories [35], with the starting phylogenies inferred by site-homogeneous models. To assess the robustness of branches, 100 nonparametric bootstrap replicates were used. The tree of SUF was rooted using Nitrosomonadales and Balneolaeota, as SUF was anciently acquired by horizontal gene transfer from these organisms [16] (S7 Table). SUF homologues from Gloeobacter were removed, as the original system was replaced in this bacterium by HGT from other bacteria [16]. The ML phylogenies based on nucleic acid sequences were inferred using IQ-TREE with the GTR/SYM models. The Bayesian phylogenies were inferred using Phylobayes v4.1c [36] with the GTR+CAT model. For ISC, 4 chains were run for around 99,000 iterations each. The convergence between chains was tested using bpcomp with a sampling of 1,885 and 1,878 trees (every 50 trees) and a burn-in of 5,000. Two chains with a maxdiff < 0.15 (0.13) were used to infer the consensus tree; the other maxdiff values correspond to lower but acceptable convergence (0.23, 0.25, 0.22, 0.32, 0.32). For SUF, 4 chains were run for around 86,000 iterations each. The convergence between chains was tested using bpcomp with a sampling of 14,259 and 14,235 trees (every 5 trees) and a burn-in of 15,000. Two chains with a maxdiff ≤ 0.3 (0.3079) were used to infer the consensus tree; the other maxdiff values correspond to nonconvergent runs (0.75, 0.45, 0.87, 0.58, 0.60).

The ML reference phylogenies of Alphaproteobacteria and Cyanobacteria were inferred using IQ-TREE v1.6.10 [34], with the best model according to the BIC criterion and with the PMSF method with 60 mixture categories [35], with the starting phylogenies inferred by site-homogeneous models. To assess the robustness of branches, 1,000 fast-bootstrap replicates were used. The two reference trees were rooted using other Proteobacteria and Melainabacteria, respectively, as outgroups (S7 Table).
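The inference steps can be sketched in the same style. The commands below follow the documented interfaces of IQ-TREE 1.6 (two-step PMSF) and Phylobayes (CAT+GTR chains checked with bpcomp); alignment and chain names are hypothetical, and options not stated in the text (such as the guide-tree model) are assumptions.

```python
# A sketch of the two-step PMSF and Bayesian analyses described above.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

aln = "isc_concat.faa"  # hypothetical concatenated alignment

# Step 1: guide tree under a site-homogeneous model.
run(["iqtree", "-s", aln, "-m", "LG+G", "-pre", "guide"])

# Step 2: PMSF approximation of the C60 mixture, 100 nonparametric bootstraps.
run(["iqtree", "-s", aln, "-m", "LG+C60+F+G", "-ft", "guide.treefile",
     "-b", "100", "-pre", "pmsf"])

# Bayesian analysis: launch two (of the four) CAT+GTR chains. pb runs until
# stopped, so chains are started in the background and terminated manually
# once converged.
chains = [subprocess.Popen(["pb", "-d", "isc_concat.phy", "-cat", "-gtr", c])
          for c in ("chain1", "chain2")]

# After sufficiently many iterations: convergence check (burn-in 5,000,
# sampling every 50 trees, as reported for the ISC analysis).
run(["bpcomp", "-x", "5000", "50", "chain1", "chain2"])
```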
Fig 1. (A) Schematic view of [Fe-S] cluster biosynthesis by the ISC system and the corresponding components. (B) Taxonomic distribution of the ISC and SUF systems mapped on the Alphaproteobacteria reference tree (IQ-TREE, LG+R10+C60+PMSF; IF2+RpoB+RpoC; 3,429 amino acid positions, 1,193 sequences). Dots at branches indicate ultrafast-bootstrap values ≥ 95%. The scale bar indicates the average number of substitutions per site. (C) Summary of the placement of mitochondria using different approaches and models. Two alternative positions are indicated and reported in (D). (D) Bayesian phylogeny from the concatenated ISC dataset including alphaproteobacterial and eukaryote homologues (IscA+IscS+HscA+Fdx+CyaY; Phylobayes, GTR+CAT; 1,306 amino acid positions, 149 sequences). Dots at branches indicate posterior probabilities ≥ 0.95. The scale bar indicates the average number of substitutions per site. Numbers at the tips indicate the taxonomy IDs from NCBI. The data underlying this figure can be found in S1 Data. https://doi.org/10.1371/journal.pbio.3002374.g001

Fig 2. (A) Schematic view of [Fe-S] cluster biosynthesis by the SUF system and the corresponding components. (B) Taxonomic distribution of the ISC and SUF systems mapped on the Cyanobacteria reference tree (IQ-TREE, LG+R10+C60+PMSF; IF2+RpoB+RpoC; 3,026 amino acid positions, 107 sequences). Dots at the branches indicate ultrafast-bootstrap values ≥ 95%. The scale bar indicates the average number of substitutions per site. (C) Summary of the placement of plastids using different approaches and models. Three alternative positions are indicated and reported in (D). (D) Bayesian phylogeny from the concatenated SUF dataset including cyanobacterial and eukaryote homologues (SufB+SufC+SufD+SufS; Phylobayes, GTR+CAT; 1,565 amino acid positions, 112 sequences). Dots at branches indicate posterior probabilities ≥ 0.95. The scale bar indicates the average number of substitutions per site. Numbers at the tips indicate the taxonomy IDs from NCBI. The data underlying this figure can be found in S1 Data. https://doi.org/10.1371/journal.pbio.3002374.g002
3,780
2023-11-01T00:00:00.000
[ "Biology", "Environmental Science" ]
Graph-theoretical comparison of normal and tumor networks in identifying BRCA genes

Background Identification of driver genes related to certain types of cancer is an important research topic. Several systems biology approaches have been suggested, in particular for the identification of breast cancer (BRCA) related genes. Such approaches usually rely on differential gene expression and/or mutational landscape data. In some cases interaction network data is also integrated to identify cancer-related modules computationally. Results We provide a framework for the comparative graph-theoretical analysis of networks integrating the relevant gene expression, mutation, and protein-protein interaction network data. The comparisons involve a graph-theoretical analysis of normal and tumor network pairs across all instances of a given set of breast cancer samples. The network measures under consideration are based on appropriate formulations of various centrality measures: betweenness, clustering coefficients, degree centrality, random walk distances, graph-theoretical distances, and Jaccard index centrality. Conclusions Among all the studied centrality-based graph-theoretical properties, we show that a betweenness-based measure differentiates BRCA genes across all normal versus tumor network pairs better than the rest of the popular centrality-based measures. The AUROC and AUPR values of the gene lists ordered with respect to the measures under study, as compared to the NCBI BioSystems pathway and the COSMIC database of cancer genes, are the largest with the betweenness-based differentiation, followed by the measure based on degree centrality. In order to test the robustness of the suggested measures in prioritizing cancer genes, we further tested the two most promising measures, those based on betweenness and degree centralities, on randomly rewired networks. We show that both measures are quite resilient to noise in the input interaction network. We also compared the same measures against a state-of-the-art alternative disease gene prioritization method, MUFFINN. We show that both our graph-theoretical measures outperform MUFFINN prioritizations in terms of ROC and precision/recall analysis. Finally, we filter the ordered list of the best measure, the betweenness-based differentiation, via a maximum-weight independent set formulation and investigate the top 50 genes with regard to literature verification. We show that almost all genes in the list are verified by the breast cancer literature; three genes are presented as novel genes that may potentially be BRCA-related but are missing in the literature.

One class of approaches relies on the examination of recurrent mutations, whose observed frequency in a large cohort of cancer patients is much higher than expected. However, a significantly low overlap in alterations of the alternative driver genes is usually observed, giving rise to what is known as mutual exclusivity. Several approaches relying on mutation data have thus developed specialized techniques to deal with the issue of exclusivity [3][4][5][6][7]. A second class of approaches consists of those employing gene expression data in the form of expression profiling, gene coexpression, or differential expression analysis [1,8-10]. Recent integrative approaches employ one or both types of expression and mutation data together with interaction network data in the form of genetic or protein-protein interactions (PPI) [11][12][13][14].
Approaches combining gene expression data with the relevant interaction data in the context of long non-coding RNAs (lncRNA) have shown promising results in identifying lncRNA-disease associations [15][16][17][18][19]. In particular, the interactome has demonstrated its usefulness in explaining the observed patterns of mutations either in healthy or in diseased individuals [20]. Rather than identifying a set of cancer-related genes, the goal of the integrative computational approaches usually is to extract modules deemed central to the cancer. HotNet2 employs a random walk on the PPI network, distributing the mutation frequencies of genes throughout the network and giving rise to a directed graph whose strongly connected components represent the output modules [21]. MEMCover combines mutual exclusivity data of mutations across several tissue types with PPI network data to produce modules of cancer genes [22]. Although potentially useful for pan-cancer analysis, such approaches have limited use for specific cancer types, where the relatively small number of samples does not provide adequate information in the form of mutual exclusivity of the mutations. Furthermore, they focus on the discovery of cancer modules rather than prioritizing individual genes as cancer drivers. By contrast, a recent cancer gene prioritization method, MUFFINN, applies a network-centric analysis of mutation data, thereby integrating mutational information for individual genes and their neighbors in functional/interaction networks. It is suggested that MUFFINN's cancer gene prioritization performs well even in the setting where only data from a limited number of samples is employed [23]. We employ mutation data, gene expression data, and network data in the form of PPI networks to identify individual driver genes related to breast cancer. The general framework consists of a comparative analysis of graph-theoretical measures. It is based on differential identification of breast cancer genes via a pairwise comparison of the values attained for a specific graph-theoretical measure applied to a normal and a tumor tissue sample, over all available samples. Although recent studies comparing normal and tumor samples with regard to changes in genetic data, including mRNA expression, miRNA expression, or methylation alterations, have been suggested, our study extends these approaches by introducing a network aspect, together with several common graph centrality measures, into the comparison [24][25][26]. We note that graph centralities have been employed in the context of identifying breast cancer genes in the past [27]. Such an approach has been revisited recently, and an extension employing two different machine learning classifiers on computed centrality scores has been suggested [28]. However, rather than incorporating gene expression and mutation data, as is done in our study, these approaches are limited to gene signatures; a set of centrality measures has been applied to PPI networks limited to genes already known to be related to breast cancer, to assign a degree of importance. Furthermore, our framework involving a comparative analysis of network centralities in pairs of graphs generated from normal and tumor tissue samples introduces a novelty that enables a differential analysis of genes involved in breast cancer.

Methods

We summarize the overall methodology in Fig. 1. The three main components are data preparation, algorithmic computations, and analysis and evaluation of results.
Data preparation involves the necessary preprocessing of gene expression, mutation, and network data. This is followed by the algorithmic computations step, involving several graph-theoretical distance measures. The output, consisting of lists ordering genes with respect to their degrees of involvement in breast cancer, is evaluated in the final step. This involves ROC and precision/recall analysis as compared to two golden standard databases, COSMIC and NCBI BioSystems, and gene ontology analysis with respect to the GO database, in addition to these two golden standard datasets. The output list of the best performing measure is further filtered, and a detailed review of its top genes is done through literature verification.

Input data sets and data preparation

We gather the breast cancer data from The Cancer Genome Atlas Project (TCGA). There are 99 instances; each instance contains data in the form of expression levels of genes in the normal and tumor tissue samples of a patient, together with relevant mutation information regarding the tumor samples. For gene expression, we consider the RPKM (reads per kilobase per million mapped reads) normalization, which includes a gene length normalization of RNA-seq data, and apply a threshold of 1 to assign a gene as expressed. All somatic mutations other than those marked as silent are taken into account. In addition, we employ the H. sapiens protein-protein interaction network of the October 2016 version of the IntAct database [29]. The PPI network is filtered so that each interacting pair is a protein and each interaction is a physical interaction.

Fig. 1 Flowchart summarizing the overall methodology. The first step, depicted in part a, consists of data processing and the necessary filtrations of the input databases TCGA and IntAct. The second step, depicted in part b, involves the generation of pairs of normal/tumor graphs based on expression, mutation, and interaction data. Measures based on graph centralities are employed on the resulting graphs. Ten lists of genes, eight from centrality measures and two from control measures, ordering genes with respect to their computed weights, are provided as output. The final step, depicted in part c, consists of analyzing the ten lists with regard to ROC, precision/recall (P/R), and GO consistencies (GOC). Two datasets, NCBI BioSystems [37] and COSMIC [38], are employed in all three analyses, whereas for the GOC analysis an additional database, the GO database [39], is also employed. Among all tested centrality-based measures, M_bw provides the best performance in all three analyses. The M_bw list is further analyzed in more detail by filtering it based on a maximum weight independent set (MWIS) formulation, and the top genes from the resulting filtration go through a final literature verification step. a Data preparation, b Algorithmic computations, c Analysis and evaluation

Graph-theoretical framework

Let H be the H. sapiens PPI network. Employing the TCGA data, for each instance i of the available 99 instances we create a pair of graphs, N_i, T_i, corresponding to normal and tumor graphs respectively. The graph N_i is the subgraph of H induced by the node set corresponding to the set of genes expressed in the normal instance of i, whereas T_i is the subgraph induced by the expressed and non-mutated genes in the tumor instance of the same sample i.
Let P be the list of pairs of graphs P = ((N_1, T_1), ..., (N_r, T_r)), where each N_i, T_i corresponds respectively to the normal and tumor graphs of instance i, and let V = ∪_i (V_{N_i} ∪ V_{T_i}), where V_G denotes the node set of a graph G. A measure M_x is a function defined on P that orders the nodes in V according to some graph-theoretical property x. The performance of a measure depends on how well the position of each gene in this ordering matches its relevance to the cancer under study. The measures we consider are based on the following graph-theoretical properties commonly employed in network analysis studies: betweenness centrality, random walk distances, graph-theoretical distances, clustering coefficient, degree centrality, and Jaccard indices. All of these measures are defined on the nodes of a graph. According to the traditional classification of graph-theoretical properties, the first three are global measures, whereas the last three are local measures. A global measure defined on a node is a function of the whole graph, whereas a local measure defined on a node usually is a function of some locality centered around the node. For the purposes of this study, we introduce a novel classification, that of unlabeled versus labeled measures. A measure of the former type on a node considers all the rest of the graph as unlabeled; the topology of the network matters, but not the relationships between specific node pairs. For the latter, the node labels are important as well as the network topology. The betweenness centrality, the clustering coefficient, and the degree centrality are unlabeled measures, whereas the random walk distance, the graph-theoretical distance, and the Jaccard index based neighborhood overlaps are labeled measures. Once an ordering of the nodes with respect to a measure is determined, we apply a filtering based on maximum weight independent sets (MWIS) to select a subset of crucial nodes deemed important for the cancer under study.

Unlabeled graph-theoretical measures

In what follows we provide detailed descriptions of the employed measures. For each measure we provide a node weight assignment scheme, which defines the ordering of the measure. In the following, let G = (V, E) be an undirected graph, where V denotes the node set and E denotes the edge set of G. We first provide the definitions of the unlabeled graph-theoretical measures; each weight function aggregates, for a given node, the difference of the corresponding centrality value between the normal and tumor graphs over all instance pairs.

M_bw: This measure is based on the betweenness centrality. Given G = (V, E), the betweenness of a node v ∈ V is defined as bw_G(v) = Σ_{s,t ∈ V, s ≠ v ≠ t} σ_st(v)/σ_st, where σ_st is the number of shortest paths between nodes s, t and σ_st(v) is the number of such paths that go through the node v. This value is divided by (|V|−1)(|V|−2)/2 for normalization. Note that for a node v ∉ V, bw_G(v) = 0 trivially. Our first measure M_bw sorts the nodes of V in non-increasing order of the node weight function W_bw, defined for a node v as W_bw(v) = Σ_i |bw_{N_i}(v) − bw_{T_i}(v)|.

M_cc: This measure is based on the clustering coefficient. For a node v in a graph G = (V, E), the clustering coefficient cc_G(v) is the number of edges among the neighbors of v divided by the number of all possible such edges, deg(v)(deg(v)−1)/2. We note that for a node v ∉ V, cc_G(v) = 0 trivially. The measure M_cc sorts the nodes of V in non-increasing order of the weight function W_cc, defined for a node v as W_cc(v) = Σ_i |cc_{N_i}(v) − cc_{T_i}(v)|.
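As a concrete illustration of this differential weighting scheme, the sketch below computes W_bw and W_cc on toy normal/tumor pairs with networkx rather than the authors' C++/LEDA code; the absolute-difference aggregation follows the form given above, which is a reconstruction and should be read as an assumption, and the degree-based measures defined next plug into the same helper.

```python
# A minimal sketch of the differential, unlabeled measures; the
# absolute-difference aggregation across normal/tumor pairs is an
# assumption reconstructed from the paper's framework description.
import networkx as nx
from collections import defaultdict

def differential_weights(pairs, centrality):
    """pairs: list of (N_i, T_i) graphs; centrality: G -> {node: value}.
    Returns W(v) = sum_i |c_{N_i}(v) - c_{T_i}(v)| over all pairs."""
    w = defaultdict(float)
    for normal, tumor in pairs:
        cn, ct = centrality(normal), centrality(tumor)
        for v in set(cn) | set(ct):
            w[v] += abs(cn.get(v, 0.0) - ct.get(v, 0.0))  # absent node => 0
    return dict(w)

# The unlabeled centralities from the text (degree variant included).
W_bw = lambda pairs: differential_weights(pairs, nx.betweenness_centrality)
W_cc = lambda pairs: differential_weights(pairs, nx.clustering)
W_deg1 = lambda pairs: differential_weights(
    pairs, lambda g: {v: g.degree(v) for v in g})

# Toy graphs standing in for one normal/tumor instance pair.
N1, T1 = nx.path_graph(5), nx.cycle_graph(5)
ranked = sorted(W_bw([(N1, T1)]).items(), key=lambda kv: -kv[1])
print(ranked[:3])  # top nodes by differential betweenness
```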
M_deg1, M_deg2: These measures are based on the degree centrality. Let Ne_G(v) denote the set of neighbors of v in G, and let Ne²_G(v) denote the set consisting of Ne_G(v) together with the neighbors of all nodes in Ne_G(v). The measure M_deg1 sorts the nodes of V in non-increasing order of the node weight defined for a node v as W_deg1(v) = Σ_i ||Ne_{N_i}(v)| − |Ne_{T_i}(v)||, whereas the measure M_deg2 employs the analogous weighting based on the second neighborhoods Ne²_{N_i}(v), Ne²_{T_i}(v).

Labeled graph-theoretical measures

We provide the definitions of the four labeled graph-theoretical measures.

M_rw: We employ proximity matrices based on random walks of the networks for this measure. We note that similar methods have been employed in many previous PPI network analysis studies [30][31][32]. Assuming the origin of the walk is node u, let Pr_G[u, v] denote the probability that the random walker is at node v after a certain number of time steps, and let Pr'_G[u, v] denote the same probability after one more time step. At each step, a fixed amount is decremented from this contribution to increase the chances of the walker remaining close to the origin. Each probability is normalized by dividing it by Σ_{v∈V} Pr_G[u, v]. The procedure is repeated until the sum of the differences of the probabilities from those of the previous time step does not exceed a predefined threshold. The measure M_rw based on random walk distances sorts the nodes of V in non-decreasing order of the node weight W_rw, defined for a node v as W_rw(v) = Σ_i PCC(Pr_{N_i}[−, v], Pr_{T_i}[−, v]), where Pr_G[−, v] denotes the column vector corresponding to v in the random walk based proximity matrix Pr_G and PCC(x, y) denotes the Pearson correlation coefficient of the vectors x, y. Pr_G[p, q] = 0 trivially if p ∉ G or q ∉ G.

M_gt: Our next measure M_gt is based on graph-theoretical distances and is defined in exactly the same way as the previous measure M_rw, except that now an entry Pr_G[u, v] of the proximity matrix Pr_G is the graph-theoretical distance between nodes u, v in G, that is, the length of the shortest path between u and v.

M_j1, M_j2: We define two measures based on Jaccard indices with respect to neighborhood overlaps. The measure M_j1 sorts the nodes of V in non-decreasing order of the node weight defined for a node v as W_j1(v) = Σ_i J(Ne_{N_i}(v), Ne_{T_i}(v)), where J denotes the Jaccard index of two sets, whereas the measure M_j2 employs the analogous weighting based on the second neighborhoods Ne²_{N_i}(v), Ne²_{T_i}(v).

Filtering based on maximum weight independent sets

The graph-theoretical measures of the previous subsections provide a node weight assignment scheme in which the weight of a node represents the importance of the corresponding protein regarding the cancer under study. However, due to the network influence based nature of some of these measures, they may be susceptible to guilt by association; a node may end up with a large weight designating it a crucial protein only because some of its neighbors have large weights. This is especially evident in measures based on betweenness centrality, random walks, or graph-theoretical distances, as the weight of a node is dependent on the weights of its neighbors in the PPI network. In order to alleviate this issue and produce only a small set of crucial proteins, we apply a filtering on the node-weighted PPI network. The network consists of all the proteins involved in all normal and tumor instances under study, and the node weights are assigned as those resulting from applying one of the mentioned graph-theoretical measures. Given a node-weighted graph G, the maximum weight independent set (MWIS) of G is the set of nodes with maximum total weight such that no two nodes are neighbors in G. We note that the computational problem is NP-complete [33]. Several greedy heuristics have been investigated in [34].
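Returning to the labeled Jaccard measures above, a minimal sketch under the same assumed aggregation: for each node, the overlap between its normal and tumor neighborhoods is summed across instance pairs, and nodes with the lowest total overlap (most rewired) rank highest.

```python
# A sketch of the labeled Jaccard measure M_j1; the per-pair aggregation
# is the same assumption as for the unlabeled measures, and nodes missing
# from a graph contribute an empty neighborhood.
import networkx as nx
from collections import defaultdict

def jaccard(a, b):
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def W_j1(pairs):
    """Sum of Jaccard overlaps between the normal and tumor neighborhoods
    of each node; the list is then sorted in non-decreasing order."""
    w = defaultdict(float)
    for normal, tumor in pairs:
        for v in set(normal) | set(tumor):
            nn = set(normal[v]) if v in normal else set()
            nt = set(tumor[v]) if v in tumor else set()
            w[v] += jaccard(nn, nt)
    return dict(w)

N1, T1 = nx.path_graph(5), nx.cycle_graph(5)
print(sorted(W_j1([(N1, T1)]).items(), key=lambda kv: kv[1])[:3])
```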
The GWMIN2 heuristic, which selects the node u in the conflict graph C that maximizes W(u)/Σ_{v ∈ N_C[u]} W(v), where N_C[u] denotes the neighborhood of u in C together with the node u itself, provides better results than the rest of the known heuristics [35]. Furthermore, it provides a theoretical guarantee that the weight of the output independent set is at least Σ_{u ∈ V_C} W(u)/Σ_{v ∈ N_C[u]} W(v), where V_C denotes the vertex set of the conflict graph C. Therefore, the filtration step is implemented via the GWMIN2 heuristic for the MWIS problem.

Results and discussion

We implemented the described measures in C++ using the LEDA library [36]. We show that, in determining the quality of a graph-theoretical measure for identifying genes related to breast cancer, the labeled/unlabeled classification is more important than the traditional local/global classification of the measures. Furthermore, we show that under this classification the unlabeled measures perform better than the labeled measures in extracting breast cancer genes via comparison of normal/tumor network instance pairs, contrary to the intuition that the latter employ more information in the form of labeled networks. Our evaluations indicate that the measure based on betweenness centrality is the best performer in terms of differential identification of breast cancer genes across all normal/tumor samples.

Evaluations with respect to known cancer databases

Comparing against known cancer databases taken as golden standards, we measure the performances based on Receiver Operating Characteristic (ROC) and Precision/Recall (PR) analysis. As the golden standard to compare against the gene list of each of the graph-theoretical measures under study, we employ two separate databases. One is the integrated breast cancer pathway from the NCBI BioSystems database [37], and the other is the Cancer Gene Census of the COSMIC database [38]. We note that whereas the NCBI BioSystems data is specific to breast cancer, the COSMIC database covers genes relevant to all types of cancer. Thus we can evaluate how well each of the defined measures can identify both breast cancer-specific genes and cancer genes not specific to any certain type. Every evaluated measure is designed so that it orders the genes from most relevant to least relevant. We extract the top k% genes from the list of each of the defined graph-theoretical measures, for every k between 1 and 100 in increments of 1. In addition to the measures under study, we introduce two control measures. The first one is the expression difference (ED) measure, which orders the genes with respect to their ED values; ED(v) for a gene v is defined as the absolute value of the difference between the number of normal and tumor samples including v as an expressed gene. The second control measure is the mutation frequency (MF), which orders the genes with respect to the number of tumor samples including them as mutated genes. Figure 2 provides the ROC curves of all the employed graph-theoretical and control measures. In the left plot, the true positives and false positives are computed based on the comparison of the top k% genes of the output list of each measure against the NCBI BioSystems database, whereas in the right plot the reference database is COSMIC. The respective PR curves are provided in Fig. 3. The corresponding AUROC and AUPR values are provided in Table 1. With respect to the ROC/PR curves and the AUROC/AUPR values, the best performing measure is M_bw.
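As a companion to the GWMIN2 description above, a minimal reimplementation of the filtration step (the ratio rule follows the reconstruction given there); it is illustrative only, with networkx standing in for the authors' LEDA-based code.

```python
# A sketch of the GWMIN2 greedy heuristic for maximum weight independent
# set: repeatedly pick the node u maximizing weight(u) / total weight of
# the closed neighborhood N[u], then delete N[u] from the remaining graph.
import networkx as nx

def gwmin2(graph, weight):
    g = graph.copy()
    independent = []
    while g.number_of_nodes() > 0:
        def score(u):
            total = weight[u] + sum(weight[v] for v in g[u])
            return weight[u] / total if total > 0 else 0.0
        u = max(g.nodes, key=score)
        independent.append(u)
        g.remove_nodes_from(list(g[u]) + [u])  # delete closed neighborhood
    return independent

g = nx.cycle_graph(6)
w = {v: v + 1 for v in g}      # toy weights 1..6
print(gwmin2(g, w))            # e.g. [5, 3, 1]: pairwise non-adjacent nodes
```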
The AUROC value of the M_bw list as compared to the NCBI BioSystems dataset is 0.77, and its AUPR value in the same setting is 0.042. With regard to the COSMIC dataset, the AUROC value of the M_bw list is 0.709, whereas its AUPR value is 0.091. It is clear that the rest of the unlabeled measures also perform better than the labeled measures for most values of k. It is interesting to note that a measure as simple as degree differentiation between normal and tumor samples across all samples, that is M_deg1, provides a better recognition of cancer-related genes than the more complicated measures making use of extra information in the form of labels, such as graph-theoretical distances or the Jaccard index based measures. Note also that all the unlabeled measures perform consistently better than the control measures ED and MF with respect to both of the employed golden standard cancer gene databases.

Evaluations based on gene ontology

An additional database is employed in setting up the next evaluation: the Gene Ontology (GO) database [39]. The GO database annotates proteins from several species with appropriate GO categories organized as a directed acyclic graph (DAG). In order to standardize the GO annotations of proteins, similar to the evaluation methods of [40][41][42], we restrict the protein annotations to level 5 of the GO DAG by ignoring the higher-level annotations and replacing the deeper-level category annotations with their ancestors at the restricted level. For a node u ∈ V, let GO(u) indicate the set of standard GO annotations of the protein corresponding to u. For a given list T of genes to be tested and a reference list R, we define a GO Consistency (GOC) score as the aggregate overlap between the GO annotation sets of the genes in T and those of the genes in R. The list T consists of the top k% of the genes provided by one of the graph-theoretical measures under study or one of the control measures; the resulting GOC scores are plotted in Fig. 4. We only show the plot where the golden standard list R is the NCBI BioSystems pathway; the plot resulting from the GOC evaluations with respect to the COSMIC database is almost the same. It is clear that the performance trends of the evaluated measures are almost the same as those under the previous metrics based on ROC and PR, although with less emphasized differences. Further detailed simultaneous inspection of the top two lists, M_bw and M_deg1, and the GO consistency analysis with respect to the NCBI BioSystems data reveals that the top contributors to the corresponding GOC scores show significant overlap. At k = 5, that is, when the top 5% of the gene lists are considered, the four genes contributing most to the GOC score in both lists, M_bw and M_deg1, are IGF1R, RAF1, YWHAB, and MYC. Note that none of these are directly listed in the golden standard gene list of the NCBI BioSystems. Among the notable GO categories they commonly or independently share with those associated with the golden standard genes are GO:0008284 (positive regulation of cell proliferation), GO:0009890 (negative regulation of biosynthetic process), GO:0016310 (phosphorylation), GO:0031325 (positive regulation of cellular metabolic process), and GO:0010648 (negative regulation of cell communication). The same analysis with respect to the COSMIC database provides CTBP2, ATF3, FHL2, and NFKB2 as shared top contributors in both the M_bw and M_deg1 lists. It is worth emphasizing that, other than the last one, none of these genes is listed in the COSMIC database itself.
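The ROC and precision/recall evaluations above reduce to scoring a ranked gene list against a golden standard set. A minimal sketch with scikit-learn, using illustrative gene names and weights (average precision serves as the AUPR estimate):

```python
# A sketch of the AUROC/AUPR evaluation of a measure's ranked gene list
# against a golden standard database; the genes and weights are toy values.
from sklearn.metrics import roc_auc_score, average_precision_score

def evaluate(ranked_weights, golden_standard):
    """ranked_weights: {gene: weight}; golden_standard: set of genes."""
    genes = list(ranked_weights)
    y_true = [1 if g in golden_standard else 0 for g in genes]
    y_score = [ranked_weights[g] for g in genes]
    return (roc_auc_score(y_true, y_score),
            average_precision_score(y_true, y_score))

weights = {"BRCA1": 0.9, "TP53": 0.8, "GAPDH": 0.1, "ACTB": 0.05}
auroc, aupr = evaluate(weights, {"BRCA1", "TP53"})
print(f"AUROC={auroc:.3f} AUPR={aupr:.3f}")
```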
Evaluations with rewired networks

Employing the criteria of the previous subsections, that is, the criteria based on the ROC analysis and the GO consistency analysis with respect to the two golden standards, we further tested the two best-performing measures, M_bw and M_deg1, on different networks. The networks under consideration are again based on the IntAct PPI network but are modified with the introduction of varying degrees of random error via rewirings: r% of the existing edges are removed randomly, and the same number of edges are inserted between random pairs of nodes not adjacent in the original network. This procedure is repeated four times, giving rise to four randomly rewired networks for each value of r = 5, 10, 15, 20. For each rewired network the rest of the framework is the same; a pair of normal and tumor networks is generated based on the expression and mutation information of each instance by taking the induced subnetwork of the rewired network, and the relevant functions M_bw, M_deg1 are computed throughout all the networks. Thus, considering the induced graphs of all the samples, 99 normal and 99 tumor, in total 3,168 graphs are generated, and the suggested measures are executed on all these graphs. The experiments on the rewired networks also serve the purpose of testing how sensitive the suggested graph-theoretical measures are to noise in the network data. We present the resulting AUROC and AUPR values in Table 2. Note that the true positive, false positive, precision, and recall values are computed as an average of the respective values attained in the four randomly rewired networks generated with the same ratio r. As expected, the general tendency for the AUROC and AUPR values with respect to both golden standard datasets is to decrease as the random rewiring ratio r increases. The slight discrepancies are due to the randomness in the rewirings. It should be noted that even though there is a performance decrease with growing random error in the network, this degradation in the performance is relatively small. For M_bw, the AUROC values decrease by only 4.5% and 4.9%, respectively, for the NCBI and COSMIC databases, even with a 20% random rewiring of the original network. The respective percentages of degradation in the AUROC values of M_deg1 are 2.2% and 3.3%. The performance degradations with respect to the AUPR values are slightly higher; for M_bw they are 7.1% and 9.9%, and for M_deg1 they are 8.1% and 6.7%. This indicates that, in addition to providing good performance, the suggested measures for cancer gene prioritization are also relatively robust to random noise in the interaction network data. A closer comparative look at the rates of degradation in the AUROC and AUPR values of M_bw and M_deg1 reveals that the former gets more error-prone as the degree of noise in the network increases. The same phenomenon is also evident in the GO consistency analysis. The plot of the GOC values of the prioritized lists of M_bw and M_deg1 on randomly rewired networks, for each ratio r, with respect to the NCBI database is provided in Fig. 5. Since the plot with respect to the COSMIC database is almost the same, we do not present it. Note again that the plotted values are averaged over the values resulting from the experimental runs on the four randomly rewired networks, for each r. As with the ROC analysis, it is clear that M_bw and M_deg1 are both quite resilient to noise in the interaction network simulated via random rewirings, with M_deg1 even more so than M_bw.
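The rewiring procedure itself is straightforward to reproduce; a sketch, with a random graph standing in for the IntAct PPI network:

```python
# A sketch of the random rewiring described above: remove r% of the edges
# and insert the same number between random pairs of nodes that are not
# adjacent in the original network.
import random
import networkx as nx

def rewire(graph, r, seed=None):
    rng = random.Random(seed)
    g = graph.copy()
    k = int(g.number_of_edges() * r / 100)
    g.remove_edges_from(rng.sample(list(g.edges()), k))
    nodes = list(g.nodes())
    added = 0
    while added < k:
        u, v = rng.sample(nodes, 2)
        # Insert only pairs not adjacent in the original network.
        if not graph.has_edge(u, v) and not g.has_edge(u, v):
            g.add_edge(u, v)
            added += 1
    return g

ppi = nx.erdos_renyi_graph(100, 0.05, seed=1)  # stand-in for the IntAct PPI
print(ppi.number_of_edges(), rewire(ppi, 20, seed=2).number_of_edges())
```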
Comparisons against an alternative gene prioritization method

We compare the results of the two best-performing measures, M_bw and M_deg1, against an alternative method for cancer gene prioritization. MUFFINN is similar to the gene prioritization methods suggested in this study both in terms of the employed data and in terms of the goal of disease gene prioritization in the presence of data from a limited number of patient samples [23]. In terms of input datasets, it also employs mutation data from patient samples and network data in the form of functional or interaction networks. The underlying hypothesis of MUFFINN is that a gene is more likely to represent a true cancer driver if it is functionally associated with other genes in an interaction network. For such a network-based mutation data analysis, two ways of taking into account mutational information among direct neighbors in the network are considered. One is to consider mutations in the most frequently mutated neighbor, and the second is to consider mutations in all direct neighbors, normalized by their degree connectivity. We call the former MUFFINN_max and the latter MUFFINN_sum. We executed both MUFFINN_max and MUFFINN_sum with the same data employed in this study; that is, the interaction network is the same IntAct network, and the samples are the same TCGA samples as those used by our graph-theoretical prioritization methods. We extract the top k% genes from the list of each of the prioritization methods under comparison, M_bw, M_deg1, MUFFINN_max, and MUFFINN_sum, for every k between 1 and 100 in increments of 1. We then apply ROC and precision/recall analysis. In the left plot of Fig. 6, the true positives and false positives are computed based on the comparison of the top k% genes of the output list of each method against the NCBI BioSystems database, whereas in the right plot the reference database is COSMIC. The numbers in parentheses indicate the AUROC values of the relevant methods. The respective PR curves are provided in Fig. 7, where the numbers in parentheses indicate the corresponding AUPR values. Our proposed graph-theoretical measure M_bw provides the largest AUROC and AUPR values with respect to both reference databases; note that the values for M_bw and M_deg1 differ slightly from those in Table 1. This is due to the fact that MUFFINN uses only genes in Consensus CDS. We filtered the reference golden standard databases to remove the rest of the genes not considered by MUFFINN for a fair comparison, which led to the slight differences in the values attained in the tests of M_bw and M_deg1.

Table 2 AUROC and AUPR values for M_bw (multicolumns in the middle) and M_deg1 (multicolumns on the right) on randomly rewired networks with rewiring ratio r = 5%, 10%, 15%, 20%. For a fixed ratio r, each value is computed as an average over four randomly rewired networks.

Filtering the M_bw list

Since M_bw is the best performer among all the employed measures, we carry out a detailed inspection of its output. The top 50 genes with respect to M_bw are listed in Table 3 in descending order of their weights, as shown in the W_bw column. We first apply the MWIS heuristic on the node-weighted PPI network to implement the filtration. The rows of Table 3 that are marked in bold correspond to filtered nodes, that is, nodes in the MWIS output.
The column marked N provides the number of normal samples including the gene as an expressed gene, the column marked T provides the corresponding number for tumor samples, the column marked M provides the number of tumor samples in which the gene occurs as mutated, the column marked GS_1 indicates whether the gene is listed in the first golden standard dataset, NCBI BioSystems, the column marked GS_2 provides the analogous information regarding the COSMIC database, and finally the last column provides the list of genes presented in the table that are in the MWIS of the W_bw-weighted PPI network and that are neighbors of the given gene in the network. As a sample, Fig. 8 provides the neighborhood subgraphs of the top four MWIS genes of the list. Each subgraph is induced by the protein corresponding to the center node and its neighbors in the PPI network. Nodes are weighted with the corresponding W_bw values. The labeled nodes in the periphery are those in the top 50 list that are filtered out from the MWIS because the central node is included in the MWIS. A literature review of the proteins resulting from the filtration, marked in bold in the table, reveals that almost all of them play significant roles in breast cancer. We provide a review of each such protein not verified by either of the employed golden standard datasets. IKBKE has been shown to be a breast cancer oncogene via integrative genomic approaches [43]. More recently, Sang Bae et al. have shown that CK2/CSNK2A1 phosphorylates SIRT6 and is involved in the progression of breast carcinoma [44]. MDFI is considered a candidate tumor suppressor gene involved in cellular and viral transcriptional regulation [45]. TK1 is a widely accepted biomarker for cancer [46]. Roosmalen et al. have suggested SRPK1 as a breast cancer metastasis determinant via a tumor cell migration screen [47]. The relationship between MAP3K1 and breast cancer, detailing the possible mechanisms by which MAP3K1 mutations affect pathways important in breast carcinoma, has been discussed in [48]. The role of PTN in the malignant progression of breast cancer has been well established since early work [49]. The role of TNFRSF1B in triple-negative breast cancer (TNBC) has been studied in [50]. It is suggested that MAP3K3 contributes to breast carcinogenesis and that MAP3K3 may prove to be a valuable therapeutic target in patients with MAP3K3-amplified breast cancers [51]. KDM1A/LSD1 is suggested as a predictive marker for breast carcinogenesis and a novel attractive therapeutic target for the treatment of ER-negative breast cancers. PIK3R3 has been identified as one of the crucial genes regulating triple-negative breast cancer cell migration [52]. It has been shown that HLA class I expression, including HLA-B, in breast cancer is significantly associated with nodal metastasis, TNM stage, lymphatic invasion, and venous invasion [53]. Furlan et al. have shown, in vitro and in vivo, an unsuspected facet of ETS1 in breast tumorigenesis: while promoting malignancy through the acquisition of invasive features, ETS1 also attenuates breast tumor cell growth and could therefore repress the growth of primary tumors and metastases [54]. Due to the NR4A1-dependent regulation of TGFβ signaling, NR4A1 is considered to promote breast cancer invasion and metastasis [55]. It has been shown that PLSCR1 binds to onzin, a negative transcriptional regulatory target of c-Myc regulating cell proliferation, which potentially implicates PLSCR1 in cancer cell survival and proliferation [56].
HSPB1 downregulation in human breast cancer cells has been shown to induce upregulation of PTEN, a tumor suppressor gene [57]. Human Pirh2 (p53-induced RING-H2 protein) is encoded by the RCHY1 gene; a decrease of Pirh2 expression in breast cancer cells results in reduced tumor cell growth via the inhibition of cell proliferation and the interruption of cell cycle transition [58]. It is suggested that TFAP2C overexpression correlates with poor overall survival after 10 years of diagnosis of breast cancer [59]. Koo et al. have proposed that RIPK3 deficiency is positively selected during tumor growth and development in breast cancer [60].

Table 3: The first column provides the UniProt ID of the gene, and the second column provides the gene name. The third column provides the weight of each gene based on M_bw. The fourth and fifth columns provide the number of instances in which each gene is expressed in the normal and tumor samples, respectively. The sixth column provides the number of mutations of a gene observed throughout all the tumor samples in the dataset. The seventh column indicates whether the gene is listed in the breast cancer pathway of the first golden standard, NCBI BioSystems, whereas the eighth column indicates whether it is listed in the second golden standard, the COSMIC database. The last column provides the set of PPI network neighbors of the corresponding gene from the top 50 list that are also in the MWIS.

In addition to these genes already verified by the relevant literature, the MWIS genes in the top 50 list contain three novel genes with indefinite associations to breast cancer: MAP3K14, MAPK8IP2, and PRKAB1. Although not verified by the literature, the M_bw measure suggests these three as candidate breast cancer genes that deserve further investigation.

Conclusion

We defined a framework to evaluate the performances of several network measures in differentially identifying cancer-related genes on tumor versus normal network instance pairs. We applied this framework to breast cancer data. Two separate classifications of the network measures are defined: local/global and labeled/unlabeled. We demonstrate that, on the available data, the local/global classification is not as reliable a source for separating the well-performing measures from the badly performing ones as the labeled/unlabeled classification. Unlabeled network measures surprisingly outperform labeled ones. The best performing measure is based on betweenness centrality, a global and unlabeled network measure. Applying the measures employed in this study to instances from various other types of cancer is part of the planned future work. Extending the defined measures to node-weighted, edge-weighted graphs, where a node weight represents the expression level of the corresponding gene and the edge weight represents the confidence attributed to the corresponding interaction in the PPI network, may also provide valuable information in terms of cancer-related gene identification. We finally note that the main purpose of the MWIS filtration is to compress the list of all scored genes into a shorter list of genes for detailed inspection, such as in the form of the literature verification done in this study. Although such a compression is not done blindly, by simply taking the top 50 genes for instance, and the effects of guilt-by-association are taken into consideration through the heuristic idea of independent sets for providing true positives, the compressed list can be susceptible to error in terms of false negatives.
Due to the nature of independent sets, at most one of two possibly high-scoring genes is provided for every interacting pair. Thus, further biological evaluations could focus on such high-scoring pairs with one gene present and the other absent in the compressed list, and on the significant genes in the gene neighborhoods, as in Fig. 8.
8,772.4
2017-11-22T00:00:00.000
[ "Biology", "Computer Science", "Medicine" ]
RefSoil+: a Reference Database for Genes and Traits of Soil Plasmids

Soil-associated plasmids have the potential to transfer antibiotic resistance genes from environmental to clinical microbial strains, which is a public health concern. A specific resource is needed to aggregate the knowledge of soil plasmid characteristics so that the content, host associations, and dynamics of antibiotic resistance genes can be assessed and then tracked between the environment and the clinic. Here, we present RefSoil+, a database of soil-associated plasmids. RefSoil+ presents a contemporary snapshot of antibiotic resistance genes in soil that can serve as a reference as novel plasmids and transferred antibiotic resistances are discovered. Our study broadens our understanding of plasmids in soil and provides a community resource of important plasmid-associated genes, including antibiotic resistance genes.

Soil is a unique and ancient environment that harbors immense microbial biodiversity. The soil microbiome has functional consequences for ecosystems, such as supporting plant growth (1, 2) and mediating key biogeochemical transformations (3). It also serves as a reservoir of microbial functional genes of interest to human and animal welfare. Within microbial genomes, important functions can be encoded on both chromosomes and extrachromosomal mobile genetic elements such as plasmids. Plasmids can be laterally transferred among community members, both within and between phyla (4-6). This causes a propagation of plasmid functional genes and allows them to spread among divergent host strains. Within microbial communities, plasmids influence microbial diversification (7) and contribute to functional gene pools (4). Plasmids can alter the fitness of individuals in a community: as they are gained or lost in the environment, they alter the functional gene content of their hosts, which can have consequences for the hosts' local competitiveness. Antibiotic resistance genes (ARGs) provide a prime example of the importance that functional genes encoded on plasmids can have. ARGs can undergo plasmid-mediated horizontal gene transfer (HGT) (8, 9). There is particular concern about the potential for spread of ARGs between environmental and clinically relevant bacterial strains. Studies of ARGs in soil have shown overlap between environmental and clinical strains that suggests HGT (10-12). For example, plasmid-encoded quinolone resistance (qnrA) in clinical Enterobacteriaceae strains likely originated from the environmental strain Shewanella algae (11). The extent of the impact of environmental reservoirs of ARGs is unknown (13), but studies have shown evidence for predominantly vertical, rather than horizontal, transfer of these genes (14). Additionally, it is speculated that rates of transfer in bulk soil are low compared to those in environments with higher population densities, such as the rhizosphere, phyllosphere, and gut microbiomes of soil microorganisms (15). In the case of antibiotic resistance, mobilization is a public health risk. Broadly, the ability of plasmids to rapidly move genes among community members is linked to diversification in complex systems, especially soils (7). Despite their ecological and functional relevance, plasmids are not well characterized in soil. Plasmids vary in copy number, host range, transfer potential, and genetic makeup (4, 16), making them difficult to assemble and characterize from complex soil metagenomes that contain tens of thousands of bacteria and archaea (17).
Plasmid extraction from soil is biased toward smaller plasmids and excludes linear plasmids (4). Additionally, mosaic gene content on plasmids makes their assembly from metagenomes difficult (4). Though new methods for plasmid assembly from metagenomes are being developed (18, 19), the resulting contigs represent a population average of plasmid gene content and size because they are very likely not derived from an individual cell. Thus, the size ranges of plasmids in soils are largely unknown but of consequence, because size is one factor reported to contribute to plasmid potential for transferability (5). Furthermore, "plasmidome" analysis and plasmid assembly from metagenomes do not provide host information. New methods, such as single-cell analysis and proximity ligation of chromosomes to plasmids prior to sequencing (20), are still expected to assemble plasmids with some degree of mosaicism. However, whole genomes sequenced from soil-associated microorganisms, inclusive of both chromosomes and plasmids, could provide plasmid host and size information. A database including this information could also indicate the extent to which functional genes encoded on plasmids overlap with the host cell chromosome(s). To aid in the study of plasmids and their associated functional genes in soil, we established a resource to compare genetic locations of functional genes in soil microorganisms. We extended the RefSoil database (21) of 922 soil microorganisms to include their plasmids. We used this database to test whether soil-associated plasmids are distinct from plasmids from a broad general database of microorganisms, RefSeq (22). We focused our comparisons on plasmid size and the content, diversity, and location of ARGs on plasmids and chromosomes. We used hidden Markov models from the ResFams database (23) to search for ARGs in the extended soil database, RefSoil+, and RefSeq. RefSoil+ provides insights into the range of plasmid sizes and their functional potential within soil microorganisms. RefSoil+ can be used to inform and test hypotheses about the traits, functional gene content, and spread of soil-associated plasmids and can serve as a reference for plasmid assembly from metagenomes. RESULTS AND DISCUSSION Plasmid characterization. RefSoil+ is an extension of the RefSoil database inclusive of soil-associated plasmids. RefSoil+ includes taxonomic information, amino acid sequences, coding nucleotide sequences, and GenBank files for a curated set of 922 soil-associated microorganisms. A total of 928 plasmids were associated with RefSoil microorganisms, and 370 RefSoil microorganisms (40.1%) had at least one plasmid (Fig. 1A). This is high compared to the proportion of plasmid-bearing noneukaryotic organisms in the general RefSeq database (34%; Mann-Whitney U, P < 0.01). The mean number of plasmids per RefSoil organism was 1.01, but the number of plasmids per organism varied greatly (variance, 3.2) (Fig. 1B). For example, strain Bacillus thuringiensis serovar thuringiensis (RefSoil 738) had 14 plasmids, ranging from 6,880 to 328,151 bp. The mean number of plasmids per RefSoil organism was also greater than for RefSeq (Mann-Whitney U, P < 0.01). The abundance of plasmids found in RefSoil genomes highlights plasmids as an important component of soil microbiomes (7, 24). Soil-associated plasmids tended to be larger than plasmids from other environments (Mann-Whitney U, P < 0.01). Plasmid size in RefSoil microorganisms ranged from 1,286 bp to 2.58 Mbp (Fig. 2A),
which rivals the range of all known plasmids from various environments (744 bp to 2.58 Mbp) (16). In the distribution of plasmid size, both the upper and lower extremes had representatives from soil. Plasmids from all habitats were previously shown to have a characteristic bimodal size distribution with peaks at 5 kb and 35 kb (15-17). In this analysis, the subset of RefSeq plasmids had a multimodal distribution (Hartigan's dip test, P < 0.01; bimodality coefficient, 0.745) with modes at 3 kb and 59 kb (Fig. 2). Soil-associated plasmids in RefSoil+ also had a multimodal size distribution (Hartigan's dip test, P < 0.05; bimodality coefficient, 0.800) but had modes at 1 kb, 3 kb, 49 kb, and 183 kb. Additionally, RefSoil+ plasmids were larger than RefSeq plasmids (Mann-Whitney U, P < 0.01) (Fig. 2). Specifically, RefSoil+ proportionally contained more plasmids of >100 kb (Fig. 2B). Thus, while soil-associated plasmids vary in size, they are, on average, large. This is of particular importance because of the established differences in mobility of plasmids in different size ranges (5). Smillie and colleagues showed that mobilizable plasmids, which have relaxases, tend to be larger than nontransmissible plasmids, with median values of 35 and 11 kbp, respectively (5). The majority of soil-associated plasmids (68.2%) were >35 kbp (Fig. 2), suggesting they are more likely to be mobile. Additionally, conjugative plasmids, which encode type IV coupling proteins, have a larger median size (181 kbp) (5). Similarly, RefSoil+ plasmids had a mode of 183 kb (Fig. 2), suggesting that these soil-associated plasmids are more likely to be conjugative. Future work should examine the genetic potential for transfer of plasmids associated with different ecosystems to test this hypothesis. Plasmid size may vary in the environment. To estimate the environmental size distributions of plasmids, we used estimates of the environmental abundance of RefSoil microorganisms (21). We focused on the soil orders previously shown to include the most RefSoil representatives (alfisols, mollisols, and vertisols) (21). We found that plasmid size distributions varied based on soil order (Kruskal-Wallis, P < 0.01) (Fig. 2C). True environmental abundance may vary based on plasmid copy number within individuals and plasmids from uncultivated microorganisms, but this estimation gives a rough idea of plasmid size distributions in the environment and provides some baseline information, because there are methodological challenges to accurately measuring plasmid size in situ (4, 18, 19). Genome size, inclusive of chromosomes and plasmids, is an important ecological trait that is difficult to estimate from metagenomes (25). Due to incomplete assemblies, genome size must be approximated based on the estimated number of individuals through single-copy gene abundance (26). Extrachromosomal elements, however, inflate these estimated genome sizes, because they contribute to the sequence information of the metagenome often without contributing single-copy genes (27). While our methodologies do not account for plasmid copy number (28), we examined the relationship between genome size and plasmid size in soil-associated microorganisms and found a weak but significant correlation (Spearman's ρ = 0.12; P < 0.001) (Fig. 3). Additionally, chromosome size was not predictive of the number of plasmids (Fig. 3; see also Fig. S1 in the supplemental material).
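The size-distribution statistics quoted above (Mann-Whitney U tests and bimodality coefficients) are straightforward to reproduce. Below is a minimal sketch in Python; the study's own analysis was done in R, and the input file names here are hypothetical placeholders, assuming one plasmid size (bp) per line.

```python
# Hedged sketch, not the authors' published workflow: compare two plasmid size
# sets with a Mann-Whitney U test and compute Sarle's bimodality coefficient.
import numpy as np
from scipy.stats import mannwhitneyu, skew, kurtosis

def bimodality_coefficient(x):
    """Sarle's bimodality coefficient; values above ~0.555 suggest >1 mode."""
    n = len(x)
    g = skew(x, bias=False)                    # bias-corrected sample skewness
    k = kurtosis(x, fisher=True, bias=False)   # excess kurtosis
    return (g**2 + 1.0) / (k + 3.0 * (n - 1)**2 / ((n - 2) * (n - 3)))

refsoil_sizes = np.loadtxt("refsoil_plasmid_sizes.txt")  # hypothetical inputs
refseq_sizes = np.loadtxt("refseq_plasmid_sizes.txt")

# One-sided test of whether RefSoil+ plasmids tend to be larger than RefSeq's
u_stat, p_val = mannwhitneyu(refsoil_sizes, refseq_sizes, alternative="greater")
print(f"Mann-Whitney U = {u_stat:.0f}, p = {p_val:.3g}")
print(f"Bimodality coefficient (RefSoil+): {bimodality_coefficient(refsoil_sizes):.3f}")
```

Sarle's coefficient exceeds roughly 0.555 (the value for a uniform distribution) when a distribution is likely multimodal, consistent with the coefficients of 0.745 and 0.800 reported above.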
As an example of this decoupling, Bacillus thuringiensis serovar thuringiensis strain IS5056 had the most plasmids in RefSoil+, but these plasmids spanned the size range of 6.8 to 328 kbp. This strain's plasmids make up 19% of its coding sequences (29), but its chromosome (5.4 Mbp) is average for soils (27). Despite the weak relationship between genome size and plasmid characteristics within these data, the plasmid database can be used to inform estimates of average genome sizes from close relatives detected within metagenomes. ARGs on soil plasmids. It is unclear whether soil ARGs are predominantly on chromosomes or mobile genetic elements. While mobile gene pools are not static, there is evidence to suggest low transfer of ARGs in soil (14, 15, 30). For example, bulk soils are not a "hot spot" for HGT because they are often resource-limited (31), and surveys of ARGs in soil metagenomes have suggested a predominance of vertical transfer, rather than horizontal transfer, of ARGs (14, 30). Using RefSoil+ sequences and ResFams hidden Markov models (HMMs) (23), we examined 174 genes encoding resistance to beta-lactams, tetracyclines, aminoglycosides, chloramphenicol, glycopeptides, macrolides, quinolones, and trimethoprim. After quality filtering, we detected 154,392 ARG sequences in RefSoil chromosomes and plasmids (Fig. 4; see also Table S1). Adding plasmids to the RefSoil database increased the number of functional gene types, or genes that have functional potential (32), represented in the database, as 7 ARGs (16S rRNA methyltransferase, AAC6-Ib, ANT6, CTXM, ErmC, KPC, and TetD) were detected only on plasmids. Notably, these functional genes would be missed if only chromosomes were considered. However, the majority of ARGs were chromosomally encoded in RefSoil+ microorganisms (Fig. 4A and B) (chromosome versus plasmid; Mann-Whitney U, P < 0.01). We next examined the genomic distributions of ARGs in RefSoil+ based on taxonomy (Fig. 4C and D). Proteobacteria had the most plasmid-associated ARGs, which has been reported previously (33). We were curious whether ARGs were more commonly detected on chromosomes than plasmids in general or whether this trend was specific to soil microorganisms. We found that the number of ARGs per genome was comparable for RefSoil and RefSeq (Mann-Whitney U, P > 0.05), but RefSoil plasmids had fewer ARGs than RefSeq plasmids (Mann-Whitney U, P < 0.05) (Fig. 5). Normalizing to individual microorganisms is biased toward chromosomes, however, because chromosomes typically have more base pairs than plasmids. To account for this, we also normalized ARGs to base pairs, and there were more ARGs in plasmids from both databases than in chromosomes (Mann-Whitney U, P < 0.05). Notably, RefSoil+ had fewer ARGs than RefSeq (Mann-Whitney U, P < 0.01) (Fig. S3). This suggests that plasmid-mediated HGT rates of ARGs may be relatively low in these soil microorganisms. We note that the RefSoil database is limited in representatives of Verrucomicrobia and Acidobacteria, which may change these estimates (21); however, this will improve as the database grows. We examined this trend for each antibiotic class and observed a greater proportion of ARG sequences on plasmids in RefSeq than in RefSoil+ for genes encoding glycopeptide and tetracycline resistance (see Fig. S2). Gibson and colleagues also found a lack of tetracycline resistance genes in soil-associated isolates compared to water- and human-associated strains (23).
By determining whether ARGs were encoded on plasmids or chromosomes, our analysis suggests that these patterns were due to chromosomal genes that are more likely vertically transferred (Fig. 5). Thus, these soil bacteria harbor relatively few ARGs on plasmids, suggesting that RefSoil+ microorganisms have limited capacity for plasmid-mediated transfer of these genes. Future assessments of functional gene content on chromosomes and plasmids together will help to delineate changes in transfer potential and reveal selective or environmental factors that impact transfer potential. While genome data from isolates cannot inform on the environmental abundance of ARGs, our data also support cultivation-independent observations of ARGs on mobile genetic elements in soil. Luo and colleagues observed a low abundance of chloramphenicol, quinolone, and tetracycline resistance genes in soil mobile genetic elements (24), and Xiong and colleagues (34) also observed a low abundance of qnr genes. Similarly, we observed fewer plasmid-encoded tetracycline resistance genes in soil-associated microorganisms than in RefSeq microorganisms (Fig. S2). We did not observe significant differences for genes encoding quinolone or chloramphenicol resistance; however, these had small sample sizes (n = 2 and 3, respectively). Mobile genetic elements in soil have also been shown to have an abundance of genes encoding multidrug efflux pumps and resistance to beta-lactams, aminoglycosides, and glycopeptides (24). Genes encoding beta-lactam and aminoglycoside resistance were comparable between RefSoil+ and RefSeq (Kruskal-Wallis, P > 0.05) (Fig. S2). However, plasmid-borne glycopeptide resistance genes were less common in RefSoil+ plasmids (Mann-Whitney U, P < 0.05). RefSoil+ applications. RefSoil+ is publicly available on GitHub (https://github.com/ShadeLab/RefSoil_plasmids). It includes an Excel file linking RefSoil+ organism taxonomy with accession numbers for corresponding chromosomes and plasmids. It also contains several FASTA files with coding DNA sequence (CDS) and amino acid sequences. These files can be downloaded directly from GitHub. RefSoil+ has been used to better estimate genome sizes in soil (27) and to estimate the distribution of arsenic resistance genes in soil-associated chromosomes and plasmids (35). Our results show that soil-associated plasmids have distinctive traits and can harbor functional genes that are not encoded on host chromosomes. RefSoil+ expands the knowledge of functional genes with potential for transfer among soil microorganisms and offers insights into plasmid size and host ranges in soil (and improves the accuracy of estimates of their genome sizes). Because it is populated by the chromosomes and plasmids of isolates, RefSoil+ links host taxonomy to plasmid content. This linkage is especially important for heterogeneous ecosystems with high microbial richness, such as soils, which rely heavily on cultivation-independent methods for observing microbially diverse populations. RefSoil+ can guide the assembly and support the annotation of plasmids from soil metagenomes and also direct hypotheses of host identity (18, 36). Notably, plasmid gene content is not static (37), and individuals can gain or lose plasmids (38, 39). Despite this, historical data on the genetic makeup and host range of plasmids can be used to better understand plasmid ecology and to serve as an important reference for quantifying how host plasmid numbers and contents change in the future.
This information contributes to what is needed to understand patterns of plasmid dissemination, both across environments and among hosts. RefSoil+ can be used as a reference database or as a database for primer design to target plasmids in the environment. Advances in microbiome sequencing methods such as presequencing proximity linkage (e.g., Hi-C [20]), long-read technology (40), and single-cell sequencing (41) could add to and leverage RefSoil+ to improve the characterization of plasmid-host relationships in soil. As movements of ARGs are observed in the clinic and the environment, RefSoil+ can also serve as a reference for comparison with legacy plasmid and chromosome contents and distributions. Novel genomes and plasmids could be added in future RefSoil+ versions, and plasmid-host relationships as well as encoded functions could be compared between cultivation-dependent and -independent methodologies. RefSoil+ provides a rich community resource for research frontiers in plasmid ecology and evolution within wild microbiomes. MATERIALS AND METHODS RefSoil plasmid database generation. Accession numbers from RefSoil genomes were used to collect assembly accession numbers for all 922 strains. Assembly accession numbers were then used to obtain a list of all genetic elements from the assembly of each strain. Because all RefSoil microorganisms have completed genomes, all plasmids present at the time of sequencing are included in the assembly. Plasmid accession numbers were compiled for each strain and added to the RefSoil database to make RefSoil+ (see Table S1 in the supplemental material). Plasmid accession numbers were used to download amino acid sequences, coding nucleotide sequences, and GenBank files. To ease comparisons between genome and plasmid sequence information, sequence descriptors for plasmid protein sequences were adjusted to mirror the format used for bacterial and archaeal RefSoil files. Accessing RefSeq genomes and plasmids. Complete RefSeq genomes and plasmids were downloaded from NCBI to compare with RefSoil. All RefSeq bacterial and archaeal protein sequences were downloaded from release 89 (ftp://ftp.ncbi.nlm.nih.gov/refseq/release). All GenBank files for complete RefSeq assemblies were downloaded from NCBI. A total of 10,270 bacterial and 259 archaeal assemblies were downloaded. GenBank files were used to extract plasmid size and to compile a list of chromosomal and plasmid accession numbers. GenBank information was read into R, and accession numbers for plasmids and chromosomes were separated. Additionally, all RefSoil accession numbers were removed from the RefSeq accession numbers. Ultimately, 10,335 chromosomes and 8,271 plasmids were collected to represent non-RefSoil microorganisms. Protein files were downloaded and tidied using the protocol for RefSoil plasmids as described above. Plasmid characterization. We summarized the RefSoil+ and RefSeq plasmids in several ways. Plasmid size was extracted from GenBank files for each RefSoil genome and plasmid. For comparison, size was also extracted from RefSeq plasmids. These data were compiled and analyzed in the R statistical environment for computing (42). The RefSoil metadata (Table S1), which contain host information for each plasmid, were used to calculate the proportion of RefSoil microorganisms with plasmids. Both the number of plasmids per organism and the number of RefSoil microorganisms with at least one plasmid were examined.
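As a rough illustration of the GenBank bookkeeping described in this section (extracting plasmid sizes and separating plasmid from chromosome accessions), a Biopython sketch follows. The published workflow used R; this Python version is an assumed equivalent, and the input file name is hypothetical.

```python
# Hedged sketch: split the records of one complete assembly into chromosomes
# and plasmids, recording accession and size for each.
from Bio import SeqIO

chromosomes, plasmids = [], []
for record in SeqIO.parse("assembly.gbff", "genbank"):  # hypothetical file
    # In complete RefSeq/RefSoil assemblies, plasmid records usually carry a
    # "plasmid" qualifier on their source feature (and note it in the description).
    source = record.features[0] if record.features else None
    is_plasmid = source is not None and "plasmid" in source.qualifiers
    entry = (record.id, len(record.seq))  # accession and size in bp
    (plasmids if is_plasmid else chromosomes).append(entry)

print(f"{len(chromosomes)} chromosome record(s), {len(plasmids)} plasmid record(s)")
for acc, size in plasmids:
    print(f"plasmid {acc}: {size} bp")
```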
Plasmid size distributions were compared using Mann-Whitney U tests, Hartigan's dip test (43), and bimodality coefficients (44). The environmental abundances of RefSoil plasmids were calculated using estimations of RefSoil organism environmental abundance (21). Only the soil orders with the most RefSoil+ representatives (alfisols, mollisols, and vertisols [21]) were included in the analysis. Antibiotic resistance gene detection. We examined the 174 ARGs from the ResFams database (23) in RefSoil+ (see Table S3). We then used HMMs from the ResFams database (23) to search amino acid sequence data from RefSoil genomes and plasmids with a publicly available custom script and HMMER (45). To perform the search, hmmsearch (45) was used with the --cut_ga and --tblout parameters. These steps were repeated for protein sequence data from the complete RefSeq database (accessed 24 July 2018). Tabular outputs from both data sets were analyzed in R. Quality scores and percent alignments were plotted to determine quality cutoff values for each gene (Fig. S1). All final hits were required to be within 10% of the model length and to have a score of at least 30% of the maximum score for that gene. When one amino acid sequence was annotated twice (i.e., for similar genes), the hit with the lower score was discarded. The final quality-filtered hits were used to plot the distribution of ARGs in RefSoil genomes and plasmids. Data availability. All data and workflows are publicly available on GitHub (https://github.com/ShadeLab/RefSoil_plasmids). A table of all RefSoil microorganisms with genome and plasmid accession numbers is available in Table S2 and on GitHub in the DATABASE_plasmids repository. This repository also hosts amino acid and nucleotide sequences for RefSoil+ genomes and plasmids. Plasmid retrieval workflows are included in the BIN_retrieve_plasmids directory. All workflows are also included on GitHub in the ANALYSIS_antibiotic_resistance repository.
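The quality-filtering rules described under "Antibiotic resistance gene detection" above (hits within 10% of the model length, scores of at least 30% of each gene's maximum, and discarding the lower-scoring hit for doubly annotated sequences) can be sketched as follows. This is an assumed reimplementation, not the authors' custom script: it parses HMMER's per-domain table (--domtblout), which carries the alignment coordinates needed for the length criterion, whereas the study used --tblout; the file name is hypothetical.

```python
# Hedged sketch of post-hmmsearch quality filtering. Column indices follow the
# HMMER 3 --domtblout format (whitespace-delimited; comment lines start with #).
from collections import defaultdict

hits = []  # (target, query, score, model_len, aligned_len)
with open("resfams_vs_refsoil.domtblout") as fh:  # hypothetical file name
    for line in fh:
        if line.startswith("#"):
            continue
        f = line.split()
        target, query = f[0], f[3]
        score = float(f[7])                         # full-sequence bit score
        model_len = int(f[5])                       # HMM length (qlen)
        aligned_len = int(f[16]) - int(f[15]) + 1   # hmm coords: to - from + 1
        hits.append((target, query, score, model_len, aligned_len))

# Keep hits covering >= 90% of the model and scoring >= 30% of the gene's best
best = defaultdict(float)
for _, q, s, _, _ in hits:
    best[q] = max(best[q], s)
filtered = [h for h in hits if h[4] >= 0.9 * h[3] and h[2] >= 0.3 * best[h[1]]]

# If one protein is annotated by two models, keep only the higher-scoring hit
per_target = {}
for h in filtered:
    if h[0] not in per_target or h[2] > per_target[h[0]][2]:
        per_target[h[0]] = h
print(f"{len(per_target)} quality-filtered ARG annotations")
```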
4,705.4
2019-02-26T00:00:00.000
[ "Environmental Science", "Biology" ]
First confirmed record of Trichobilharzia franki Müller & Kimmig, 1994, from Radix auricularia (Linnaeus, 1758) for Austria Avian schistosomes are of medical and veterinary importance as they are responsible for annually occurring cercarial dermatitis outbreaks. For Austria, so far, only Trichobilharzia szidati Neuhaus 1952 has been confirmed at the species level as a causative agent of cercarial dermatitis. Here we present the first record of Trichobilharzia franki Müller & Kimmig 1994 in Austria. The species was detected during a survey of digenean trematodes in Upper Austrian water bodies. Furthermore, we provide DNA barcodes of T. franki as well as measurements of several parasite individuals to indicate the intraspecific diversity. We also recommend the usage of an alternative primer pair, since the "standard COI primer pair" previously used for Schistosomatidae amplified an aberrant fragment in the sequence of T. franki. Overall, our study shows how limited our knowledge about the occurrence and distribution of avian schistosomes in Austria is and how important it is to acquire such knowledge to estimate ecological and epidemiological risks in the future. Outbreaks of cercarial dermatitis in Austria have been reported since 1970 (Graefe 1971). Cercarial dermatitis has been recorded in nearly all provinces (except Vorarlberg) (Auer and Aspöck 2014). In the first records of schistosomatid cercariae from the lake Neusiedler See in Eastern Austria (Graefe 1971), cercariae shed from Lymnaea stagnalis (Linnaeus 1758) were assigned to T. szidati. This assignment was based on general morphology and phototactic behavior but lacked confirmation by detailed morphological analyses of either cercariae or adult trematodes. These cercariae were shown to cause dermatitis in humans, both experimentally and indirectly through records of dermatitis in fishermen and biologists at the site of occurrence of infected snails (Graefe et al. 1973). Cercariae which were gathered in the same study and location from Planorbarius corneus (Linnaeus 1758), and assigned to Bilharziella polonica (Kowalewski 1895), were also applied to the skin of a test person but did not prove to cause dermatitis (Graefe 1971). Subsequent records of schistosomatid cercariae from different sites in eastern Austria were again assigned to T. szidati (Graefe et al. 1973), but with reservation, since experimental infections of ducks were not successful. In another study in Eastern Austria, T. szidati from L. stagnalis was morphologically confirmed by adults from successful infections of ducks, whereas cercariae from Radix balthica (Linnaeus 1758) (syn. Radix ovata (Draparnaud 1805)) were assigned to the genus Trichobilharzia but could not be determined to species level, because the snails had died before infection experiments of ducks could be started (Dvořák et al. 1999). More recently, the occurrence of T. szidati from L. stagnalis in Lower Austria was confirmed by molecular genetic analyses (Gaub 2014). Here, we present the first confirmed record of T. franki for Austria, detected during a survey of digenean trematodes in Upper Austria. Furthermore, we provide the corresponding DNA barcode sequences and compare them with already published haplotypes from different European countries. Besides the new record of T. franki, we provide an overview of the status of knowledge concerning avian schistosomes in Austria, with implications for future research.
Material and methods During a survey of digenean trematodes performed in Upper Austrian water bodies, 229 freshwater snails of different species were collected at Reichersberger Au (European Nature Reserve Lower River Inn; 48.340399 N 13.360308 E; May 27, 2019). Snails were isolated in glasses exposed to daylight and observed for cercarial release. Of those, one individual of R. auricularia (out of 10) released schistosomatid cercariae at room temperature 1 day later. The released cercariae were subsequently put into 80% ethanol for further analyses. Five specimens were measured, documented and deposited in the collection Evertebrata varia of the NHMW, and another five specimens were analyzed genetically. Since the latter specimens were completely consumed for the genetic analysis, the five preserved specimens serve as para-vouchers. Morphological analysis For determination, we measured body length and width, stem length and width, and furca length of five specimens of the released cercariae in NIS Elements (Nikon Instruments Inc., New York, USA). Microphotographs were taken with a Nikon Eclipse Ni-U microscope equipped with a Nikon DS-Ri2 microscope camera. Molecular genetic analysis DNA extraction was performed in a clean room with the DNeasy Blood and Tissue Kit (QIAGEN, Hilden, Germany) following the protocol of the manufacturer. To perform the final elution step in 15 μl AE buffer, we used QIAamp MinElute columns of the QIAamp DNA Micro Kit (QIAGEN, Hilden, Germany). The five COI sequences determined in the present study did not show any sign of nuclear pseudogenes (e.g., insertions/deletions or nonsense mutations) and were deposited in the Barcode of Life Data Systems (BOLD) and GenBank under the accession numbers CDOE-001-20 to CDOE-005-20 (BOLD) and MT763194-MT763198 (GenBank). All available COI sequences of the genus Trichobilharzia in GenBank and BOLD were batch downloaded using the package PrimerMiner (Elbrecht and Leese 2017) in R 3.6.3 (R Core Team 2018). Subsequently, sequences were loaded into Geneious 2.10.3 (https://www.geneious.com) and aligned using MAFFT (Katoh and Standley 2013). The alignment was trimmed to 665 bp, and all sequences shorter than this threshold were excluded from the final alignment. The final alignment contained 120 sequences, including the five sequences processed in this study. ModelFinder (Kalyaanamoorthy et al. 2017) implemented in PhyloSuite (Zhang et al. 2020) was used to select the best-fit model (GTR+F+I+G4) using the BIC criterion. Bayesian inference was conducted using MrBayes 3.2 (Ronquist et al. 2012) with two runs, each having four chains and run for 5 × 10⁶ generations. Trees and parameters were sampled every 250th generation. After discarding the first 25% of trees as burn-in, a 50% majority-rule consensus tree was built from the remaining trees. A median-joining haplotype network (Bandelt et al. 1999) was produced for the species T. franki using PopART 1.7 (http://www.popart.otago.ac.nz) to illustrate the variability of the Austrian specimens among other specimens from Europe. For this, COI sequences of T. franki from different European countries (accession numbers HM131197-HM131205, FJ174530) were included in the alignment. Due to the different lengths of the sequences, we trimmed this final alignment to a length of 682 bp. Networks were graphically processed in InkScape 0.92 (https://inkscape.org). The haplotypes were classified according to their collection countries.
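A small sketch of the alignment-length filter described above (trim to 665 bp and drop shorter sequences) is given below. The original step was performed interactively in Geneious; this Biopython version is an assumed equivalent, the input file name is hypothetical, and we interpret "shorter than this threshold" as the ungapped sequence length and "trimmed to 665 bp" as keeping the first 665 alignment columns.

```python
# Hedged sketch of the alignment trimming/length filter, under the assumptions
# stated in the text above.
from Bio import SeqIO

THRESHOLD = 665
kept = []
for rec in SeqIO.parse("trichobilharzia_coi_aligned.fasta", "fasta"):  # hypothetical
    # exclude sequences whose ungapped length falls below the 665 bp threshold
    if len(str(rec.seq).replace("-", "")) >= THRESHOLD:
        kept.append(rec[:THRESHOLD])  # keep the first 665 alignment columns

SeqIO.write(kept, "coi_665bp.fasta", "fasta")
print(f"{len(kept)} sequences retained at {THRESHOLD} bp")
```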
Haplotype diversity (Hd) and nucleotide diversity (π) were calculated in DnaSP v5 (http://www.ub.edu/dnasp; Librado and Rozas 2009). A comparison of our results with already published measurements of Trichobilharzia spp. (Müller and Kimmig 1994; Podhorský et al. 2009; Jouet et al. 2010) revealed variations within and between species and highlights the difficulties and limitations of a solely morphological determination (Table 1). Although the measured body lengths of our specimens fall within the range of measures for T. franki from the mentioned previous studies, there is an overlap in size between all species (Table 1). Nevertheless, most of our measurements fall within the range of the measurements of the original species description of T. franki (Table 1; Müller and Kimmig 1994). Molecular genetic analyses When sequencing the 3′ end of the amplicon generated from the five cercariae analyzed using the primer pair previously used in other studies (SchistoCox1-5′/SchistoCox1-3′), we faced difficulties: the reverse read (sequencing primer SchistoCox1-3′) delivered a mixed sequence downstream of site 1000 in the alignment; there were many, albeit small, double peaks, which occurred also after repetitions of the sequencing. Although no similarity was found in this section with the expected reference sequences, a BLAST search of this aberrant fragment showed that it was clearly similar to T. franki (97% identity score). Further examination revealed that this result was due to an additional internal primer binding site of the primer SchistoCox1-3′ in the 5′ part of the COI gene (sites 123 to 142 of the alignment). Consequently, in the sequencing reaction, two amplicons of different lengths were sequenced simultaneously. We overcame this problem by designing a new reverse primer (ZDOE-COI-rv) that excludes this unintended unspecific binding. This primer was used both as PCR primer and as sequencing primer. The resulting amplicon (amplified in combination with primer SchistoCox1-5′) is 67 bp shorter (1058 bp). The BI tree revealed two main clusters of Trichobilharzia spp. The first cluster contains T. szidati, T. stagnicolae, and T. anseri and three clades of undetermined sequences of Trichobilharzia. The second main clade includes sequences of T. franki, Trichobilharzia querquedulae McLeod 1937, Trichobilharzia physellae (Talbot 1936) McMullen & Beaver 1945, and three clades of undetermined sequences of Trichobilharzia. The five specimens processed in this study cluster together with sequences from GenBank determined as T. franki (Fig. 1a). Most importantly, the sequences of this study are clearly distinguished from T. szidati, until now the only species of Trichobilharzia known from Austria. The second species expected in Austria, T. regenti, falls outside the two main clades of Trichobilharzia spp., as does Anserobilharzia brantae (Fig. 1a). To summarize, the results of the COI sequence comparisons clearly confirmed the tentative morphological assignment to T. franki. The network in Fig. 1 shows the genetic diversity of the COI sequences determined in the present study in more detail. Among the five Austrian specimens of T. franki, two COI haplotypes were present, separated by three mutation steps (Fig. 1b). The additional seven COI sequences of T. franki from other European countries contained six haplotypes (Fig. 1b). We identified a considerably high haplotype diversity (Hd 0.97) and a nucleotide diversity (π) of 0.04.
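For reference, the two diversity indices just quoted (computed in the study with DnaSP) have simple closed forms: Hd = n/(n−1) · (1 − Σ p_i²) over haplotype frequencies p_i, and π is the mean number of pairwise nucleotide differences per site. A self-contained Python sketch with toy data follows; the example sequences are hypothetical, not the study's data.

```python
# Minimal sketch of haplotype diversity (Hd) and nucleotide diversity (pi)
# for an alignment of equal-length sequences.
from collections import Counter
from itertools import combinations

def haplotype_diversity(seqs):
    n = len(seqs)
    counts = Counter(seqs)  # identical sequences = same haplotype
    return (n / (n - 1)) * (1 - sum((c / n) ** 2 for c in counts.values()))

def nucleotide_diversity(seqs):
    n, L = len(seqs), len(seqs[0])
    pairs = list(combinations(seqs, 2))
    diffs = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs)
    return diffs / (len(pairs) * L)  # mean pairwise differences per site

seqs = ["ACGTACGT", "ACGTACGA", "ACGAACGT", "ACGTACGT"]  # toy alignment
print(f"Hd = {haplotype_diversity(seqs):.3f}, pi = {nucleotide_diversity(seqs):.4f}")
```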
The similarities among and within the sequences of the different countries are low, and no clearly separated geographic haplogroups were distinguished. Furthermore, no haplotype sharing among individuals from different countries was observed. Discussion As outlined above, the knowledge of Trichobilharzia spp. occurring in Austria is low (Dvořák et al. 1999; Sattmann et al. 2004; Hörweg et al. 2006) despite their medical and veterinary importance. Thus, it is essential to determine the diversity of avian schistosomes, as well as their distribution, occurrence, and intermediate and final host range. In this study, we present the first confirmed record of T. franki in Austria. Species assignment was straightforward, combining DNA sequence comparisons with morphological features. It has long been suspected that, apart from T. szidati, other representatives of the genus also occur in Austria (Auer and Aspöck 2002). Trichobilharzia franki was first described from Southern Germany (Müller and Kimmig 1994) and has subsequently been reported from many European countries (reviewed in Jouet et al. 2010) but had not yet been detected in Austria. The first record of T. franki in Austria shows the importance of DNA-based methods in combination with classical morphological analyses. Morphological diversity of T. franki is reflected in the variation of body measurements evaluated in different studies (Müller and Kimmig 1994; Podhorský et al. 2009; Jouet et al. 2010) and was found also in the present study (Table 1). The variation in measurements might to some extent be caused by the contractility of the body of cercariae of avian schistosomes (Podhorský et al. 2009; Jouet et al. 2010). Furthermore, measurements might also vary because of different fixation and preservation conditions. These factors may, in addition to the true morphological diversity, hamper a reliable identification (Jouet et al. 2010), which is why we do not suggest species assignment based solely on morphological characters. Also, environmental and/or host-related factors may influence the size of cercariae, e.g., temperature (Dönges 1964) or snail age (Neuhaus 1952). Podhorský et al. (2009) conducted morphological comparisons between cercariae of T. szidati, T. franki, and T. regenti and concluded that the species could be differentiated only by the specific distribution of sensory papillae, but not by cercarial body dimensions. In the present study, due to our workflow of sampling and studying, we were not able to study the papillae. Therefore, to enable a reliable taxonomic assignment, DNA sequence analysis and comparison with published sequences is crucial. Sequences that were derived from determined laboratory strains are important as they allow a reliable assignment based on DNA sequences. To date, there are only very few published COI sequences (n = 10) of T. franki available, while mostly sequences of the ribosomal RNA gene cluster (mostly internal transcribed spacers 1 and 2 (ITS1 and ITS2)) were analyzed. Nevertheless, studies where COI and ITS sequences were generated from the same specimens (Jouet et al. 2010) enable a comparison with a large database of T. franki sequences in GenBank.
Fig. 1 Phylogenetic relationships between species of Trichobilharzia. a Bayesian inference (BI) tree including 120 COI sequences of different species of Trichobilharzia. Only Bayesian probabilities ≥ 0.9 are given next to the nodes. The clade containing sequences of Trichobilharzia franki processed in this study is colored. b Median-joining network (MJ) of COI sequences of T. franki from different European countries. Haplotypes are constituted of one to four samples (see legend). Sequences within the haplogroups are separated by one to three mutation steps. Mutation steps are indicated by vertical lines. Black dots represent missing haplotypes.
Thus, we can trust the assignment of COI sequences, although the number of comparison sequences is comparatively low. We found a high haplotype diversity of 0.97 among the analyzed sequences of T. franki, similar to the diversity detected in previous studies for the species T. franki, T. szidati, and T. regenti (Lopatkin et al. 2010; Korsunenko et al. 2012). The population structure of parasites is often shaped by the migration of the final hosts (Jarne and Theron 2001), which are anatid birds in the case of Trichobilharzia spp. Avian mobility might enable gene flow between parasite populations even over large distances (Korsunenko et al. 2012). The presence of two haplotypes in cercariae from one snail is an interesting finding, which suggests that snail individuals may be infected by several miracidia of different origins, which then co-exist and propagate within one individual. The establishment of a DNA barcode reference database will be of great advantage for the identification of these avian schistosomes. With one exception (Gaub 2014), molecular genetic methods had never been applied in previous studies on Austrian schistosomatid cercariae. On an international scale, available data are also scarce. The problems we faced with unspecific primer binding exemplify the problems that widely used "universal" primers may cause. Sequence comparison showed that the unspecific binding site had eight mismatches (out of 20 sites) in T. franki and still worked well as a sequencing primer (in addition to the correct binding site). Once both primer binding sites were detected, both sequences could be readily distinguished, as the sequence derived from the internal site produced markedly higher peaks, resulting in a clearly readable sequence. The more reliable DNA barcodes are available, the better the prerequisites for further studies: for straightforward taxonomic assignment, for detection of presumably cryptic species, for additional primer design, as well as for critical data evaluation. At the present state of knowledge, it is not recommended to determine species of Trichobilharzia based on presumed host specificity, especially since reports in the literature regarding host range are not consistent. The specimens identified in the present study parasitized R. auricularia, the type host of T. franki. Yet, in some studies, R. labiata (Rossmässler, 1835) (syn. Radix peregra [O. F. Müller 1774]) (Aldhoun et al. 2009; Jouet et al. 2010) and even L. stagnalis (Rudolfovà et al. 2005) have also been reported as intermediate hosts of T. franki. Nevertheless, these results must be considered with caution. High genetic differences were detected by Jouet et al. (2010) between specimens of T. franki obtained in France from R. labiata and from R. auricularia, which suggested that the specimens obtained from R. labiata were a hitherto undetected cryptic species. Furthermore, the considerably high number of undetermined clades containing species of Trichobilharzia found in the BI tree (Fig. 1a) suggests that more cryptic species exist.
In general, it has been shown that more cryptic species tend to be uncovered among trematodes compared to other helminth taxa (Pérez-Ponce de León and Poulin 2018), which might not only be due to the frequent lack of suitable morphological structures or their complex life cycles but also to the way in which trematode species are described. In a previous study in Austria, an unidentified species of Trichobilharzia was obtained from R. balthica (syn. R. ovata) (Dvořák et al. 1999). Two scenarios are plausible regarding the assignment of this unidentified species: (1) it was T. franki, which could not be identified properly by morphology, or (2) it was another previously unknown species or a species so far not detected in Austria. Therefore, to aim at a complete inventory of Trichobilharzia spp. in Austria, intensive sampling of a broader geographic range covering the known distribution and considering more potential intermediate hosts is required.
Table 1 Measurements (in μm, means) of T. franki of this study compared with measurements of T. franki, T. szidati, T. regenti, and an undetermined species from previous studies (Müller and Kimmig 1994; Podhorský et al. 2009; Jouet et al. 2010).
There is a high probability that T. regenti, which also uses Radix spp. as intermediate hosts, occurs in Austria as well. Conclusion Besides the first report of T. franki in Austria, we provided further insights into intraspecific morphological and genetic diversity. Moreover, the study showed that the analysis of digenean trematodes with complex life cycles is extremely challenging, both morphologically and genetically. Therefore, an integrative taxonomic approach is essential to identify species and to assess their distribution, since more species can be expected to occur in Austria. As soon as a database including determined specimens is established, environmental DNA (eDNA) can be used for the monitoring of cercarial dermatitis outbreaks in addition to conventional methods. Eventually, putting all the information from various countries together will allow accurate assessment of distribution ranges and species diversity. The more we know, the better we are prepared regarding epidemiological and ecological risks, efficient control of dermatitis outbreaks, and potential future changes in parasite composition.
4,298
2020-11-05T00:00:00.000
[ "Environmental Science", "Biology" ]
Studies on Annihilation and Coreactant Electrochemiluminescence of Thermally Activated Delayed Fluorescent Molecules in Organic Medium Very recently, great research interest has arisen in electrochemiluminescence (ECL) featuring thermally activated delayed fluorescence (TADF) properties, i.e., TADF-ECL. It is appealing since earlier reports on this topic confirmed that this strategy has great potential for achieving all-exciton-harvesting ECL efficiency under electrochemical excitation, which is a breakthrough in the field of organic ECL. However, organic-phase electrochemistry and ECL studies surrounding TADF-ECL are still extremely rare. In particular, the ECL spectra of previously reported TADF emitters are still very different from their PL spectra. In this work, we systematically measure and discuss the liquid electrochemistry and ECL behavior of two typical TADF molecules in organic medium. Most importantly, we verify for the first time that their ECL spectra (in the coreactant ECL mode) are identical to their PL spectra counterparts, which confirms the effectiveness of TADF photophysical properties in the coreactant ECL mode in practice. Introduction It is well known that electrochemiluminescence (ECL) is a promising analytical method, in which photon signals are electrogenerated by electrochemical excitation in electrolytic cells [1,2]. Up to the present, a great number of applications have been realized for such ECL techniques, such as ultrasensitive life analysis, environmental analysis, high-resolution activity mapping on nanocatalysts and, more recently, single-photon-level tissue and cell imaging, etc. [1-7]. Despite these successes, it is obvious that the scarcity of qualified ECL luminophores largely hampers the development of ECL. Until now, except for some peculiar cases [8], almost all ECL applications use the state-of-the-art tris(2,2′-bipyridyl)ruthenium(II), Ru(bpy)3²⁺, or its analogues [2,7,9]. This system has been widely applied since it possesses satisfactory electrochemical stability, high ECL efficiency and a mature labelling methodology towards various analysis targets [9]. However, there are still many drawbacks to be resolved for this system, such as the high potential required for ECL, the difficulty of tailoring its ECL spectrum, its high cost and the very limited room for further enhancement of the ECL efficiency (Φ_ECL). To accelerate the development of ECL, it is expected that the pool of qualified ECL luminophores will be enriched; in this way, much better overall ECL performance is anticipated. Nowadays, alongside such demands, a great deal of attention is focused on developing advanced organic ECL luminophores [10-15]. In particular, two strategies with great development potential have been launched [16], namely: (i) organic aggregation-induced ECL (AIECL) [17] and (ii) thermally activated delayed fluorescent ECL (TADF-ECL) [18,19]. As for AIECL [20], it is appealing since the notorious aggregation-caused quenching (ACQ) limiting issue is largely restrained or even removed by the aggregation-induced emission (AIE) effect [21], which guarantees the achievement of a much higher Φ_ECL [10,12,17,22-24].
For instance, a series of organic AIECL systems has been developed, e.g., tetraphenylethylene (TPE)-based nanocrystals or polymer dots [17,24], carbon dots [23], and silole- or triphenylporphyrin-based compounds [10,22]. Most of them showed excellent ECL stability and satisfactory Φ_ECL, and some were even applied in various biosensing applications with satisfactory results [14,15]. As for the topic of TADF-ECL [18,19], it is significant in that such a strategy is characterized by all-exciton harvesting for ECL emission. Compared to every reported organic ECL system [17,22,25-27], the theoretical Φ_ECL of organic TADF-ECL systems is increased by a factor of four, i.e., from 25% to 100%. This is reasonable since common organic ECL systems are, photophysically, fluorescent. In that case, merely the electrochemically generated singlets (ca. 25% in total) are harvested for ECL emission. In the best condition, i.e., when the PL quantum efficiency (Φ_PL) of those ECL fluorophores equals 100%, the maximized Φ_ECL of such fluorescent-type organic ECL systems can reach a value of 25%. By contrast, TADF-ECL outperforms in Φ_ECL since all those dark triplets (~75% in total under electrochemical excitation) [18] can become emissive through the delayed fluorescence route, i.e., DF-ECL (Scheme 1). Such processes can be efficient since the exchange energy (ΔE_ST) between the first singlet level (S1) and the first triplet level (T1) is low enough (0.1-0.3 eV) for TADF luminophores [28], which guarantees a fast and efficient reverse intersystem crossing (RISC) process from T1 to S1 (Scheme 1). In this case, the maximized Φ_ECL of the TADF-ECL system can be as high as 100%.
Scheme 1. The schematic TADF-ECL mechanism (a) and the chemical structures of the TADF molecules used in this work (b). (The abbreviations S1, T1, S0, ISC, RISC, PF-ECL and DF-ECL correspond to the first singlet level, the first triplet level, the ground-state singlet level, the intersystem crossing process, the reverse intersystem crossing process, prompt fluorescent ECL and delayed fluorescent ECL, respectively.)
To date, some progress has been achieved in the topic of TADF-ECL, such as the achievement of highly efficient TADF-ECL in organic medium [18,19,29-31], extremely narrow ECL spectra [32], reliable TADF-ECL in aqueous medium [12,33] and TADF-ECL sensing applications [19,34]. In 2014, Ishimatsu et al. reported the first annihilation TADF-ECL, in which the TADF molecule 1,2,3,5-tetrakis(carbazol-9-yl)-4,6-dicyanobenzene (4CzIPN) [35] was used as the ECL luminophore in organic medium [18]. Under high-frequency step-potential driving, it achieved a Φ_ECL of 47 ± 6.0% in dichloromethane (DCM) medium, which approached the corresponding Φ_PL, i.e., 54%. The ECL spectra of the annihilation ECL system containing 4CzIPN resembled the PL spectra of 4CzIPN. Together, these results confirmed the achievement of TADF-ECL in the annihilation ECL mode [18,29]. To achieve practical sensing applications of TADF-ECL in real scenarios, our group restarted the study of TADF-ECL in 2021 [12,19,30,33] and focused on the possibility of coreactant TADF-ECL. As confirmed, stable and efficient oxidative-reduction polymer TADF-ECL [19] or reductive-oxidation polymer TADF-ECL [30] was successfully constructed by using a TADF-polymer-modified glassy carbon electrode (GCE) as the light-emissive surface-modified working electrode of the coreactant TADF-ECL system.
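Before moving on, the exciton bookkeeping behind the 25% versus 100% ceilings discussed above can be stated compactly. The notation below (singlet fraction χ_S, triplet fraction χ_T, RISC yield Φ_RISC) is ours, not the source's:

```latex
% Exciton statistics under electrochemical excitation:
% \chi_S \approx 0.25 (singlets), \chi_T \approx 0.75 (triplets).
\Phi_{\mathrm{ECL}}^{\mathrm{fluor}} \le \chi_S\,\Phi_{\mathrm{PL}} \le 25\%,
\qquad
\Phi_{\mathrm{ECL}}^{\mathrm{TADF}} \le \left(\chi_S + \chi_T\,\Phi_{\mathrm{RISC}}\right)\Phi_{\mathrm{PL}} \le 100\%,
\qquad
k_{\mathrm{RISC}} \propto \exp\!\left(-\frac{\Delta E_{ST}}{k_{\mathrm{B}}T}\right).
```

With ΔE_ST of only 0.1-0.3 eV, the Boltzmann factor allows Φ_RISC to approach unity at room temperature, which is what pushes the TADF ceiling from 25% toward 100%.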
Either of those polymer TADF-ECL systems showed a much higher Φ_ECL (nearly four-fold) compared to that of the traditional fluorescent ECL counterparts. Under successive electrochemical driving (step potential or linear CV scanning), the ECL stability of those TADF-ECL systems is also satisfactory. Solid-state TADF-ECL sensing of L-cysteine was further performed, showing ultralow detection limits, high sensitivity and good specificity [19]. Very recently, we also realized stable and efficient aqueous TADF-ECL [12,33,34], which is significant for achieving practical TADF-ECL sensing applications in the life sciences in the near future. The corresponding methods fall into two general categories. Firstly, we developed TADF-ECL in aqueous media by using aggregation-induced delayed fluorescence (AIDF) luminogens, also called AIDF-ECL [12]. Since AIDF-ECL integrates the merits of TADF and the AIE effect of organic luminophores in aqueous media, the Φ_ECL of such an AIDF-ECL model system distinctly outperformed that of a TPE-based AIECL counterpart. Secondly, we explored the nanoencapsulation strategy to prepare air-stable and water-soluble TADF-ECL luminophores [33]. Importantly, the oxygen quenching effect [36] on the TADF-ECL system in aqueous medium is well removed by this method. Using those water-soluble TADF molecular nanoparticles as ECL luminophores, stable and efficient aqueous TADF-ECL was realized, irrespective of annihilation or coreactant ECL [33]. In early 2022, using such a nanoencapsulation strategy, we reported the first aqueous coreactant TADF-ECL dopamine biosensing application, which showed satisfactory linearity, selectivity, repeatability and detection limits [34]. Despite this progress in the topic of TADF-ECL, it should be noted that the understanding of TADF-ECL mechanisms is still in its infancy. In particular, for the coreactant ECL mode, we found that the ECL spectra of the reported TADF luminophore/coreactant couples do not fully resemble their PL spectra counterparts [12,19,30,31,33,34]. This is plausible in that, in some cases, more complicated mechanisms could be involved, e.g., exciplex formation and organic long-persistent emission [31]. To confirm coreactant TADF-ECL definitively, it is believed that the corresponding ECL and PL spectra of a given TADF luminophore/coreactant couple should be the same. Moreover, the questions concerning the relationship between the electrochemical properties of TADF luminophores and their ECL efficiency, stability and potential should be further studied. To answer these questions, it is necessary to further perform electrochemical and ECL studies of typical TADF molecules in an organic medium, which is the most simplified condition for disclosing such questions. Herein, we present detailed studies on the annihilation and coreactant ECL of two typical TADF molecules in DCM medium. First of all, basic photophysical and electrochemical measurements are performed to clarify their intrinsic physical and electrochemical properties. After that, annihilation and coreactant ECL studies are conducted, including the evaluation of Φ_ECL, ECL stability, potentials and, most importantly, their ECL spectra. Very meaningfully, the ECL and PL spectra of those two TADF luminophores in the coreactant ECL mode are identical, which confirms the TADF emission nature of those luminophores in the coreactant ECL mode.
Moreover, some peculiar clues were discovered for understanding the determining factors of Φ_ECL and the ECL potentials of coreactant TADF-ECL, which is very meaningful for enriching our understanding of TADF-ECL and accelerating its development towards higher performance and better applications. Measurements of Photophysics All photophysical studies were performed on DCM solutions containing the TADF molecules at a concentration of 25 µM. The absorbance was measured by a UV-Vis spectrophotometer (Shimadzu UV-1780, Shimadzu, Kyoto, Japan). The steady-state PL spectra and transient PL decay curves of those samples in solution were recorded on an Edinburgh FLS1000 spectrofluorometer, in which an Xe2 xenon lamp and a picosecond pulsed LED (EPLED-365) served as the light sources (365 nm as the excitation wavelength) where required. The PL transient decay curves were fitted by bi-exponential functions, as is commonly done to derive the prompt and delayed fluorescent emissions of TADF emitters [38] (a computational sketch of this fitting is given below). The absolute PLQY of those samples in the ambient atmosphere was measured by an integrating sphere coupled with the Edinburgh FLS1000. Cyclic Voltammetry (CV) and ECL Measurements CV experiments were measured by the CHI 660B electrochemistry workstation (CH Instruments Inc.). Prior to CV measurements, the glassy carbon electrode (GCE) working electrode (4 mm in diameter) was routinely cleaned [12]. A common three-electrode configuration was used for those CV studies, in which a GCE, a Pt wire and an Ag wire were the working electrode, counter electrode and quasi-reference electrode, respectively (0.1 mM 4CzIPN or 0.1 mM BPAPTC dissolved in DCM with 0.1 M tetra-n-butylammonium hexafluorophosphate (TBAPF6) as supporting electrolyte). The CV tests used a scanning rate of 100 mV/s and ferrocene (Fc)/ferrocenium (Fc⁺) as the calibrating reference [39]. ECL studies were measured by the MPI-EII ECL detection system (Remex Electronic Instrument Ltd. Co., Xi'an, China), in which the configuration and conditions of the electrolytic cells are the same as those used in the CV studies. More detailed descriptions were presented in our earlier work [19]. The PMT voltage and scanning rate were set at 850 V and 100 mV/s, respectively. For the oxidative-reduction ECL and reductive-oxidation ECL studies in DCM, the TADF luminophores were used at a concentration of 0.1 mM, and the coreactant was tri-n-propylamine (TPrA) (40 mM) or benzoyl peroxide (BPO) (25 mM), respectively. For determining the relative Φ_ECL, these coupled coreactant ECL systems were measured by the general method [19,40], i.e., using the sample of Ru(bpy)3²⁺ (0.1 mM)/TPrA (40 mM) in acetonitrile solution containing 0.1 M TBAPF6 as the reference (a sketch of this ratio calculation is given at the end of this section). The ECL spectra were measured by the Edinburgh FLS1000, in which the electrolytic cells were placed into the sample chamber of the FLS1000 and electrochemically triggered by the CHI 660B electrochemistry workstation. Photophysical Properties of TADF Luminophores in DCM Previously, 4CzIPN [35] and BPAPTC [37] were confirmed to be satisfactory TADF emitters and were well applied in organic light-emitting diodes. As typical charge-transfer-type emitters [41], the photophysical properties of these TADF molecules are largely influenced by the solvent in which they are dissolved. Since the subsequent CV and ECL studies are performed in the DCM solvent, we measured the corresponding photophysical properties in DCM.
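As referenced above, the transient decays are fitted with a bi-exponential model, I(t) = A₁·exp(−t/τ_pf) + A₂·exp(−t/τ_df), and the prompt/delayed ratio Φ_pf/Φ_df is commonly taken from the fractional areas A₁τ_pf and A₂τ_df [38]. The sketch below is an assumed workflow on synthetic data roughly resembling 4CzIPN's reported lifetimes, not the FLS1000 vendor fitting routine:

```python
# Hedged sketch: bi-exponential fit of a PL decay and the prompt/delayed
# fractions from the fitted amplitudes and lifetimes. Data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

t = np.linspace(0, 8000, 2000)  # time axis in ns
y = biexp(t, 1.0, 24.0, 0.012, 1600.0) + np.random.normal(0, 1e-4, t.size)

popt, _ = curve_fit(biexp, t, y, p0=(1.0, 20.0, 0.01, 1000.0))
a1, tau1, a2, tau2 = popt
area_pf, area_df = a1 * tau1, a2 * tau2  # areas under each component
total = area_pf + area_df
print(f"tau_pf = {tau1:.0f} ns, tau_df = {tau2:.0f} ns")
print(f"Phi_pf/Phi_df = {100*area_pf/total:.1f}%/{100*area_df/total:.1f}%")
```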
As shown in Figure 1a,c, the absorption spectra of those two emitters are basically unchanged as compared to the earlier reports measured in toluene [42]. Either of them shows its intrinsic absorption and clear charge-transfer absorption features in the long-wavelength range, indicating that the charge-transfer features of these two molecules are maintained in DCM. As for the steady-state PL spectra, some extent of redshift and broadening is observed; e.g., the PL emission peaks (λ_PL) of 4CzIPN and BPAPTC in DCM are located at 543 and 585 nm, respectively, rather than at 507 and 520 nm in toluene [35,37]. Such a difference is commonly reported for TADF materials and is ascribed to the polarity effect of the solvent medium [41]: the PL spectra of TADF luminophores are gradually redshifted with increasing polarity of the solvent. Figure 1b,d show their respective PL transient behaviors, which display the typical two-component PL transient behavior of TADF emitters [35]. According to the well-known photophysical theory of TADF emission [28,38,43], the corresponding prompt and delayed fluorescent lifetimes (τ_pf, τ_df) and ratios (Φ_pf/Φ_df) were calculated, i.e., 24 ns, 1603 ns and 39.1%/60.9% for 4CzIPN, and 30 ns, 328 ns and 63.5%/36.5% for BPAPTC, respectively, surely confirming the TADF emission features of those two samples in DCM. The measurement conditions of those PL transient experiments are the same as those used in the subsequent CV/ECL studies. Under electrochemical driving in those DCM media, it is anticipated that these two luminophores will emit light via the same TADF mechanism. Electrochemistry CV was performed for those two molecules in DCM solvent to determine their redox properties, in which a routine three-electrode structure was used, i.e., a GCE as the working electrode, a Pt wire as the counter electrode and an Ag wire as the quasi-reference electrode in DCM media containing 0.1 M TBAPF6 as a supporting electrolyte, with a scanning rate of 100 mV/s.
Electrochemistry

CV was performed for the two molecules in DCM to determine their redox properties, using the routine three-electrode configuration described above (a GCE working electrode, a Pt wire counter electrode and an Ag wire quasi-reference electrode in DCM containing 0.1 M TBAPF₆ as the supporting electrolyte, at a scan rate of 100 mV/s). As shown in Figure S1a, 4CzIPN displayed a clear reversible cathodic wave with an onset reduction potential at −1.29 V (vs. Ag/Ag⁺) but an irreversible anodic wave with an onset oxidation potential at +1.28 V (vs. Ag/Ag⁺), analogous to earlier results measured in acetonitrile [33]. This indicates that the electroreduction of 4CzIPN is reversible and stable, whereas its electrooxidation is unstable. As mentioned previously [18,27], the electrochemical oxidation of the carbazole units is likely involved in this electrooxidation process. The CV behavior of BPAPTC in DCM (Supplementary Figure S1b) is different: here the electrooxidation of BPAPTC becomes reversible, showing a clear reversible wave with an onset oxidation potential at 0.66 V (vs. Ag/Ag⁺), whereas no distinct electrochemical reduction wave is observed, indicating that the corresponding electroreduction process is unstable.

Annihilation and Coreactant TADF-ECL

Annihilation ECL studies of the two TADF luminophores were performed first. As shown in Figure 2a, under linear CV scanning, intense ECL emission is observed for 4CzIPN under both anodic and cathodic bias, indicating that both the electrooxidation and electroreduction radical products of 4CzIPN, i.e., 4CzIPN•+ and 4CzIPN•−, are sufficiently stable to allow exciton formation followed by ECL emission. The ECL onset potentials in both the anodic and cathodic scan ranges resemble the redox potentials shown in Figure S1a. We also note that the anodic ECL intensity of 4CzIPN is higher than its cathodic ECL intensity, consistent with its CV behavior; it is therefore very likely that 4CzIPN•− is more stable than 4CzIPN•+. Under step-potential operation (±1.6 V, 1 Hz), the ECL intensities at both +1.6 V and −1.6 V are distinctly enhanced, by ca. 3-fold, and become comparable. Compared with linear CV scanning (Figure 2a), step-potential driving (Figure 2b) shortens the waiting time of the radical intermediates prior to collision, which accounts for this enhancement of the ECL intensity. BPAPTC behaves differently in that its cathodic ECL intensity is dramatically higher than its anodic ECL intensity (Figure 2c). Compared with 4CzIPN (Figure 2a), the anodic and cathodic ECL onset potentials of BPAPTC (Figure 2c) shift to ca. 0.6 V and −1.8 V, respectively, again consistent with its electrochemical redox characteristics (Supplementary Figure S1b). Under step-potential driving between +0.8 V and −2 V (1 Hz), the cathodic ECL intensity at −2 V is also dramatically enhanced, while the anodic ECL intensity at +0.8 V remains weak (Figure 2d).
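A quick plausibility check on the annihilation route is to compare the enthalpy released upon radical-ion annihilation with the emitter's singlet energy. The sketch below does this for 4CzIPN using the onset potentials and λ_PL reported above; note that the 0.16 eV entropy correction is the value conventionally used in the ECL literature, and approximating E(S1) by the PL peak is a deliberate simplification rather than the authors' procedure.

```python
# Energy-sufficiency check for 4CzIPN annihilation ECL (a back-of-the-envelope
# sketch; inputs are the onset potentials and PL peak reported in the text).
E_ox, E_red = 1.28, -1.29          # onset potentials vs. Ag/Ag+ (V)
lambda_pl_nm = 543.0               # PL peak of 4CzIPN in DCM (nm)

dH_annihilation = (E_ox - E_red) - 0.16   # eV available to the exciton
E_S1 = 1239.84 / lambda_pl_nm             # ~2.28 eV singlet energy estimate
verdict = "energy-sufficient" if dH_annihilation >= E_S1 else "energy-deficient"
print(f"annihilation enthalpy ~ {dH_annihilation:.2f} eV, "
      f"E(S1) ~ {E_S1:.2f} eV -> {verdict}")
```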
These observations indicate that the radical intermediate of BPAPTC generated under electroreduction, i.e., BPAPTC•−, is highly unstable, while the BPAPTC•+ counterpart generated under electrooxidation is stable enough. This deduction is also consistent with the CV characteristics (Supplementary Figure S1b). In short, in conjunction with the CV studies, the annihilation ECL studies of these two TADF molecules clearly reflect the intrinsic relationship between the CV characteristics and the ECL potential, intensity and stability: to achieve intense ECL emission in annihilation mode, a TADF molecule should possess satisfactory electrochemical reversibility.

Subsequently, coreactant ECL studies were conducted for the two TADF molecules, in which 4CzIPN or BPAPTC, coupled with a state-of-the-art coreactant, i.e., TPrA or BPO, was dissolved in DCM containing 0.1 M TBAPF₆ supporting electrolyte, and a scan rate of 100 mV/s was used for the ECL measurements (see the Experimental Sections for details). First, the results of oxidative-reduction ECL using 0.1 mM 4CzIPN/40 mM TPrA are shown in Figure 3. As depicted, compared with the bare GCE working electrode, the addition of 40 mM TPrA to the electrolytic cell sharply enhances the electrochemical oxidation current; in particular, once the potential exceeds +0.61 V (vs. Ag/Ag⁺), the anodic current increases markedly. According to earlier reports [44,45], this corresponds to the electrooxidation of TPrA. During this scan from 0 to 1.6 to 0 V, no detectable ECL signal is observed (Figure 3b). Likewise, the electrooxidation of 0.1 mM 4CzIPN alone does not lead to ECL emission (not shown here). By contrast, for the 0.1 mM 4CzIPN/40 mM TPrA couple, the anodic current increases distinctly once the potential exceeds +0.64 V and reaches its peak at ca. +1.19 V (vs. Ag/Ag⁺); in this case, both TPrA and 4CzIPN are electrochemically oxidized. Meanwhile, the ECL intensity grows steadily once the potential exceeds ca. 1.2 V and reaches its maximum at +1.6 V. Together, these results confirm the occurrence of coreactant ECL, in which the oxidized TPrA acts as the reducing agent, injecting electrons into the oxidized 4CzIPN radical intermediate, i.e., 4CzIPN•+, to generate excitons, followed by ECL emission via radiative decay. We further characterized the CV/ECL stability via 36 multicycle anodic scans of the 4CzIPN/TPrA system. As shown in Figure 3c, the anodic current is stable under these continuous cycling conditions; after the first seven cycles, the ECL intensity also becomes reasonably stable (Figure 3d). Notably, the ECL stability of this oxidative-reduction mode of 4CzIPN is distinctly better than that observed in annihilation ECL mode [18,29], where the inherently unstable 4CzIPN•+ largely limited the ECL stability. Our results indicate that this instability can be alleviated to a large extent by adding an effective coreactant, e.g., TPrA.
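Cycle-to-cycle stability of the kind shown in Figure 3c,d can be quantified, for example, as the relative standard deviation of the per-cycle ECL maxima once the first conditioning cycles are discarded. A minimal sketch, assuming the 36-cycle ECL trace is exported as a single column of equally spaced points; the file name and the equal-length-cycle assumption are hypothetical.

```python
import numpy as np

def cycle_stability(ecl, cycles, discard=7):
    """Split a continuous ECL trace into equal-length cycles and report the
    relative standard deviation (%) of the per-cycle maxima after discarding
    the first `discard` cycles (a hypothetical post-processing step)."""
    per_cycle = np.array_split(ecl, cycles)
    peaks = np.array([seg.max() for seg in per_cycle])[discard:]
    return 100.0 * peaks.std() / peaks.mean()

ecl = np.loadtxt("4CzIPN_TPrA_36cycles.txt")   # hypothetical one-column export
print(f"ECL peak RSD after the first 7 cycles: {cycle_stability(ecl, 36):.1f} %")
```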
In such a coreactant scheme, fast and efficient electron transfer between the coreactant, e.g., TPrA, and the unstable radical intermediates of the TADF luminophore, e.g., 4CzIPN•+, can accelerate exciton formation on the TADF luminophore, which is critical for enhancing ECL intensity and stability in practical sensing applications.

Reductive-oxidation ECL studies on 4CzIPN were further conducted using the traditional coreactant BPO. As shown in Figure 4a, no ECL signal is detected when the electrolytic cell containing only 25 mM BPO is biased from 0 to −1.7 to 0 V. However, for the system containing the 0.1 mM 4CzIPN/25 mM BPO couple in solution, ECL appears at −1.48 V and then reaches an extremely high peak value at ca. −1.7 V. These observations confirm reductive-oxidation ECL emission. Moreover, we note that the cathodic peak current of the 4CzIPN/BPO couple is located at −1.2 V, distinctly less negative than that of pure 4CzIPN, i.e., ca. −1.5 V. Zu et al. previously observed a similar feature and attributed it to an electrocatalytic effect of the coreactant on the luminophore [44]; in other words, it is very likely that strong charge-transfer interactions between the two species are involved during such cathodic scanning. Despite this phenomenon, the resulting cathodic ECL onset potential is still as high as −1.48 V, followed by a rapid increase in ECL intensity at more negative potentials. This confirms that sufficient electroreduction of both BPO and 4CzIPN is indispensable for this ECL emission.
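Onset potentials such as the −1.48 V value above are typically read off as the point where the ECL signal first rises clearly above the dark baseline. One simple way to automate this is a three-sigma threshold over the pre-onset baseline; the data file and the choice of criterion below are illustrative only, not the procedure used in this work.

```python
import numpy as np

def ecl_onset(potential, ecl, n_sigma=3, baseline_pts=50):
    """Estimate the ECL onset as the first potential at which the signal rises
    n_sigma standard deviations above the pre-onset baseline (one simple
    criterion among several used in practice)."""
    base = ecl[:baseline_pts]
    threshold = base.mean() + n_sigma * base.std()
    idx = np.argmax(ecl > threshold)   # index of the first point above threshold
    return potential[idx]

# Hypothetical two-column export of a cathodic ECL-vs-potential trace.
E, I_ecl = np.loadtxt("4CzIPN_BPO_cathodic.txt", unpack=True)
print(f"ECL onset ~ {ecl_onset(E, I_ecl):+.2f} V vs. Ag/Ag+")
```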
As calculated, this 0.1 mM 4CzIPN/25 mM BPO coreactant ECL system shows a high relative Φ_ECL of 197% (vs. the Ru(bpy)₃²⁺/TPrA reference, taken as 100%), which is far higher than the corresponding oxidative-reduction Φ_ECL of the 4CzIPN/TPrA couple, i.e., 12.0%. We speculate that the most likely reason for this difference in Φ_ECL is the different electrochemical activity and/or reversibility of 4CzIPN•+ and 4CzIPN•−. Regarding the CV and ECL stability of this efficient 4CzIPN/BPO couple, we performed successive CV and ECL scans (26 cycles). As shown in Figure 4c, the cathodic current is not stable: with increasing cycle number it gradually decreases, accompanied by a monotonic shift of the peak potential to more negative values. The detailed mechanism is still unclear and under study. The corresponding ECL intensity, however, appears stable enough; in particular, after the first seven cycles, the ECL intensity levels off without any noticeable fluctuations.
Oxidative-reduction ECL was further constructed using the 0.1 mM BPAPTC/25 mM TPrA couple, which displays typical coreactant ECL behavior. No detectable ECL is observed when the system contains only BPAPTC or only TPrA; only in the presence of both BPAPTC and TPrA is significant ECL emission seen (Figure 5a,b). Very meaningfully, the ECL signal begins to rise at an onset potential of 0.46 V and then reaches a first peak at +0.62 V and a second peak at +1.19 V. This ECL onset potential is distinctly lower than that of the 4CzIPN/TPrA couple shown in Figure 3 and is even the lowest among all ECL systems reported so far using TADF emitters [12,18,19,29-31,33,34]. We attribute this low ECL onset potential mainly to the distinctly lowered electrochemical oxidation potential of BPAPTC (Supplementary Figure S1b); moreover, the oxidation potentials of BPAPTC and TPrA closely match each other, which is beneficial for coreactant ECL emission at a much lower potential. As calculated, this 0.1 mM BPAPTC/25 mM TPrA coreactant ECL system shows a high relative Φ_ECL of 116% (vs. 100% for the Ru(bpy)₃²⁺/TPrA reference). It is also interesting to observe two ECL peaks, similar to the well-known Ru(bpy)₃²⁺/TPrA system [44,45], in which the evolution of the two ECL peaks was shown to depend on the concentrations of Ru(bpy)₃²⁺ and TPrA; different experimental conditions trigger different subprocesses of these coreactant systems, which explains the origins of the two observed ECL peaks [45]. Similar mechanistic studies are in progress for the BPAPTC/TPrA couple and will be disclosed elsewhere. As shown in Figure 5c, the anodic CV scan of this couple is stable over 30 cycles, accompanied by stable ECL emission (Figure 5d), which we attribute to the high reversibility of BPAPTC under electrochemical oxidation (Supplementary Figure S1b).
ECL Spectra and Mechanisms of Coreactant TADF-ECL

Figure 6 depicts the PL and ECL spectra of the two TADF luminophores. Evidently, the PL and ECL spectra of the same TADF luminophore are essentially identical. Compared with the PL spectra in DCM, the ECL spectra of the coreactant ECL couples show slight redshifts, which arise from a polarity effect of the supporting electrolyte, i.e., TBAPF₆ (see Supplementary Figure S2). To directly confirm the TADF nature of the emission in the coreactant ECL mode involving TPrA or BPO, the ECL spectra should be identical to the intrinsic PL spectra of the TADF luminophores concerned, and the present results clearly confirm this point. Previously, we noticed that the coreactant ECL spectra of TADF-polymer-modified GCE configurations [19,30] and of aqueous ECL using TADF aggregates or TADF nanoencapsulated emitters [12,33,34] differed significantly from their PL counterparts; in general, noticeable differences in λ_peak and/or the full width at half maximum (FWHM) of the PL and ECL spectra were observed in those reports and were attributed to interfering factors such as differences in polarization [46], the involvement of surface-state transitions [47], or others. The implementation of the present simplified solution-state coreactant ECL study rules out these interfering possibilities and directly confirms, for the first time, the effectiveness of TADF emission under such coreactant ECL driving.
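The λ_peak/FWHM comparison invoked above is easy to make quantitative. A minimal sketch, assuming each spectrum is exported as a two-column wavelength/intensity file with a single well-behaved peak (file names are hypothetical):

```python
import numpy as np

def peak_and_fwhm(wl, intensity):
    """Peak wavelength and full width at half maximum of a spectrum, via
    linear interpolation at the half-maximum crossings. Assumes the spectrum
    rises and falls monotonically around a single peak."""
    i_max = intensity.argmax()
    half = intensity[i_max] / 2.0
    left = np.interp(half, intensity[:i_max + 1], wl[:i_max + 1])
    right = np.interp(half, intensity[i_max:][::-1], wl[i_max:][::-1])
    return wl[i_max], right - left

for label, fname in [("PL", "BPAPTC_PL.txt"), ("ECL", "BPAPTC_TPrA_ECL.txt")]:
    wl, inten = np.loadtxt(fname, unpack=True)   # hypothetical exports
    peak, fwhm = peak_and_fwhm(wl, inten)
    print(f"{label}: lambda_peak = {peak:.0f} nm, FWHM = {fwhm:.0f} nm")
```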
The mechanisms of coreactant ECL featuring TADF emission (TADF-ECL) are schematically shown in Figure 7, based on the ECL studies and spectra described above and on the previously disclosed mechanisms of TPrA- or BPO-involved coreactant ECL [19,25,48]. For oxidative-reduction TADF-ECL, we take the BPAPTC/TPrA system as the example (Figure 7a). Under electrochemical oxidation, holes are injected directly into the highest occupied molecular orbital (HOMO) of BPAPTC, while electrons are injected from TPrA• into the lowest unoccupied molecular orbital (LUMO). The Coulombic interaction between the holes and electrons on BPAPTC then leads to exciton generation, followed by ECL emission via the TADF mechanism. For reductive-oxidation TADF-ECL, we take the 4CzIPN/BPO system as the example (Figure 7b). Under electroreduction, electrons are injected directly into the LUMO of 4CzIPN, while holes are injected indirectly from the C₆H₅CO₂• radical intermediate into the HOMO of 4CzIPN. Excitons (Frenkel type) are subsequently generated on the 4CzIPN molecules, followed by ECL emission via the TADF mechanism. Thanks to the all-exciton-harvesting advantage of TADF emission, both electrochemically generated singlet and triplet excitons can decay radiatively in this coreactant TADF-ECL mode. It is therefore very worthwhile to further develop coreactant TADF-ECL featuring low ECL potentials and high Φ_ECL towards a wide range of applications.

Conclusions

In conclusion, we present a detailed study on liquid-state ECL of two common TADF molecules in an organic medium.
CV, annihilation ECL and coreactant ECL measurements were performed and discussed, with the purpose of establishing clearer relationships between the potential/activity/stability of the different radical intermediates of the TADF molecules and the resulting ECL potentials, efficiencies (Φ_ECL) and stabilities in coreactant ECL mode. The conclusions are as follows: (i) it is highly feasible to realize a low ECL potential for coreactant TADF-ECL by choosing TADF luminophores featuring low redox potentials; in this work, a satisfactory ECL onset potential as low as +0.46 V (vs. Ag/Ag⁺) is achieved for BPAPTC; (ii) the redox reversibility of a TADF molecule largely determines the activity/stability of its electrogenerated radical intermediates and the resulting ECL efficiency; (iii) if the radical intermediates of a TADF luminophore are unstable, e.g., 4CzIPN•+, its ECL efficiency and stability can be greatly promoted by using the coreactant ECL mode with a suitable coreactant. Moreover, it is confirmed for the first time that the ECL spectra of these coreactant ECL systems are identical to the PL spectra of the TADF luminophores, proving that these coreactant ECL systems emit light via the TADF emitters themselves rather than via other, more complex emission routes. The performance and application prospects of TADF-ECL are therefore promising. More in-depth mechanistic studies of TADF-ECL have been performed and will be disclosed elsewhere.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/molecules27217457/s1. More details about the experimental sections; Figure S1: CV results of 4CzIPN and BPAPTC; Figure S2: PL spectra of pure 4CzIPN, the mixture of 4CzIPN and TBAPF₆, and the mixture of 4CzIPN, TBAPF₆ and BPO (a); PL spectra of pure BPAPTC, the mixture of BPAPTC and TBAPF₆, and the mixture of BPAPTC, TBAPF₆ and TPrA (b).